The Economics of Information, Communication, and Entertainment:
The Impacts of Digital Technology in the 21st Century

Series Editor: Darcy Gerbarg, New York, NY, USA

For further volumes:
http://www.springer.com/series/8276

James Alleman, Áine Marie Patricia Ní-Shúilleabháin and Paul N. Rappoport (Editors)

Demand for Communications Services - Insights and Perspectives
Essays in Honor of Lester D. Taylor
Editors

James Alleman
College of Engineering and Applied Science
University of Colorado—Boulder
Boulder, CO, USA

Paul N. Rappoport
Department of Economics
Temple University
Philadelphia, PA, USA

Áine Marie Patricia Ní-Shúilleabháin
Columbia Institute for Tele-Information
Columbia Business School
New York, NY, USA
ISSN 1868-0453          ISSN 1868-0461 (electronic)
ISBN 978-1-4614-7992-5          ISBN 978-1-4614-7993-2 (eBook)
DOI 10.1007/978-1-4614-7993-2
Springer New York Heidelberg Dordrecht London

Library of Congress Control Number: 2013951159

© Springer Science+Business Media New York 2014
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or
information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed. Exempted from this legal reservation are brief
excerpts in connection with reviews or scholarly analysis or material supplied specifically for the
purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the
work. Duplication of this publication or parts thereof is permitted only under the provisions of
the Copyright Law of the Publisher’s location, in its current version, and permission for use must
always be obtained from Springer. Permissions for use may be obtained through RightsLink at the
Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for
any errors or omissions that may be made. The publisher makes no warranty, express or implied, with
respect to the material contained herein.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Contents

Overview: The Future of Telecommunications, Media, and Technology . . . ix
Áine Marie Patricia Ní-Shúilleabháin, James Alleman and Paul N. Rappoport

Prologue I: Research Demands on Demand Research . . . xv
Eli Noam

Prologue II: Lester Taylor's Insights . . . xxv
Timothy J. Tardiff and Daniel S. Levy

Part I  Advances in Theory

1  Regression with a Two-Dimensional Dependent Variable . . . 3
   Lester D. Taylor

2  Piecewise Linear L1 Modeling . . . 17
   Kenneth O. Cogger

Part II  Empirical Applications: Information and Communication Technologies

3  "Over the Top": Has Technological Change Radically Altered the Prospects for Traditional Media? . . . 33
   Robert W. Crandall

4  Forecasting Video Cord-Cutting: The Bypass of Traditional Pay Television . . . 59
   Aniruddha Banerjee, Paul Rappoport and James Alleman

5  Blended Traditional and Virtual Seller Market Entry and Performance . . . 83
   T. Randolph Beard, Gary Madden and Md. Shah Azam

6  How Important is the Media and Content Sector to the European Economy? . . . 113
   Ibrahim Kholilul Rohman and Erik Bohlin

7  Product Differences and E-Purchasing: An Empirical Study in Spain . . . 133
   Teresa Garín-Muñoz and Teodosio Pérez-Amaral

8  Forecasting the Demand for Business Communications Services . . . 153
   Mohsen Hamoudia

9  Residential Demand for Wireless Telephony . . . 171
   Donald J. Kridel

Part III  Empirical Applications: Other Areas

10  Pricing and Maximizing Profits Within Corporations . . . 185
    Daniel S. Levy and Timothy J. Tardiff

11  Avalanche Forecasting: Using Bayesian Additive Regression Trees (BART) . . . 211
    Gail Blattenberger and Richard Fowles

Part IV  Evidence-Based Policy Applications

12  Universal Rural Broadband: Economics and Policy . . . 231
    Bruce Egan

13  Who Values the Media? . . . 255
    Scott J. Savage and Donald M. Waldman

14  A Systems Estimation Approach to Cost, Schedule, and Quantity Outcomes . . . 273
    R. Bruce Williamson

Part V  Conclusion

15  Fifty Years of Studying Economics . . . 291
    Lester D. Taylor

16  Concluding Remarks . . . 305
    Áine M. P. Ní-Shúilleabháin, James Alleman and Paul N. Rappoport

Appendix: The Contribution of Lester D. Taylor Using Bibliometrics . . . 307
Sharon G. Levin and Stanford L. Levin

Biographies . . . 315

Index . . . 325
Overview: The Future of Telecommunications,
Media, and Technology
Introduction
This book grew out of a conference organized by James Alleman and Paul Rappoport, held on 10 October 2011 in Jackson Hole, Wyoming, in honor of the work of Lester D. Taylor. The conference lasted just one weekend, but the papers are more durable.1
We begin with two Prologues; the first is written by Eli M. Noam. He focuses
on demand analysis for media and communication firms. He notes that demand
analysis in the information sector must recognize the ‘public good’ characteristics
of media products and networks, while taking into account the effects of
interdependent user behavior; the strong cross-elasticities in a market; as well as
the phenomenon of supply creating its own demand.
Noam identifies several challenges. The first involves privacy concerns, since companies do not want to share their data. The second is that research and analytical data collection are falling behind. The third is the demonstrable lack of linkage between economic and behavioral data. The fourth major hurdle is the lack of bridges from the academic world of textbook consumer-demand theory to the practical empirical world of media researchers.
The second Prologue by Timothy Tardiff and Daniel Levy focuses more
narrowly on Lester Taylor’s body of work, in particular its practical applications
and usefulness in analyses of, and practices within, the ICT sector.
The remainder of this book is divided into four parts: Advances in Theory;
Empirical Applications; Evidence-Based Policy Applications; and a final Conclusion. The contents of these Parts are discussed in detail below.
The book closes with an Appendix by Sharon Levin and Stanford Levin
detailing the contributions of Professor Taylor using Bibliometrics.
Ordinarily, Festschriften contain contributions only from the honoree's students and followers; this book, however, is blessed with two contributions from Professor Taylor himself. The first is a seminal contribution to demand estimation when log-log transformations are inappropriate because of the existence of negative values within the data set. The second, serving as part of the Conclusion, provides insight into economics from Taylor's fifty-plus years in the field.

1 Thanks are due to the Columbia Institute for Tele-Information (CITI) and the International Telecommunications Society (ITS) for sponsorship; to Mohsen Hamoudia, for sponsorship on behalf of the International Institute of Forecasters (IIF); to Centris; and to the authors for a riveting set of papers, as summarized above and collected herein.
Advances in Theory
Lester Taylor develops a procedure for dealing with dependent variables that cannot be given the traditional log-log transformation in regression models. Rather, he suggests representing such dependent variables in polar coordinates. Two-equation models can then be specified, with estimation proceeding in terms of functions involving sines, cosines, and radius vectors. Taylor's approach permits generalization to higher dimensions and can be applied in circumstances in which values of the dependent variable are points in the complex plane.
Kenneth Cogger demonstrates how piecewise linear models may be estimated under the L1 criterion, which minimizes the sum of absolute errors, as distinct from the ordinary least squares (OLS) criterion. He introduces the Quantile Regression program, which uses Mixed Integer Linear Programming; if an OLS criterion is desired instead, a Mixed Integer Quadratic Programming approach may prove useful.
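To make the L1 criterion concrete, here is a minimal sketch of plain least-absolute-errors regression cast as a linear program. It is not Cogger's piecewise MILP formulation, only the underlying L1 fit; NumPy and SciPy (with the "highs" solver) are assumed, and all data are simulated.

```python
# Hedged sketch: plain L1 regression as a linear program (simulated data).
import numpy as np
from scipy.optimize import linprog

def l1_regression(X, y):
    """Minimize sum |y - X b| by writing each residual as u - v with u, v >= 0."""
    n, k = X.shape
    c = np.concatenate([np.zeros(k), np.ones(n), np.ones(n)])  # cost on u and v only
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])               # X b + u - v = y
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)        # b free; u, v nonnegative
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.uniform(0, 10, 200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=2, size=200)  # heavy-tailed noise
print(l1_regression(X, y))  # close to [1, 2]; OLS is noisier under these errors
```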
Empirical Applications: Information and Communication
Technologies
Robert Crandall investigates the impact of recent changes in the Telecommunication, Media, and Technology (TMT) sector on participants in the traditional
media sector. He focuses on empirical evidence on how changes in equipment and
access to media have affected consumers’ time management. He examines the
effects of the profound changes taking place in the TMT sector on the economic
prospects of the variety of firms engaged in traditional communications and media.
Crandall demonstrates that while market participants may currently recognize the
threats posed to traditional media companies, the disruptions have been relatively
modest thus far—and have had little apparent effect on the financial market’s
assessment of the future of media companies.
Aniruddha Banerjee, Paul N. Rappoport, and James Alleman report on efforts to forecast the effect of consumer choices on the future of video cord-cutting. Their chapter presents evidence on household ownership of OTT (Over-the-Top)-enabling devices as well as subscription to OTT services, and forecasts the effects of both phenomena on traditional video. It also examines how consumers' OTT choices are determined by household geo-demographic characteristics, device ownership, and subscription history. Ordered logit regressions are used to analyze and forecast future choices of devices and services, and to estimate switching probabilities for OTT substitution by different consumer profiles.
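As a purely illustrative sketch of the ordered logit machinery (the authors' survey data are not public, and every variable name below is invented), one could proceed along these lines with statsmodels:

```python
# Hypothetical ordered logit of cord-cutting propensity on simulated data.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "income": rng.normal(60, 15, n),       # household income, $000 (made up)
    "ott_devices": rng.integers(0, 4, n),  # count of OTT-enabling devices (made up)
})
# Latent propensity: rises with device ownership, falls with income.
latent = 0.8 * df["ott_devices"] - 0.02 * df["income"] + rng.logistic(size=n)
df["choice"] = pd.cut(latent, [-np.inf, -1, 1, np.inf],
                      labels=["keep", "trim", "cut"])  # ordered categories

model = OrderedModel(df["choice"], df[["income", "ott_devices"]], distr="logit")
print(model.fit(method="bfgs", disp=False).summary())
```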
T. Randolph Beard, Gary Madden, and Md. Shah Azam focus upon analysis of
blended ‘bricks and clicks’ firms, as well as on virtual firms lacking any presence
offline. Their study utilizes a data set of small Australian firms, and examines the
relationship between the strategic motivation for entry, and the results of entry.
Utilizing a bivariate ordered probit model with endogenous dummy variables, they account for the endogeneity of firms' strategic goals and implicitly estimate the parameters of the post-entry business. Their study finds that the goal of the firm
materially affects subsequent performance: firms entering to expand their market
size ordinarily succeed, but those entering to reduce costs do not. Blended firms
enjoy no strong advantages over pure online entrants.
Ibrahim Kholilul Rohman and Erik Bohlin focus upon the media and content
sub-segments of the Information and Communication Technology (ICT) sector—
as defined by the OECD—and in tandem with the ISIC classification of these two
components. The media and content sector—by these definitions—consists of the
printing industry, motion pictures, video and television, music content, games
software, and online content services. The authors aim to measure the contribution
of the media and content sector in driving economic output in the European
economies, and to decompose the change of output into several sources. This paper
aims to forecast the impact of a reduction in price on national GDP. The main
methodology in this study is the Input–Output (IO) table. The study reveals that
the price elasticity of media and content sectors to GDP is approximately 0.17 %.
The impact varies across countries, but France, Sweden, and Norway were found to have the highest elasticity coefficients. The study also found that price reductions mainly affect the financial sector, together with the manufacturing of ICT products, besides the media and content sectors themselves.
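The core Input-Output calculation behind such an exercise is the Leontief inverse. The sketch below uses an invented three-sector economy purely to show the mechanics, not the chapter's actual European IO tables.

```python
# Input-Output logic: with technical-coefficient matrix A and final demand f,
# gross output is x = (I - A)^{-1} f. All numbers are made up for illustration.
import numpy as np

A = np.array([[0.10, 0.05, 0.02],   # media & content
              [0.15, 0.20, 0.10],   # ICT manufacturing
              [0.05, 0.10, 0.15]])  # rest of the economy
f = np.array([50.0, 120.0, 800.0])  # final demand by sector

L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse: total requirements
x = L @ f
print(x)                            # gross output needed to satisfy final demand
print(L @ np.array([1.0, 0, 0]))    # output multipliers of one unit of media demand
```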
Teresa Garín-Muñoz and Teodosio Pérez-Amaral identify key determinants of online shopping in Spain. They model how socio-demographic variables, attitudes, and beliefs toward internet shopping affect both the decision to shop online and the intensity of its use. Three different samples of internet users are defined as separate groups: those who purchase online (buyers); those who look for information online but purchase in stores (browsers); and those who do not shop online at all (non-internet shoppers). Logit models are used to select the factors useful for assessing the propensity to shop online.
Donald J. Kridel focuses upon residential demand for wireless telephony as it
continues to grow at a dramatic rate. His chapter analyzes the residential demand
for wireless telephony using a sample of surveyed US households. Using a large
data set with over 20,000 observations, Kridel estimates a discrete-choice model
for the demand for wireless telephony. Preliminary elasticity estimates indicate
that residential wireless demand is price-inelastic.
Daniel S. Levy and Timothy J. Tardiff focus upon pricing and maximizing profits within corporations. Their chapter addresses examples of how particular businesses establish prices and improve profitability. Levy and Tardiff focus upon issues identified in Lester Taylor's research, such as theoretical and practical approaches to methodological issues including the endogeneity of supply and demand; how cross-elasticities among products offered by the same company are addressed when improving profitability; how to deal with competing products in establishing prices; and how to define and measure costs.
Gail Blattenberger and Richard Fowles provide another focused example of applied empirical research. Whether or not an avalanche crosses a heavily traveled, dangerous road to two popular ski resorts in Utah is modeled via Bayesian additive regression tree methods. Utilizing daily winter data from 1995 to 2010, their results demonstrate that Bayesian tree analysis outperforms traditional statistical methods in terms of realized misclassification costs that account for asymmetric losses arising from Type 1 and Type 2 errors.
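BART itself is not reproduced here, but the asymmetric-cost logic is easy to illustrate: given predicted avalanche probabilities from any model, choose the classification threshold that minimizes realized misclassification cost when a missed avalanche (false negative) is far costlier than a false alarm. All numbers below are invented.

```python
# Cost-sensitive threshold choice under asymmetric misclassification losses.
import numpy as np

rng = np.random.default_rng(2)
p_hat = rng.uniform(0, 1, 5000)          # stand-in predicted probabilities
y = rng.uniform(0, 1, 5000) < p_hat      # outcomes simulated to be consistent with p_hat

C_FN, C_FP = 50.0, 1.0                   # assumed costs: a miss vs a false alarm
thresholds = np.linspace(0.01, 0.99, 99)
costs = [C_FN * np.sum(y & (p_hat < t)) + C_FP * np.sum(~y & (p_hat >= t))
         for t in thresholds]
best = thresholds[int(np.argmin(costs))]
print(best)  # near C_FP / (C_FP + C_FN) ~ 0.02, the decision-theoretic optimum
```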
Mohsen Hamoudia focuses upon the estimation and modeling of Business Communications Services (BCS), accounting for both demand and supply; the French market is used to illustrate the approach. This paper first provides a broad
overview of the demand for BCS: it briefly describes the scope of BCS products
and solutions, their structure, evolution, and key drivers. Then it presents the
specification and estimation of the demand and supply models and some
illustrative forecasts.
Multi-equation models are employed, with demand variables (eight), supply
variables (three), and independent variables (three). The models estimated include
dummy variables to accommodate qualitative effects—notably market openness.
Three-stage least squares—a combination of two-stage least squares and
seemingly unrelated regression—is employed for estimation of supply and
demand for BCS, allowing for endogeneity of function arguments. The results of
the analyses suggest that investigation of more complex lag structures than those
employed in this paper may be appropriate.
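A compact numerical sketch of the three-stage procedure (2SLS per equation, residual covariance estimation, then GLS across the stacked system) follows. The two-equation supply/demand system is simulated and stands in for, rather than reproduces, the chapter's BCS models.

```python
# Three-stage least squares on a simulated two-equation supply/demand system.
import numpy as np

rng = np.random.default_rng(3)
n = 500
z1, z2 = rng.normal(size=n), rng.normal(size=n)        # exogenous shifters
e = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], n)
# Demand: q = -1.0*p + 0.5*z1 + e1 ; Supply: q = 1.0*p + 0.8*z2 + e2
p = (0.5 * z1 - 0.8 * z2 + e[:, 0] - e[:, 1]) / 2.0    # reduced form for price
q = -1.0 * p + 0.5 * z1 + e[:, 0]

Z = np.column_stack([np.ones(n), z1, z2])              # all exogenous instruments
X = [np.column_stack([np.ones(n), p, z1]),             # demand regressors
     np.column_stack([np.ones(n), p, z2])]             # supply regressors
ys = [q, q]

Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)                 # projection onto instruments
Xh = [Pz @ Xi for Xi in X]

# Stage 2: per-equation 2SLS, then cross-equation residual covariance S.
b2 = [np.linalg.solve(Xhi.T @ Xi, Xhi.T @ yi) for Xhi, Xi, yi in zip(Xh, X, ys)]
U = np.column_stack([yi - Xi @ bi for Xi, yi, bi in zip(X, ys, b2)])
S = U.T @ U / n

# Stage 3: GLS on the stacked system with weight inv(S) (x) I.
W = np.kron(np.linalg.inv(S), np.eye(n))
Xs = np.block([[Xh[0], np.zeros((n, 3))], [np.zeros((n, 3)), Xh[1]]])
b3 = np.linalg.solve(Xs.T @ W @ Xs, Xs.T @ W @ np.concatenate(ys))
print(b3)  # roughly [0, -1, 0.5, 0, 1, 0.8]
```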
Evidence-Based Policy Applications
Bruce Egan addresses the matter of providing universal rural broadband service.
His chapter discusses the efficacy of investments to date (focusing upon the case of
the USA), and compares that with what might be achieved if coupled with rational
policies based upon economic welfare. Wyoming (USA) serves as a case study to
illustrate the vast differences in what is achievable versus actual (or likely) results.
Egan offers four main policy recommendations: eliminate entry barriers from rural
waivers; reduce tariffs for network interconnection; target direct subsidies in a
technologically neutral fashion; and reform spectrum regulations.
Scott Savage and Donald M. Waldman examine consumer demand for their local media environment (newspapers, radio, television, the internet, and smartphones). Details regarding the purchasing habits of representative consumers are provided, distinguishing among men and women, different age groups, and minorities and majorities, and examining their willingness to pay for community news, multiculturalism, advertising-free programs, and multiple valuation agendas.
R. Bruce Williamson analyzes cost, schedule, and quantity outcomes in a sample of major defense acquisition programs since 1965. A systems approach with Seemingly Unrelated Regression (SUR) is applied to explore cross-equation correlated error structures and to compare results with those of single-equation models. The author finds that SUR coefficient estimates improve upon single-equation regression results for cost growth, schedule slippage, and quantity changes. These results support the proposal that SUR can enhance the quality and reliability of weapons acquisition program analysis for the US Department of Defense.
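The SUR step can be isolated in a few lines: OLS per equation, estimate the cross-equation residual covariance, then FGLS on the stacked system. The two outcome equations below are invented stand-ins for the chapter's cost, schedule, and quantity equations.

```python
# Feasible GLS implementation of SUR on simulated data with correlated errors.
import numpy as np

rng = np.random.default_rng(6)
n = 300
x1, x2 = rng.normal(size=n), rng.normal(size=n)
e = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], n)
y1 = 1.0 + 2.0 * x1 + e[:, 0]          # e.g., cost growth
y2 = -0.5 + 1.5 * x2 + e[:, 1]         # e.g., schedule slippage

X = [np.column_stack([np.ones(n), x1]), np.column_stack([np.ones(n), x2])]
ys = [y1, y2]
b_ols = [np.linalg.lstsq(Xi, yi, rcond=None)[0] for Xi, yi in zip(X, ys)]
U = np.column_stack([yi - Xi @ bi for Xi, yi, bi in zip(X, ys, b_ols)])
S = U.T @ U / n                        # cross-equation error covariance

W = np.kron(np.linalg.inv(S), np.eye(n))
Xs = np.block([[X[0], np.zeros((n, 2))], [np.zeros((n, 2)), X[1]]])
b_sur = np.linalg.solve(Xs.T @ W @ Xs, Xs.T @ W @ np.concatenate(ys))
print(b_sur)  # ~ [1, 2, -0.5, 1.5]; gains over OLS come from the 0.7 error correlation
```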
Conclusion
Lester Taylor focuses as an economist upon a set of principal relationships that
currently face the US, and the world at large. He discusses important concepts
from the theory of consumer choice; unappreciated contributions of Keynes in the
General Theory; fluid capital and fallacies of composition; transfer problems; and
lessons learned from the financial meltdown of 2007–2009.
Áine M. P. Ní-Shúilleabháin, James Alleman, and Paul N. Rappoport conclude,
focusing upon future research: where we go from here.
Appendix
Sharon Levin and Stanford Levin provide a biography of Professor Taylor and assess his extensive contributions to consumer and telecommunications demand, using bibliometric techniques that go beyond the traditional counting of publications and citations. As one might anticipate, his contributions are extensive.
Áine Marie Patricia Ní-Shúilleabháin
James Alleman
Paul N. Rappoport
Prologue I: Research Demands
on Demand Research
Overview
This should be the golden age of demand research. Many of the past's constraints on data collection have been relaxed. Yet the methodologies of demand analysis created by thought leaders such as Lester Taylor (1972, 1974; Houthakker and Taylor 2010) have not advanced at the same pace and are holding back our understanding and power of prediction.
Demand research is, of course, highly important. On the macro-level, governments and businesses need to know what to expect by way of aggregate or sectoral
demand, such as for housing or energy. On the micro-level, every firm wants to
know who its potential buyers are, their willingness to pay, their price sensitivity,
what product features they value, and what they like about competing products
(Holden and Nagle 2001).
Yet it is always difficult to determine demand. It is easy to graph a hypothetical
curve or equation in the classroom but hard to determine the real world nature of
demand and the factors that go into it.
Demand analysis is particularly important and difficult for media and communications firms (Kim 2006; McBurney et al. 2002; Green et al. 2002; Taylor and
Rappoport 1997; Taylor et al. 1972). They must grapple with high investment needs
ahead of demand, a rapid rate of change in markets and products, and an instability
of user preferences. Demand analysis in the information sector must recognize the
‘‘public good’’ characteristics of media products and networks, while taking into
account the effects of interdependent user behavior, the strong cross-elasticities in a
market, as well as the phenomenon of supply creating its own demand.
There is a continuous back-and-forth between explanations of whether ‘‘powerful suppliers’’ or ‘‘powerful users’’ determine demand in the media sector.
Research in the social sciences has not resolved this question (Livingstone 1993).
On one side of this debate is the ‘‘Nielsen Approach,’’ where the power is seen to
lie with the audience. User preferences govern and it is up to the media companies
to satisfy these preferences (Stavitsky 1995). Demand creates supply. The other
side of the debate is the "Marketing" or "Madison Avenue Approach." In this view, the power to create and determine demand lies with the media and communications firms themselves and the marketing messages they present (Bagdikian 2000). Supply creates demand.
In contrast to most other industries, demand measurement techniques affect
firms’ bottom lines, directly and instantly, and hence are part of public debate and
commercial disputes. When, for television, a transition from paper diaries to
automatic people meters took place in 1990, the effect of people meters on the
bottom line was palpable. The introduction of people meters permanently lowered
overall TV ratings by an average of 4.5 points. Of the major networks, CBS lost
2.0 points and NBC showed an average loss of 1.5 points. In New York City, Fox
5, UPN 9, and WB 11 showed large drops. Ratings for cable channels showed a
gain of almost 20 percent (Adams 1994). In 1990, each ratings point was worth
approximately $140 million/year. The decrease in ratings would have cost major
networks between $400 and $500 million annually. Thus, demand analysis can
have an enormous impact on a business in the media and communications sector.
The forecasting of demand creates a variety of issues. One can divide these
problems into two broad categories. ‘‘Type I Errors’’ exist when the wrong action
is taken. In medicine this is called a ‘‘false positive.’’ Human nature mercifully
contains eternal optimism but this also clouds the judgment of many demand
forecasts. By a wide margin entrepreneurs overestimate the demand for products
rather than underestimate it. Eighty percent of films, music, recordings, and books
do not break-even. Observe AT&T’s 1963 prediction that, ‘‘there will be 10
million picture phones in use by US households in 1980,’’ yet in 1980 picture
phones were little more than a novelty for a few and remained so for decades
(Carey and Elton 2011). Another example of this type of error can be seen in the
view summarized in 1998 by the Wall Street Journal that ‘‘The consensus forecast
by media analysts is of 30 million satellite phone subscribers by 2006.’’ In actuality, their high cost and the concurrent advancements in terrestrial mobile networks have relegated satellite phones to small niches only.
The second category of demand forecasting mistakes is a ‘‘Type II Error,’’
when the correct action is not being taken. This is a ‘‘false negative.’’ There are
multiple historical examples in the media and communications sector. In 1939, the
New York Times reported that television could never compete with radio since it
requires families to stare into a screen. Thomas Watson, chairman of IBM, proclaimed in 1943, ‘‘I think there is a world market for maybe five computers.’’
Today there are two billion computers, not counting smartphones and other
‘‘smart’’ devices. In 1977, Ken Olsen, President of the world’s number two
computer firm Digital Equipment Corporation, stated, ‘‘There is no reason anyone
would want a computer in their home.’’ A 1981 McKinsey study for AT&T
forecast that there would only be 900,000 cell phones in use worldwide by the year
2000. In reality, in that year there were almost one billion cell phones in use and
three billion in 2011.
Prologue I: Research Demands on Demand Research
xvii
Major Stages of Demand Analysis
When it comes to demand analysis the two major stages are data collection and
data interpretation. Data used to be gathered in a leisurely fashion with interviews,
surveys, focus groups, and test marketing. Television and radio audiences were
tracked through paper diaries. This data sample was small yet expensive, unreliable, and subject to manipulation. Four times a year, during the ‘‘sweeps’’ periods,
the audiences of local stations were measured based on samples of 540 households
subject to a barrage of the networks’ most attractive programs. Other media data
was collected through the self-reporting of sales. This is still practiced by newspapers and magazines, and is notoriously unreliable. For book best-seller lists,
stores are sampled. This, too, has been subverted. Two marketing consultants spent
30,000 dollars in purchases of their own book and were able to profitably propel it
into the bestseller list. For film consumption attendance figures are reported
weekly. According to the editor of a major entertainment magazine, these numbers
are ‘‘made up—fabricated every week.’’ In parallel to these slow and unreliable
data collection methods, analytical tools were similarly time-insensitive: they were
lengthy studies using methodologies that could not be done speedily. In fact, many
academic demand models could never be applied realistically at all. They included
variables and information that were just not available. And the methodologies
themselves had shortcomings.
Estimation Models
The major methodological approaches to demand estimation include econometric
modeling, conjoint analysis, and diffusion models. Econometric estimations are usually built around economic variables yielding the elasticities for price and income, as well as socio-demographic control variables (Cameron 2006). The price of substitutes is often used. There are generic statistical problems in any econometric estimation, such as serial correlation, multicollinearity, heteroscedasticity, lags, and exogeneity (Farrar and Glauber 1967). Moreover, predicting the future requires the
assumption that behavior in the future is similar to behavior in the past.
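For concreteness, here is the workhorse version of such an estimation: a log-log regression on simulated data, where the coefficient on log price is the price elasticity. statsmodels is assumed available, and all values are invented.

```python
# Log-log demand regression: the slope on log price is the price elasticity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 400
log_p = rng.normal(0, 0.3, n)                    # log price
log_inc = rng.normal(10, 0.5, n)                 # log income
log_q = 2.0 - 0.7 * log_p + 0.4 * log_inc + rng.normal(0, 0.2, n)

X = sm.add_constant(np.column_stack([log_p, log_inc]))
fit = sm.OLS(log_q, X).fit()
print(fit.params)  # coefficient on log price ~ -0.7: the price elasticity of demand
```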
One needs to choose and assume a specific mathematical model for the relationship between price, sales, and the variables. If the specification is incorrect the
results will be misleading. Examples of this are several demand estimation models
for newsprint, the paper used by daily newspapers. This demand estimation is of
great importance to newspaper companies who need to know and plan for the price
of their main physical input. It is also of great importance to paper and forestry
companies who must make long-term investments in tree farming.
Here is how the different models described the past and projected the future (Hetemäki and Obersteiner 2002), and how they compared with subsequent reality. One
model is that of the United Nation’s Food and Agriculture Organization (FAO).
Fig. 1 Forecasts for newsprint consumption in the US, 1995–2020, various models (Hetemäki and Obersteiner 2002)
Another is that of the Regional Planning Association (RPA). And there are seven
other models.
As one can see, the models, though using the same past data from 1970 to 1993, thereafter radically diverge and predict values for 2010 ranging from about 11 million tons to 16.4 million tons. The gap widens to over 130 % by 2020, making the models essentially useless as forecasting tools for decision makers in business and policy. On top of that, none of the models could predict the decline of newspapers due to the internet, as shown by the 'x' markings on the graph. The actual figures were 5.4 million tons for 2010 and about 4.3 million tons for 2011—literally off the chart of the original estimates. The worst of the predictions is the UN's, an authoritative forecast that is a basic input into many countries' policy making, as well as into global conferences assessing pressure on resources.
Econometric models have also been employed by the film industry. They have
tried to create some black-box demand models to aid in the green-lighting of film
projects. Essentially, coefficients are estimated for a variety of variables such as genre (e.g., science fiction), stars (e.g., Angelina Jolie), plot (e.g., happy endings), and director (e.g., Rob Reiner), among other factors. These models include the Motion Picture Intelligencer (MIP), MOVIEMOD, and others (Eliashberg et al. 2000;
Wood 1997). Such models are proprietary and undisclosed, but even after
employing them, films still bomb at an 80 % rate.
A second traditional empirical methodology for demand analysis has been
conjoint analysis (Green and Rao 1971; Allison et al. 1992). This method permits
the researcher to identify the value (utility) that a consumer attaches to various
product attributes. The subject is asked to select among different bundles of
attributes and one measures the trade-off in the utility of such attributes. There is
not much theory in conjoint analysis, but it is a workable methodology.
An example is an attribute-importance study for MP3 players. On a scale of
1–10, people’s preference weights were found to be quality: 8.24; styling: 6.11;
price: 2.67; user friendliness: 7.84; battery life: 4.20; and customer service: 5.66.
These weights enable the researcher to predict the price a consumer would pay for products with various combinations of attributes (Nagle and Holden 1995). There are computer packages that generate an optimal set of trade-off questions and interpret the results. But the accuracy of this technique is debatable.
People rarely decompose a product by its features and are likely to be more
affected by generic perspectives such as brand reputation or recommendations, not
by feature trade-offs.
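The additive logic the critique refers to is simple to state in code. The toy below combines the stated MP3 attribute weights with invented 0-1 attribute levels for two hypothetical players:

```python
# Additive conjoint utility: weighted sum of attribute levels (levels are made up).
weights = {"quality": 8.24, "styling": 6.11, "price": 2.67,
           "user friendliness": 7.84, "battery life": 4.20, "customer service": 5.66}

player_a = {"quality": 0.9, "styling": 0.4, "price": 0.3,
            "user friendliness": 0.8, "battery life": 0.6, "customer service": 0.5}
player_b = {"quality": 0.6, "styling": 0.9, "price": 0.8,
            "user friendliness": 0.5, "battery life": 0.7, "customer service": 0.5}

def utility(levels):
    """Predicted utility of a bundle: sum of weight * level over attributes."""
    return sum(weights[a] * levels[a] for a in weights)

print(utility(player_a), utility(player_b))  # ~22.28 vs ~22.27: a near tie
```

That the two players come out nearly tied, despite very different profiles, illustrates the critique above: the prediction hangs on small, arguable attribute scores.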
The third major empirical method for demand analysis is the epidemic diffusion model. Such a model is built on a logistic function such as $y(t) = 1/(1 + c\,e^{-kt})$. This technique, like the others, has its own inherent problems (Guo 2010). It is difficult to find the acceleration point and the "saturation level." The product is compared with, and its demand forecast from, some earlier product that is believed to have been similar.
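A minimal curve-fitting sketch, assuming SciPy and synthetic adoption data, shows the mechanics and why the saturation level is the fragile part: it must be extrapolated from early curvature. The parameter L generalizes the unit saturation level in the formula above.

```python
# Fit a logistic diffusion curve y(t) = L / (1 + c e^{-kt}) to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, c, k):
    return L / (1.0 + c * np.exp(-k * t))

t = np.arange(0, 15)
true = logistic(t, L=100, c=50, k=0.6)                 # assumed "true" diffusion path
obs = true + np.random.default_rng(4).normal(0, 2, t.size)

params, _ = curve_fit(logistic, t, obs, p0=[max(obs), 10, 0.5])
print(params)  # estimates of (L, c, k); L is the saturation level
```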
Thus, in the past, demand analysis was constrained by weak data and clunky analytical models. Recently, however, things have changed on the data collection end. Data have ceased to be the constraint they once were, as more advanced collection tools have emerged. First, there are now increasing ways to measure people's actual sensory perceptions of media content, and of products more generally. "Psycho-physiology" techniques measure heart rate (HR), brainwaves (electroencephalographic activity, EEG), skin perspiration (electrodermal activity, EDA), muscle reaction (electromyography, EMG), and breathing regularity (respiratory sinus arrhythmia, RSA) (see Ravaja 2004; Nacke et al. 2010). These
tools can be used in conjunction with audience perception analyzers, which are
hand-held devices linked to software and hardware that registers peoples’
responses and their intensity.
Second, the technology of consumer surveying has also improved enormously.
There are systems of automated and real-time metering. Radio and television
listening and channel-surfing can be followed in real time. Measuring tools are carried by consumers, such as the Portable People Meter (PPM) (Arbitron 2011; Maynard 2005). The TiVo box and the cable box allow for instant gathering of
large amounts of data. Music sales are automatically logged and registered; geographic real-time data is collected for the use of the internet, mobile applications,
and transactions (Roberts 2006; Cooley et al. 2002). Mobile Research, or
M-Research, uses data gathered from cell phones for media measurement and can
link it to locations. Radio-frequency identification (RFID) chips can track product
location (Weinstein 2005). Even more powerful is the matching of such data.
Location, transaction, media consumption, and personal information can be correlated in real-time (Lynch and Watt 1999). This allows, for example, the measurement in real-time of advertising effectiveness and content impact, and enables
sophisticated pricing and program design.
Thus, looking ahead, demand data measurement will be increasingly real-time,
global, composed of much larger samples, yet simultaneously more individualized.
This will allow for increasing accuracy in the matching of advertising, pricing, and
consumer behavior.
Of course, there are problems as data collection continues to improve (O’Leary
1999). The first challenge is the coordination and integration of these data flows
(Clark 2006). This is a practical issue (Deck and Wilson 2006). Companies are
working on solutions (Carter and Elliott 2009; Gordon 2007). Nielsen has launched a data service, Nielsen DigitalPlus (Gorman 2009), which integrates set-top box data with People Meter data, transaction data from Nielsen Monitor Plus, retail and scanning information from AC Nielsen, and modeling and forecasting information from several databases (Claritas, Spectra, and Bases). Nielsen intends to
add consumers’ activities on the internet and mobile devices into this mass of data.
The second challenge is that of privacy: the power of data collection has grown
to an extent that it is widely perceived to be an intrusive threat (Clifton 2011;
Matatov et al. 2010; Noam 1995). So there will be further legal constraints on data
collection, use, matching, retention, and dissemination.
The third problem is that when it comes to the use of these rich data streams,
academic and analytical research are falling behind (Holbrook et al. 1986;
Weinberg and Weiss 1986). When one looks at what economists in demand
research do these days, judging from the articles’ citations, they still show little
connectedness to other disciplines or to corporate demand research. There is a
weak appreciation of the literatures of academic marketing studies, of information
science on data mining (Cooley et al. 2002), of the behavioral sciences (Ravaja et al. 2008), of communications research (Zillmann 1988; Vorderer et al. 2004), and even
in the recent work by behavioral economists (Camerer 2004). There is little
connection to real-world demand applications—the work that Nielsen or Simmons
or the media research departments of networks do (Coffey 2001). Conversely, the
work process of Nielsen and similar companies seems to be largely untouched by
the work of academic economists, which is damning to both sides.
The next challenge is therefore to create linkage of economic and behavioral
data. Right now there is no strong link between economic and behavioral models and analysis. Behavioral economics is in its infancy (Kahneman 2003, 2012), and it
relies mostly on individualized, traditional, slowpoke data methods of surveys and
experiments. The physiologists’ sensor-based data techniques, mentioned above,
have yet to find a home in economic models or applied studies. There is also a
need to bridge the academic world of textbook theory of consumer demand with
the practical empirical work of media researchers.
Thus, the biggest challenge in moving demand studies forward is the creation of
new research methodologies. The more powerful data collection tools will push,
require, and enable the next generation of analytical tools. One should expect a
renaissance in demand analysis. Until it arrives one should expect frustration.
In short: What we need today, again, is a Lester Taylor.
Eli Noam
References
Adams WJ (1994) Changes in ratings patterns for prime time before, during and after the introduction of the people meter. J Media Econ 7(2):15–28
Allison N, Bauld S, Crane M, Frost L, Pilon T, Pinnell J, Srivastava R, Wittink D, Zandan P (1992) Conjoint analysis: a guide for designing and interpreting conjoint studies. American Marketing Association, Chicago
Arbitron (2011) The portable people meter system. http://www.arbitron.com/portable_people_meters/thesystem_ppm.htm. Accessed 1 June 2011
Bagdikian BH (2000) Media monopoly. Beacon Press, Boston
Camerer C (2004) Advances in behavioral economics. Princeton University Press, Princeton
Cameron S (2006) Determinants of the demand for live entertainment: some survey-based
evidence. Econ Issues 11(2):51–64
Carey J, Elton M (2010) When media are new: understanding the dynamics of new media
adoption and use. University of Michigan Press, Michigan
Carey J, Elton M (2011) Forecasting demand for new consumer services: challenges and
alternatives. New infotainment technologies in the home: demand-side perspectives.
Lawrence Erlbaum Associates, New Jersey, pp 35–57
Carter B, Elliott S (2009) Media companies seek rival for Nielsen ratings. New York Times. http://www.citi.columbia.edu/B8210/cindex.htm. Accessed 8 June 2011
Clark D (2006) Ad measurement is going high tech. Wall Street Journal. http://www.
utdallas.edu/*liebowit/emba/hightechmeter.htm. Accessed 1 June 2011
Clifton C (2011) Privacy-preserving data mining at 10: what’s next? Purdue University, West
Lafayette. http://crpit.com/confpapers/CRPITV121Clifton.pdf
Coffey S (2001) Internet audience measurement: a practitioner’s view. J Interact Advert 1(2):13
Cooley R, Deshpande M, Tan P (2002) Web usage mining: discovery and applications of usage
patterns from web data. SIGKDD Explor 18(2). http://www.sigkdd.org/explorations/
issues/1-2-2000-01/srivastava.pdf. Accessed 2 June 2011
Deck CA, Wilson B (2006) Tracking customer search to price discriminate. Econ Inq 44(2):280–295
Eliashberg J, Jonker J, Sawhney MS, Wierenga B (2000) MOVIEMOD: an implementable
decision-support system for prerelease market evaluation of motion pictures. Mark Sci
19(3):226–243
Farrar DE, Glauber RR (1967) Multicollinearity in regression analysis: the problem revisited. Rev Econ Stat 49(1):92–107
Gordon R (2007) ComScore, Nielsen report dissimilar numbers due to methodology differences.
Newspaper association of America. http://www.naa.org/Resources/Articles/Digital-EdgePondering-Panels/Digital-Edge-Pondering-Panels.aspx. Accessed 1 June 2011
Gorman B (2009) Nielsen to have ‘‘Internet Meters’’ in place prior to 2010–2011 Seasons. TV by
the numbers. http://tvbythenumbers.com/2009/12/01/nielsen-to-have-internet-meters-inplace-prior-to-2010-11-season/34921
Guo JL (2010) S-curve networks and a new method for estimating degree distributions of
complex networks. Chin Phys B 19(12). http://iopscience.iop.org/1674-1056/19/12/120503/
pdf/1674-1056_19_12_120503.pdf
Green J, McBurney P, Parsons S (2002) Forecasting market demand for new telecommunications
services: an introduction. Telematics Inform 19(3). http://www.sciencedirect.com/
science/article/pii/S0736585301000041. Accessed 2 June 2011
Green PE, Rao VR (1971) Conjoint measurement for quantifying judgmental data. J Mark Res
8(3):355–363
Hetemäki L, Obersteiner M (2002) US newsprint demand forecasts to 2020. University of
California, Berkeley. http://groups.haas.berkeley.edu/fcsuit/PDF-papers/LauriFisherPaper.pdf
Holden R, Nagle T (2001) The strategy and tactics of pricing: a guide to profitable decision
making, 3rd edn. Prentice Hall, New Jersey
xxii
Prologue I: Research Demands on Demand Research
Holbrook MB, Lehmann DR, O’Shaughnessy J (1986) Using versus choosing: the relationship of
the consumption experience to reasons for purchasing. Eur J Mark 20(8):49–62
Houthakker HS, Taylor LD (2010) Consumer demand in the United States, 1929–1970, Analyses
and projections, 3rd edn. Harvard University Press, Cambridge
Kahneman D (2003) Maps of bounded rationality: psychology for behavioral economics. Am Econ Rev. http://www.jstor.org/stable/3132137?seq=2
Kahneman D (2012) Thinking, fast and slow. Farrar, Straus and Giroux, New York
Kim HG (2006) Traditional media audience measurement. Print and Broadcast Media
Livingstone SM (1993) The rise and fall of audience research: an old story with a new ending. Int
J Commun 43(4):5–12
Lynch M, Watt JH (1999) Using the internet for audience and customer research. In: Malkinson TJ (ed) Communication jazz: improvising the new international communication culture. IEEE, p 127. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=799109. Accessed 1 June 2011
Maynard J (2005) Local people meters may mean sweeping changes on TV. The Washington Post.
http://www.washingtonpost.com/wp-dyn/content/article/2005/04/27/AR2005042702324.html.
Accessed June 2011
Matatov N, Maimon O, Rokach L (2010) Privacy-preserving data mining: a feature set partitioning approach. Inf Sci. http://www.sciencedirect.com/science/article/pii/S0020025510001234
McBurney P, Parsons S, Green J (2002) Forecasting market demand for new telecommunications
services: an introduction. Telematics Inform 19(3):225–249
Nacke LE, Drachen A, Yannakakis G, Lee Pedersen A (2010) Correlation between heart rate,
Electrodermal activity and player experience in first-person shooter games. Academia.edu.
http://usask.academia.edu/LennartNacke/Papers/333839/Correlation_Between_Heart_Rate_
Electrodermal_Activity_and_Player_Experience_In_First-Person_Shooter_Games. Accessed
19 May 2011
Nagle TT, Holden RK (1995) The strategy and tactics of pricing: a guide to profitable decision
making, 2nd ed. Prentice Hall, Pearson, New Jersey
Noam E (1995) Privacy in telecommunications, Part III. New Telecommun Q 3(4):51–60
O’Leary M (1999) Web measures wrestle with methodologies, each other. Online Magazine 23:
105–106
Ravaja N (2004) Contributions of psychophysiology to media research: review and recommendations. Media Psychol 6(2):193–235
Roberts JL (2006) How to count eyeballs on the web. Newsweek, New York, p. 27
Stavitsky A (1995) Guys in suits with charts: audience research in U.S. public radio. J Broadcast
Electron Media 39(2):177–189
Taylor LD, Weiserbs D (1972) On the estimation of dynamic demand functions. Rev Econ Stat
54(4):459–465
Taylor LD, Dobell AR, Waverman L, Liu TH, Copeland MDG (1972) Telephone communications in Canada: demand, production and investment decisions. Bell J Econ Manage Sci 3(1):175–219
Taylor LD (1974) On the dynamics of dynamic demand models. Recherches Économiques de
Louvain/Louvain Economic Review 40(1): 21–31
Taylor LD, Rappoport PN (1997) Toll price elasticities estimated from a sample of US residential
telephone bills. Inf Econ Policy 9(1):51–70
Vorderer P, Christoph K, Ritterfeld U (2004) Enjoyment: at the heart of media entertainment.
Commun Theory 14(4):388–408
Weinstein R (2005) RFID: a technical overview and its application to the enterprise. IEEE Computer Society. http://electricalandelectronics.org/wp-content/uploads/2008/11/01490473.pdf. Accessed 1 June 2011
Prologue I: Research Demands on Demand Research
xxiii
Wood D (1997) Can computers help Hollywood pick hits? The Christian Science Monitor. http://www.citi.columbia.edu/B8210/cindex.htm. Accessed 7 June 2011
Weinberg C, Weiss D (1986) A simpler estimation procedure for a micromodeling approach to
the advertising-sales relationship. Mark Sci 5(3):269–272
Zillmann D (1988) Mood management: using entertainment to full advantage. In: Donohew L, Sypher HE, Higgins ET (eds) Communication, social cognition, and affect. Erlbaum, Hillsdale, pp 147–171
Prologue II: Lester Taylor’s Insights
Introduction
Lester Taylor has been at the forefront of what we know about the theory and
practice of consumer demand, in general (Taylor and Houthakker 2010), and
telecommunications demand, in particular (Taylor 1994). Never content with mere
abstract theories, Professor Taylor has applied a wide variety of techniques to
practical problems that have evolved in unpredictable ways as technology and
competition transform major sectors of our economy. His contributions are perhaps most widely recognized in the once pervasively regulated telecommunications industry,1 where he ‘‘wrote the book’’ about what is known about consumer
demand for the products and services. And when technology and competition
transformed that industry, he identified a research agenda to advance our knowledge of consumer demand.
In his 1994 update to an earlier comprehensive survey of telecommunications
demand and elasticity findings, Professor Taylor identified specific gaps that
research needed to address in order for firms to accommodate the changes that were
emerging and accelerating. Those gaps have been increasing because investing in
new technologies to maintain a competitive edge requires greater understanding of
consumer behavior than when there was little competition and products and
services were well established. Perhaps his most ironic observation was that while
the gap between what we know and what businesses needed to know was widening,
information about consumer behavior was becoming increasingly harder to find,
because (1) companies were more likely to consider such information as a proprietary competitive advantage and (2) the once highly talented and large group of
economists and demand specialists that companies assembled during the regulated
monopoly era had become somewhat of a luxury—subject to cost-cutting—as these companies faced less regulatory scrutiny and began to prepare for more competition.

1 Some of Professor Taylor's elasticity estimates are being used to this day. For example, Jerry Hausman (2011) used the classic market elasticity of -0.72 for long-distance services to estimate a consumer benefit of approximately $5 billion that would result from lower interconnection charges for wireline long-distance calls.
Despite the fact that the public knowledge of demand and the human resources
producing that knowledge have undoubtedly continued to shrink, Professor Taylor’s prescription for what is needed remains valid today.2 Specifically, with the
shift in focus from justifying prices and investment decisions before industry
regulators to competing successfully with new products, services, and technologies
in a largely deregulated world, analytical requirements have in turn shifted from
developing industry elasticities for generally undifferentiated products and services to developing firm-specific own and cross-price elasticities for increasingly
differentiated products.3 And because such demand information would be most
useful to the extent that it can inform pricing, investment planning, marketing, and
budgeting decisions, it is most effectively used in conjunction with complementary
information on product and service costs—a combination that would facilitate the
evaluation of the short-term and long-term profitability of product introduction and
pricing actions. Professor Taylor’s prescient advice from 1994 is still well worth
heeding both by formerly- and never-regulated telecommunications firms (Taylor
1994, p. 270 (footnote omitted)).
In truth, some slimming-down is probably in order, but to allow demand analysis to
languish because of the advent of competition and incentive regulation would be a mistake. The challenge for demand analysis in the telephone companies in the years ahead is
to forge links with marketing departments and to become integrated into company budgeting and forecasting processes. Applied demand analysis has a strategic role to play in a
competitive environment, ranging from the conventional types of elasticity estimation in
traditional markets to the identification of new markets. The possibilities are vast. It only
requires imagination, hard work—and some humility—on the part of economists and
demand analysts.
Professor Taylor’s Telecommunication Demand Findings
Professor Taylor initially summarized what was known about telecommunications
demand in a seminal 1980 book (Taylor 1980) and then updated that work 14 years
later (Taylor 1994). The first book described research findings for an industry that
was regulated by federal and state authorities in the United States and by their
counterparts internationally. By the time of the publication of the second book,
competition had taken root in some segments of the industry—most prominently
for long distance services and telecommunications equipment as a result of the
divestiture of the Bell System in 1984. While competition was beginning to
emerge in other segments,4 the primary focus of publicly available demand information continued to reflect the concerns of a regulated industry: how would volumes of services offered by "dominant" providers change if regulators approved price changes, and how would these volume changes affect the revenues, costs, and profitability of those regulated companies?

2 See, in particular, Taylor (1994), Chap. 11.
3 Taylor (1994), pp. 266–270. In addition to the fact that data to develop firm-specific elasticities are often proprietary, successful estimation introduces the additional challenge of accounting for the price responses of competing firms.
Accordingly, Professor Taylor identified trends and regularities in the market
(or industry) price elasticities (and income elasticities) for clearly delineated and
well-understood services. Indeed, he took special notice of the stability in long-distance price elasticities over the period between the publication of his two books,
including the empirical regularity that customers tended to be more price-sensitive
(with elasticities higher in absolute value) as the distance of the call became
longer.
Professor Taylor also cataloged gaps in our knowledge, which by 1994 were
widening as competition grew, technologies advanced, and the formerly distinct
data, video, and voice industries converged. In partial recognition of these trends,
large segments of the industry were further deregulated in the U.S. by the 1996
Telecommunications Act. These trends foreshadowed the facts that (1) prices
would increasingly be the result of market outcomes, rather than regulatory
mandates; (2) businesses would have to introduce new products and services,
rather than continue to offer a stable portfolio of well-recognized products; and (3)
those products would compete with those of other providers deploying sometimes
similar, but other times different (e.g., wireless) technologies. In technical terms,
Professor Taylor recognized that the tide was shifting from a predominant
emphasis on market own-price elasticities to (1) the need to understand cross-elasticities among the complementary and substitute products offered by the same firm and (2) the recognition that the firm elasticity, not the industry or market elasticity, was paramount as formerly "dominant" firms lost share to new rivals. And with regard
to the need to identify and measure firm elasticities, he reminded us that it is not
only the price the firm is considering, but also the reactions of other firms to that
price that affects consumer demand.
Despite the shift in emphasis that competitive and technological trends have
required, there have been fundamental features of Professor Taylor’s approach that
make it timely, if not timeless. In particular, (1) he seeks to provide a theoretically sound underpinning to the empirical regularities he identifies and (2) he takes
advantage of advances in data sources and analytical techniques in providing an
ever-sounder underpinning for his results. Therefore, while the industry trends he
noted may have changed some well-established empirical regularities that at one
time seemed quite solid, his research approaches and agenda retain their power.
4 For example, large businesses were taking advantage of alternatives that "bypassed" the local phone companies, and certain services offered by these companies, e.g., short-haul long-distance calls, were not the "natural monopoly" services that the Bell System divestiture presumed.
For example, Professor Taylor observed that the magnitude of industry elasticities for various categories of toll calls had been reasonably stable and that the
absolute value of the long-distance elasticity increases with the distance of the
call.5 However, since that time the former distinctions between types of calls (such
as long-distance) have blurred as both traditional telecommunications companies
and newer firms such as wireless providers offer pricing plans that make no distinction between what were once viewed as clearly different types of calls, e.g.,
local and long-distance. Indeed, long-distance calling—a key factor in both the
divestiture of the old Bell System in 1984 and the structure and provisions of the
1996 Telecommunications Act—has ceased to exist as a stand-alone industry. This
is a result of the acquisition of the legacy AT&T and MCI—the two largest companies in that industry—by the current AT&T (formerly SBC) and Verizon, respectively, by early 2006. Consequently, replicating the studies that produced such findings would be extremely difficult (if even possible), and whether the previous relationship between price sensitivity and the distance of a call would persist appeared problematic.
And yet, despite these changes in industry structure and product offerings,
Professor Taylor’s explanation of why the empirical regularity was observed
remains relevant. According to this explanation, the reason for the lower price
sensitivity for calls of shorter distance was the likelihood that proportionately more
of these calls were related to economic activities, such as work and shopping, and
hence less avoidable if price were to increase. In contrast, for calls of longer
distances, a relatively greater proportion were not economic in nature, e.g., calls to
friends and relatives, as those ‘‘communities of interest’’ tended to span greater
distances than calls related to economic activities. In other words, the empirical
regularity had a fundamental explanation—it was the distribution of the types of
calls that spanned particular distances, rather than distance itself, that produced the
consistent findings Professor Taylor noted. Therefore, while technological developments—particularly the widespread use of the internet for all types of commercial and personal interactions—have most likely changed how calls of different
types are distributed by distance, the differential price sensitivity of different types
of calls has likely persisted.6
5 Taylor (1994), p. 260: "All considered, this probably represents the best-established empirical regularity in telecommunications demand."
6 Professor Taylor's quest for fundamental explanations of empirical regularities in consumer behavior and demand was even deeper in the book he co-authored with Houthakker. In this work, the authors presented a framework for explaining consumption, income, and time expenditures based on underlying brain functions. See Taylor and Houthakker (2010), Chap. 2.
While Professor Taylor’s contributions to the theory, and perhaps even more
importantly, to the practice of demand analysis are most prominent in telecommunications, his advice on how demand practitioners could maintain and enhance
their relevance at the outset of the technological and competitive transition in that
industry cuts across all efforts to use demand and price optimization tools to
improve companies’ performances.
Timothy J. Tardiff
Daniel S. Levy
References
Hausman J (2011) Consumer Benefits of Lower Intercarrier Compensation Rates, Attachment 4
to the universal service and intercarrier compensation proposal filed at the Federal
Communications Commission by AT&T, CenturyLink, FairPoint, Frontier, Verizon, and
Windstream. http://fjallfoss.fcc.gov/ecfs/document/view?id=7021698772. Accessed July 29,
2011
Taylor LD (1980) Telecommunications demand: a survey and critique. Ballinger, Cambridge
Taylor LD (1994) Telecommunications demand in theory and practice. Kluwer, Boston
Taylor LD, Houthakker HS (2010) Consumer demand in the United States: prices, income, and
consumption behavior. 3rd ed. Springer, New York
Part I
Advances in Theory
Chapter 1
Regression with a Two-Dimensional
Dependent Variable
Lester D. Taylor
1.1 Introduction
This chapter focuses on how one might estimate a model in which the dependent
variable is a point in the plane rather than a point on the real line. A situation that
comes to mind is a market in which there are just two suppliers and the desire is to
estimate the market shares of the two. An example would be determination of the respective shares of AT&T and MCI in the early days of competition in the long-distance telephone market. The standard approach in this situation (when such a model would still have been relevant) would be to specify a two-equation model, in which one equation explains calling activity in the aggregate long-distance market and a second equation determines the two carriers' relative shares. An
equation for aggregate residential calling activity might, for example, relate total
long-distance minutes to aggregate household income, a measure of market size,
and an index of long-distance prices; while the allocation equation might then
specify MCI’s share of total minutes as a function of MCI’s average price per
minute relative to the same for AT&T, plus other quantities thought to be
important.
The purpose of these notes is to suggest an approach that can be applied
in situations of this type in which the variable to be explained is defined in terms of
polar coordinates on a two-dimensional plane. Again, two equations will be
involved, but the approach allows for generalization to higher dimensions, and,
even more interestingly, can be applied in circumstances in which the quantity to
be explained represents the logarithm of a negative number. The latter, as will be
seen, involves regression in the complex plane.
L. D. Taylor, University of Arizona, Tucson, AZ 86718, USA. e-mail: ltaylor@email.arizona.edu
1.2 Regression in Polar Coordinates
Assume that one has two firms selling in the same market, with sales of y1 and y2, respectively. Total sales will then be given by y = y1 + y2. The situation can be depicted as the vector (y1, y2) in the y1y2 plane, with y1 and y2 measured along their respective axes. In polar coordinates, the point (y1, y2) can be expressed as:

y1 = r cos θ    (1.1)
y2 = r sin θ    (1.2)

where

r = (y1² + y2²)^{1/2}    (1.3)
cos θ = y1/(y1² + y2²)^{1/2}    (1.4)
sin θ = y2/(y1² + y2²)^{1/2}    (1.5)
One can now specify a two-equation model for determining y1, y2, and y in terms of cos θ and r (or equivalently in terms of sin θ and r):

cos θ = f(X, ε)    (1.6)

and

r = g(Z, η)    (1.7)

for some functions f and g, relevant predictors X and Z, and unobserved error terms ε and η.
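As a concrete illustration of the mechanics (an addition of mine, not part of the original exposition), the following sketch converts a simulated two-supplier sales vector into (r, cos θ) per expressions (1.3)–(1.5) and estimates linear versions of (1.6) and (1.7) by least squares. The linear forms and the simulated data are assumptions made purely for the illustration; the chapter leaves f and g general.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated two-supplier market: y1, y2 are the two firms' sales
X = rng.normal(size=(n, 2))   # predictors for the share equation (1.6)
Z = rng.normal(size=(n, 2))   # predictors for the radius equation (1.7)
y1 = np.exp(1.0 + 0.3 * X[:, 0] + 0.2 * Z[:, 0] + 0.1 * rng.normal(size=n))
y2 = np.exp(0.8 - 0.2 * X[:, 1] + 0.2 * Z[:, 1] + 0.1 * rng.normal(size=n))

# Polar-coordinate transform, expressions (1.3)-(1.5)
r = np.hypot(y1, y2)
cos_theta = y1 / r

# Linear versions of (1.6) and (1.7), fit by least squares
A = np.column_stack([np.ones(n), X])
B = np.column_stack([np.ones(n), Z])
beta_share, *_ = np.linalg.lstsq(A, cos_theta, rcond=None)
beta_radius, *_ = np.linalg.lstsq(B, r, rcond=None)

# Recover fitted sales: y1 = r cos(theta), y2 = r sin(theta)
cos_hat = np.clip(A @ beta_share, 0.0, 1.0)   # sales shares are nonnegative here
r_hat = B @ beta_radius
y1_hat, y2_hat = r_hat * cos_hat, r_hat * np.sqrt(1.0 - cos_hat**2)
```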
At this point, the two-equation model in expressions (1.6) and (1.7) differs from the standard approach in that the market ''budget constraint'' (y = y1 + y2) is not estimated directly, but rather indirectly through the equation for the radius vector r. This being the case, one can legitimately ask: why take the trouble to work with polar coordinates? The answer is that this framework easily allows for the analysis of a market with three sellers and can probably be extended to markets in which n ≥ 4 firms compete. Adding a third supplier to the market, with sales equal to y3, the polar coordinates for the point (y1, y2, y3) in 3-space will be given by:
y1 = r cos α    (1.8)
y2 = r cos β    (1.9)
y3 = r cos γ    (1.10)
r = (y1² + y2² + y3²)^{1/2}    (1.11)

where cos α, cos β, and cos γ are the direction cosines associated with (y1, y2, y3) (now viewed as a vector from the origin). From expressions (1.8)–(1.10), one then has:

cos α = y1/(y1² + y2² + y3²)^{1/2}    (1.12)
cos β = y2/(y1² + y2² + y3²)^{1/2}    (1.13)
cos γ = y3/(y1² + y2² + y3²)^{1/2}    (1.14)
A three-equation model for estimating the sales vector (y1, y2, y3) can then be
obtained by specifying explanatory equations for r in expression (1.11) and for any
two of the cosine expressions in (1.12)–(1.14).
1.3 Regression in the Complex Plane
An alternative way of expressing a two-dimensional variable (y1, y2) is as

z = y1 + iy2    (1.15)

in the complex plane, where y1 and y2 are real and i = √(−1). The question that is now explored is whether there is any way of dealing with complex variables in a regression model. The answer appears to be yes, but before showing this to be the case, let me describe the circumstance that motivated the question to begin with.
As is well-known, the double-logarithmic function has long been a workhorse in
empirical econometrics, especially in applied demand analysis. However, a serious
drawback of the double-logarithmic function is that it cannot accommodate variables that take on negative values, for the simple reason that the logarithm of a
negative number is not defined as a real number, but rather as a complex number.
Thus, if a way can be found for regression models to accommodate complex
numbers, logarithms of negative numbers could be accommodated as well.
The place to begin, obviously, is with the derivation of the logarithm of a negative number. To this end, let v be a positive number, so that −v is negative. The question, then, is what is ln(−v), which one can write as

ln(−v) = ln(−1·v) = ln(−1) + ln(v)    (1.16)

which means that the problem becomes to find an expression for ln(−1). However, from the famous equation of Euler,1

e^{iπ} + 1 = 0    (1.17)

one has, after rearranging and taking logarithms,

ln(−1) = iπ    (1.18)

Consequently,

ln(−v) = iπ + ln(v).    (1.19)
To proceed, one now writes ln(−v) as the complex number

z = ln(v) + iπ    (1.20)

so that (in polar coordinates):

ln(v) = r cos θ    (1.21)
iπ = i r sin θ    (1.22)

where r, which represents the ''length'' of z—obtained by multiplying z by its complex conjugate, ln(v) − iπ, and taking the square root—is equal to

r = [π² + (ln v)²]^{1/2}.    (1.23)
This is the important expression for the issue in question.
To apply this result, suppose that one has a sample of N observations on
variables y and x that one assumes are related according to
f(y, x) = 0    (1.24)
for some function f. Assume that both y and x have values that are negative, as well
as positive, and suppose that (for whatever reason) one feels that f should be
double-logarithmic, that is, one posits:
ln(y_i) = α + β ln(x_i) + ε_i,  i = 1, …, N.    (1.25)
From the foregoing, the model to be estimated can then be written as:

z_i = α + β w_i + ε_i    (1.26)

where

z = ln(y) if y > 0;  z = [π² + ln(−y)²]^{1/2} if y ≤ 0    (1.27)

and

w = ln(x) if x > 0;  w = [π² + ln(−x)²]^{1/2} if x ≤ 0.    (1.28)

1 See Nahin (1998, p. 67).
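A minimal sketch of the transformation in expressions (1.27) and (1.28) follows (Python; the function name and the vectorized treatment are mine, and the chapter's original computations were done in SAS):

```python
import numpy as np

def complex_log(y):
    """Expression (1.27)/(1.28): ln(y) for y > 0; for y <= 0, the modulus
    [pi^2 + ln(-y)^2]^(1/2) of the complex logarithm ln(-v) = ln(v) + i*pi.
    Assumes no exact zeros in y."""
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    pos = y > 0
    out[pos] = np.log(y[pos])
    out[~pos] = np.sqrt(np.pi**2 + np.log(-y[~pos])**2)
    return out

# z = complex_log(y); w = complex_log(x); then estimate (1.26) by
# regressing z on w.
```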
1.4 An Example
In the Third Edition of Consumer Demand in the United States, the structure and stability of consumption expenditures in the United States were analyzed using a principal component analysis of 14 exhaustive categories of consumption expenditure, based on 16 quarters of data for 1996–1999 from the quarterly Consumer Expenditure Surveys conducted by the Bureau of Labor Statistics (BLS).2 Among other things, the first two principal components (i.e., those associated with the two largest latent roots) were found to account for about 85 % of the variation in total consumption expenditures across households in the samples. Without going into details, one of the things pursued in the analysis was an attempt to explain these two principal components, in linear regressions, as functions of total expenditure and an array of socio-demographic predictors such as family size, age, and education. The estimated equations for these two principal components using data from the fourth quarter of 1999 are given in Table 1.1.3 For comparison, estimates from an equation for the first principal component in which the dependent variable and total expenditure are expressed in logarithms are presented as well. As is evident, the double-log specification gives the better results. Any idea, however, of estimating a double-logarithmic equation for the second principal component was thwarted by the fact that about 10 % of its values are negative.
The results from applying the procedure described above to the principal component just referred to are given in Table 1.2. As mentioned, the underlying data are from the BLS Consumer Expenditure Survey for the fourth quarter of 1999 and consist of a sample of 5,649 U.S. households. The dependent variable in the model estimated was prepared according to expression (1.27), with the logarithm of y, for y ≤ 0, calculated for the absolute value of y. The dependent variable is therefore z = ln(y) for the observations for which y is positive and [π² + ln(−y)²]^{1/2} for the observations for which y is negative.

2 See Taylor and Houthakker (2010), Chap. 5.
3 Households with after-tax income less than $5,000 are excluded from the analysis.
Table 1.1 Principal component regressions, BLS Consumer Expenditure Survey 1999 Q4

               PC1 linear             PC2 linear             PC1 double-log
Variables      Coefficient  t-value   Coefficient  t-value   Coefficient  t-value
intercept      123.89       0.34      936.53       0.94      -1.5521      -20.85
totexp         0.48         209.39    -0.08        -14.04    1.0723       215.23
NO_EARNR       -108.59      -5.28     64.41        1.03      -0.0088      -2.20
AGE_REF        -5.27        -4.67     6.46         2.10      -0.0007      -3.52
FAM_SIZE       -37.14       -1.90     43.88        0.83      -0.0060      -1.80
dsinglehh      177.78       3.48      -297.13      -2.14     0.0527       5.95
drural         77.15        1.32      -549.14      -3.47     -0.0140      -1.41
dnochild       40.45        0.77      -299.72      -2.10     -0.0220      -2.46
dchild1        303.05       4.28      143.66       0.75      0.0612       5.06
dchild4        -63.40       -0.81     479.38       2.25      -0.0021      -0.16
ded10          -6.50        -0.08     92.49        0.40      -0.0022      -0.15
dedless12      183.89       0.53      -207.06      -0.22     0.1193       2.01
ded12          39.82        0.12      288.75       0.31      0.1000       1.69
dsomecoll      -4.09        -0.01     292.48       0.31      0.0810       1.37
ded15          -279.17      -0.80     1419.24      1.50      0.0727       1.22
dgradschool    -358.85      -1.02     2022.19      2.12      0.0675       1.13
dnortheast     -63.66       -1.24     96.39        0.69      -0.0232      -2.65
dmidwest       -91.50       -1.92     -266.89      -2.06     -0.0395      -4.85
dsouth         -26.85       -0.60     -424.42      -3.51     -0.0266      -3.49
dwhite         -202.39      -2.64     208.95       1.00      -0.0328      -2.51
dblack         8.01         -0.09     129.24       0.52      0.0107       0.69
dmale          -39.06       -1.13     59.61        0.64      -0.0088      -1.50
dfdstmps       138.25       1.55      -525.22      -2.17     0.0439       2.86
d4             -12.15       -0.20     141.63       0.87      0.0076       0.74
All values of total expenditure are positive and accordingly require no special treatment.
From Table 1.2, one sees not only that total expenditure is an extremely important predictor, but also that the R² of the logarithmic equation is considerably higher than the R² for the linear model in Table 1.1: 0.5204 versus 0.0598. However, as the dependent variable in the logarithmic equation is obviously measured in different units than the dependent variable in the linear equation, a more meaningful comparison is to compute an R² for this equation with the predicted values in original (i.e., arithmetic) units. To do this, one defines two dummy variables:
d1 = 1 if y > 0;  d1 = 0 if y ≤ 0    (1.29)
d2 = 1 if y > 0;  d2 = −1 if y ≤ 0    (1.30)
Table 1.2 Double-logarithmic estimation of second principal component using expression (1.27)

Variables      Estimated coefficient   t-value
intercept      -3.1450                 -11.54
Lntotexp       1.2045                  66.03
NO_EARNR       -0.1161                 -7.97
AGE_REF        -0.0014                 -1.92
FAM_SIZE       -0.0017                 -0.14
dsinglehh      0.2594                  8.00
drural         -0.1269                 -3.48
dnochild       -0.0294                 -0.89
dchild1        0.2519                  5.69
dchild4        0.1586                  3.24
ded10          0.0410                  0.76
dedless12      -0.0839                 -0.39
ded12          -0.2072                 -0.96
dsomecoll      -0.2101                 -0.97
ded15          -0.2290                 -1.05
dgradschool    -0.1416                 -0.65
dnortheast     0.0522                  1.63
dmidwest       0.0084                  0.28
dsouth         -0.0118                 -0.42
dwhite         -0.0117                 -0.24
dblack         0.1248                  2.18
dmale          -0.0693                 -3.21
dfdstmps       0.2198                  3.91
d4             -0.0266                 -0.71

R² = 0.5204
and then:

p = [ẑ² − (1 − d1)π²]^{1/2}.    (1.31)

A predicted value in arithmetic units, p̂, follows from multiplying the exponential of p by d2:

p̂ = d2 e^p    (1.32)

An R² in arithmetic units can now be obtained from the simple regression of y on p̂:4

ŷ = 587.30 + 1.3105 p̂    R² = 0.6714    (1.33)
    (19.39)   (107.42)

4 t-ratios are in parentheses. All calculations are done in SAS.
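In code, the back-transformation of expressions (1.29)–(1.32) and the arithmetic-units fit of (1.33) look as follows (a sketch with illustrative names; the explicit case split replaces the dummy-variable algebra of (1.31), which implicitly assumes ẑ > 0 for positive-y observations):

```python
import numpy as np

def arithmetic_r2(y, z_hat):
    """Back out predictions in original units and compute the R-squared
    of the simple regression of y on p_hat, as in (1.33)."""
    d2 = np.where(y > 0, 1.0, -1.0)              # expression (1.30)
    # expression (1.31): p = z_hat for y > 0, [z_hat^2 - pi^2]^(1/2) for y <= 0
    p = np.where(y > 0, z_hat,
                 np.sqrt(np.maximum(z_hat**2 - np.pi**2, 0.0)))
    p_hat = d2 * np.exp(p)                       # expression (1.32)
    return np.corrcoef(y, p_hat)[0, 1] ** 2      # R^2 of y on p_hat
```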
However, before concluding that the nonlinear model is really much better than the linear model, it must be noted that the double-log model contains information that the linear model does not, namely, that certain of the observations on the dependent variable take on negative values. Formally, this can be viewed as an econometric exercise in ''switching regimes,'' in which (again, for whatever reason) one regime gives rise to positive values for the dependent variable while a second regime provides for negative values. Thus, one sees that when R²s are calculated in comparable units, the value of 0.0598 for the linear model is a rather pale shadow of the value of 0.6714 for the ''double-logarithmic'' model. Consequently, a more appropriate test of the linear model vis-à-vis the double-logarithmic one would be to include such a ''regime change'' in its estimation. The standard way of doing this would be to re-estimate the linear model with all the independent variables interacted with the dummy variable defined in expression (1.30). However, a much easier, cleaner, and essentially equivalent procedure is to estimate the model as follows:
y = a0 + a1 d1 + (b0 + b1 d1) ŷ_p + e    (1.34)

where ŷ_p is the predicted value of y in the original linear model and d1 is the dummy variable defined in expression (1.29). The resulting equation is:

ŷ = 2407.26 − 8813.13 d1 − (0.5952 − 4.7517 d1) ŷ_p    R² = 0.5728    (1.35)
    (43.47)    (109.94)     (14.41)  (51.39)
However, ‘‘fairness’’ now requires that one does a comparable estimation for
the nonlinear model:
^y ¼ 315:41 80:88d1 þ ð1:7235 þ 0:8878d1 Þ^p
ð11:04Þ
ð0:90Þ
ð52:58Þ ð75:95Þ
ð1:36Þ
2
R ¼ 0:8085:
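The ''regime change'' comparison of Eqs. (1.34)–(1.36) amounts to a single interaction regression; a sketch follows (`pred` stands for either the linear model's ŷ_p or the back-transformed p̂; all names are illustrative):

```python
import numpy as np

def regime_r2(y, pred):
    """Fit y = a0 + a1*d1 + (b0 + b1*d1)*pred, Eq. (1.34), by least
    squares and return its R-squared."""
    d1 = (y > 0).astype(float)                   # dummy of expression (1.29)
    X = np.column_stack([np.ones_like(y), d1, pred, d1 * pred])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# regime_r2(y, yp_hat) corresponds to Eq. (1.35);
# regime_r2(y, p_hat) corresponds to Eq. (1.36).
```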
As heads may be starting to swim at this point, it will be useful to spell out
exactly what has been found:
To begin with, one has a quantity, y, that can take negative as well as positive
values, whose relationship with another variable one has reason to think may be
logarithmic.
As the logarithm of a negative number is a complex number, the model is
estimated with a ‘‘logarithmic’’ dependent variable as defined in expression (1.27).
The results, for the example considered, show that the nonlinear model provides a much better fit (as measured by the R² between the actual and predicted values in arithmetic units) than the linear model.
Since the nonlinear model treats negative values of the dependent variable
differently than positive values, the nonlinear model can accordingly be viewed as
allowing for ‘‘regime change.’’ When this is allowed for in the linear model (by
allowing negative and positive y to have different structures), the fit of the linear
model (per Eq. (1.35)) is greatly improved. However, the same is also seen to be true (cf. Eq. (1.36)) for the nonlinear model.
The conclusion, accordingly, is that, for the data in this example, a nonlinear model allowing for logarithms of negative numbers gives better results than a linear model: an R² of 0.81 versus 0.58 (from Eqs. (1.36) and (1.35), respectively).
On the other hand, there is still some work to be done, for the knowledge that negative values of the variable being explained are to be treated differently, as arising from a different ''regime,'' means that a model for explaining ''regime'' needs to be specified as well. Since ''positive–negative'' is clearly of a ''yes–no'' variety, one can view this as a need to specify a model for explaining the dummy variable d1 in expression (1.29). As an illustration (but no more than that), results from the estimation of a simple linear ''discriminant'' function, with d1 as the dependent variable and the predictors from the original models (total expenditure, age, family size, education, etc.) as independent variables, are given in Eq. (1.37):5
d̂1 = 0.0725 + 0.00001473 totexp + other variables    R² = 0.1265    (1.37)
     (0.83)    (27.04)
1.4.1 An Additional Example
A second example of the framework described above will now be presented using data from the Bill Harvesting II survey conducted by PNR & Associates in the mid-1990s. Among other things, information in this survey was collected on households that made long-distance toll calls (both intra-LATA and inter-LATA) using both their local exchange carrier and another long-distance company.6 While data from that era are obviously ancient history in relation to the questions and problems of today's information environment, they nevertheless provide a useful data set for illustrating the analysis of markets in which households face twin suppliers of a service.
For notation, let v and w denote toll minutes carried by the local exchange company (LEC) and the other long-distance carrier (OC), respectively, at prices p_lec and p_oc. In view of expressions (1.6) and (1.7) from earlier, the models for both intra-LATA and inter-LATA toll calling will be assumed as follows:
5 Interestingly, a much improved fit is obtained in a model with total expenditure and the thirteen other principal components (which, by construction, are orthogonal to the principal component being explained) as predictors. The R² of this model is 0.46.
6 Other studies involving the analysis of these data include Taylor and Rappoport (1997) and Kridel et al. (2002).
Table 1.3 Intra-LATA toll-calling regression estimates, Bill Harvesting data

                 cos θ                      v/w
Variables        Coefficient  t-ratio      Coefficient  t-ratio
constant         0.5802       6.90         32.0549      2.48
income           -0.0033      -0.83        -0.0921      -0.15
age              0.0006       0.11         -1.8350      -2.13
hhcomp           0.0158       1.48         0.5623       0.34
hhsize           0.0257       2.06         -1.6065      -0.84
educ             0.0093       0.84         0.4702       0.28
lecplan          0.1887       4.02         34.5264      4.78
relprice_lec/oc  -0.0950      -6.13        -6.5848      -2.77
                 R² = 0.1391  df = 653     R² = 0.0579  df = 653

                 z                          v + w
Variables        Coefficient  t-ratio      Coefficient  t-ratio
constant         160.3564     4.69         175.1315     4.75
income           2.6379       1.74         2.7803       1.70
age              2.1106       -2.58        -5.3457      -2.35
hhcomp           4.0298       1.31         6.0187       1.39
hhsize           4.6863       0.30         2.9896       0.59
educ             4.1600       -0.25        -0.5218      -0.12
lecplan          17.6718      6.36         129.1676     6.78
price_lec        -345.1262    -4.85        -393.7434    -5.13
price_oc         -74.9757     -1.44        -98.7487     -1.75
                 R² = 0.1277  df = 652     R² = 0.1414  df = 652
cos θ = a + b·income + c·(p_lec/p_oc) + socio-demographic variables + ε    (1.38)
r = a + b·income + c·p_lec + k·p_oc + socio-demographic variables + ε    (1.39)

where

cos θ = v/r    (1.40)
z = (v² + w²)^{1/2}.    (1.41)
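The following sketch (an illustration of mine, not the chapter's code) computes the transforms (1.40)–(1.41), fits least-squares stand-ins for (1.38)–(1.39) with the socio-demographic variables omitted for brevity, and evaluates the LEC share elasticity ĉ·h·z/v at sample means, as described in footnote 7 below. All names are invented for the illustration.

```python
import numpy as np

def toll_models(v, w, p_lec, p_oc, income):
    """Share equation (1.38) for cos(theta), aggregate equation (1.39)
    for the radius z, and the LEC share elasticity c_hat*h*z/v of
    footnote 7, evaluated at sample means."""
    z = np.hypot(v, w)                       # expression (1.41)
    cos_theta = v / z                        # expression (1.40)
    h = p_lec / p_oc                         # relative price
    ones = np.ones_like(v)

    share_X = np.column_stack([ones, income, h])
    agg_X = np.column_stack([ones, income, p_lec, p_oc])
    share_b, *_ = np.linalg.lstsq(share_X, cos_theta, rcond=None)
    agg_b, *_ = np.linalg.lstsq(agg_X, z, rcond=None)

    c_hat = share_b[2]                       # coefficient on p_lec/p_oc
    lec_share_elasticity = c_hat * h.mean() * z.mean() / v.mean()
    return share_b, agg_b, lec_share_elasticity
```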
The estimated equations for intra-LATA and inter-LATA toll calling are tabulated in Tables 1.3 and 1.4. As the concern of the exercise is primarily with procedure, only a few remarks are in order about the results as such. In the ''shares'' equations (i.e., with cos θ as the dependent variable), the relative price is the most important predictor (as is to be expected), while income is of little consequence.
Table 1.4 Inter-LATA toll-calling regression estimates, Bill Harvesting data

                 cos θ                      v/w
Variables        Coefficient  t-ratio      Coefficient  t-ratio
constant         0.2955       2.74         1.2294       0.20
income           -0.0048      -0.92        0.2326       0.78
age              0.0156       2.23         -0.1116      -0.28
hhcomp           0.0157       1.21         1.1785       1.59
hhsize           0.0062       -0.36        0.0616       0.06
educ             0.0239       1.56         0.3775       0.43
lecplan          -0.0135      -0.16        -3.6824      -0.75
relprice_lec/oc  -0.0855      -3.41        -2.2135      -1.54
                 R² = 0.0626  df = 387     R² = 0.0184  df = 387

                 z                          v + w
Variables        Coefficient  t-ratio      Coefficient  t-ratio
constant         217.1338     3.41         234.9010     3.50
income           2.3795       0.94         2.1877       0.82
age              -7.7823      -2.32        -7.8776      -2.23
hhcomp           -0.5843      -0.09        -1.7198      -0.26
hhsize           -3.8697      -0.48        -4.2879      -0.50
educ             20.5839      2.79         24.1515      3.11
lecplan          -3.9209      -0.09        -2.8309      -0.06
price_lec        -91.1808     -0.82        -124.9212    -1.07
price_oc         -576.5074    -2.98        -599.6088    -2.94
                 R² = 0.0838  df = 386     R² = 0.0888  df = 386
consequence. In the ‘‘aggregate’’ equations (i.e., with z as the dependent variable),
of the two prices, the LEC price is the more important for intra-LATA calling and
the OC price for inter-LATA. Once again, income is of little consequence in either
market. R2s, though modest, are respectable for cross-sectional data. For comparison, models are also estimated in which the dependent variables are the ratio
(v/w) and sum (v ? w) of LEC and OC minutes.
Elasticities of interest that can be calculated from these four models include the
elasticities of the LEC and OC intra-LATA and inter-LATA minutes with respect
to the LEC price relative to the OC price and the elasticities of aggregate intraLATA and inter-LATA minutes with respect to the each of the carrier’s absolute
price.7 The resulting elasticities, calculated at sample mean values, are tabulated in
Table 1.5. The elasticities in the ‘‘comparison’’ models are seen to be quite a bit
hz=v, where h denotes
The elasticity for LEC minutes in the ‘‘cos h’’ equation is calculated as ^c
the ratio of the LEC price to the OC price. The ‘‘aggregate’’ elasticities are calculated, not for
the sum of LEC and OC minutes, but for the radius vector z (the positive square root of the sum
of squares of LEC and OC minutes). The OC share elasticities are calculated from equations in
which the dependent variable is sin h.
7
Table 1.5 Price elasticities, models in Tables 1.3 and 1.4

                      cos θ, z      v/w, v + w
IntraLATA toll
  Share
    LEC (own)         -0.18         -0.52
    LEC (cross)       0.12          0.55
    OC (own)          -0.24         -0.29
    OC (cross)        0.34          0.59
  Aggregate
    LEC price         -0.45         -0.45
    OC price          -0.11         -0.12
InterLATA toll
  Share
    LEC (own)         -0.24         -0.52
    LEC (cross)       0.12          0.15
    OC (own)          -0.05         -0.20
    OC (cross)        0.09          0.40
  Aggregate
    LEC price         -0.11         -0.13
    OC price          -0.70         -0.66
larger than in the ‘‘polar-coordinate’’ models for the LEC and OC shares but are
virtually the same in the two models for aggregate minutes.
As the dependent variables in the ‘‘polar-coordinates’’ and ‘‘comparison’’
models are in different units, comparable measures of fit are calculated, as earlier,
as R2s between actual and predicted values for the ratio of LEC to OC minutes for
the share models and sum of LEC and OC minutes for the aggregate models. For
the ‘‘polar-coordinate’’ equations, estimates of LEC and OC minutes (i.e., v and w)
are derived from the estimates of cos h to form estimates of v/w and v ? w. R2s are
then obtained from simple regressions of actual values on these quantities. The
resulting R2s are presented in Table 1.6. Neither model does a good job of predicting minutes of non-LEC carriers.
Table 1.6 Comparable R²s for share and aggregate models in Tables 1.3 and 1.4

Toll market    cos θ     z         v/w       v + w
IntraLATA      0.1414    0.0428    0.1432    0.0579
InterLATA      0.0888    0.0058    0.0892    0.0184
1.5 Final Words
The purpose of these notes has been to suggest procedures for dealing with dependent variables in regression models that can be represented as points in the plane. The ''trick,'' if it should be seen as such, is to represent dependent variables in polar coordinates, in which case two-equation models can be specified in which estimation proceeds in terms of functions involving cosines, sines, and radius vectors. Situations for which this procedure is relevant include analyses of markets in which there are duopoly suppliers. The approach allows for generalization to higher dimensions and, perhaps most interestingly, can be applied in circumstances in which values of the dependent variable can be points in the complex plane. The procedures are illustrated using cross-sectional data on household toll calling from a PNR & Associates Bill Harvesting survey of the mid-1990s and data from the BLS Survey of Consumer Expenditures for the fourth quarter of 1999.
References
Kridel DJ, Rappoport PN, Taylor LD (2002) IntraLATA long-distance demand: carrier choice,
usage demand, and price elasticities. Int J Forecast 18(4):545–559
Nahin P (1998) An imaginary tale: the story of the square root of -1. Princeton University Press,
New Jersey
Taylor LD, Houthakker HS (2010) Consumer demand in the United States: prices, income, and
consumer behavior, 3rd edn. Springer, Berlin
Taylor LD, Rappoport PN (1997) Toll price elasticities from a sample of 6,500 residential
telephone bills. Inf Econ Policy 9(1):51–70
Chapter 2
Piecewise Linear L1 Modeling
Kenneth O. Cogger
2.1 Introduction
Lester Taylor (Taylor and Houthakker 2010) instilled a deep respect for estimating
parameters in statistical models by minimizing the sum of absolute errors (the L1
criterion) as an important alternative to minimizing the sum of the squared errors
(the Ordinary Least Squares or OLS criterion).
He taught many students about the beauty of L1 estimation, including the
author. His students were the first to prove asymptotic normality in Bassett and
Koenker (1978) and then developed quantile regression (QR), an important
extension of L1 estimation, in Koenker and Bassett (1978).
For the case of a single piece, L1 regression is a linear programming (LP)
problem, a result first shown by Charnes et al. (1955). Koenker and Bassett (1978)
later developed QR and showed that it is a generalization of the LP problem for L1
regression. This LP formulation is reviewed in Appendix 1.
Cogger (2010) discusses various approaches to piecewise linear estimation
procedures in the OLS context, giving references to their application in Economics, Marketing, Finance, Engineering, and other fields. This chapter demonstrates how piecewise linear models may be estimated with L1 or QR using mixed
integer linear programming (MILP). If an OLS approach is desired, a mixed
integer quadratic programming (MIQP) approach may be taken.
That piecewise OLS regression is historically important is demonstrated in
Sect. 2.2, although the estimation difficulties are noted. Section 2.3 develops a
novel modification of MILP that easily produces L1 and QR regression estimates
in piecewise linear regression with one unknown hinge; an Appendix describes the
generalization to any number of unknown hinges. Section 2.4 presents some
computational results for the new algorithm. The final section concludes.
K. O. Cogger, University of Kansas / Peak Consulting Inc., 32154 Christopher Lane, Conifer, CO 80433, USA. e-mail: cogger@peakconsulting.com
2.2 Piecewise Linear Regression
2.2.1 General
The emphasis in this chapter is on the use of L1 and QR to estimate piecewise
linear multiple regressions. When there is only one possible piece, such estimation
is an LP problem. The LP formulations are reviewed in Appendix 1.
The term, ‘‘regression,’’ was first coined by Galton (1886). There, Galton
studied the height of 928 children and the association with the average height of
their parents. Based on the bivariate normal distribution, he developed a confidence ellipse for the association as well as a straight line describing the relationship. Quoting from Galton, ‘‘When mid-parents are taller than mediocrity,
their children tend to be shorter than they.’’ In modern language, ‘‘mediocrity’’
becomes ‘‘average’’ and his conclusion rests on the fact that his straight line had a
slope with value less than one.
Figure 2.1 illustrates his original data along with the straight line he drew along
the major axis of the confidence ellipse. Galton viewed this apparent tendency for
children to revert toward the average as a ‘‘regression,’’ hence his title for the
paper, ‘‘Regression Toward Mediocrity in Hereditary Stature.’’ One should note
that in Fig. 2.1, the vertical axis is Parental height and the horizontal axis is Child
height. In modern terms, this would be described as a regression of Parent on
Child, even though his conclusive language might best be interpreted in terms of a
regression of Child on Parent.
Fig. 2.1 Excerpted diagram from Galton’s seminal regression paper
Galton’s paper is of historical primacy. Wachmuth et al. (2003) suggested that
Galton’s original data are better described by a piecewise linear model. Their
resulting OLS piecewise linear fit is:
Parent ¼ 49:81 þ 0:270 Child þ 0:424 ðChild 70:803Þ IðChild [ 70:803Þ
ðP ¼ 8:28E 25Þ ðP ¼ 1:41E 04Þ
which can be generalized as a predictor.
ŷ = b0 + b1·x + b2·(x − H)·I(x > H)    (2.1)

where I(x > H) = 1 if x > H and 0 if x ≤ H, and H is referred to as a hinge. The first usage of the term ''regression'' might therefore better refer to piecewise linear regression, which is obviously nonlinear in b2 and H.
The Wachsmuth et al. (2003) study was based on OLS and is slightly incorrect
in terms of its estimates, but probably not in its conclusions. Its estimates were
apparently based on the use of Systat, a statistical software program edited by
Wilkinson, one of the authors.1
2.2.2 Specifics
Infrequently, the hinge location H is known. For example, in economic data, one
might know about the occurrence of World War II, the oil embargo of 1973, the
recent debt crisis in 2008, and other events with known time values; if economists
study time series data, their models can change depending on such events. In
climate data, a change was made in the measurement of global CO2 in 1958 and
may influence projected trends. In the study of organizations, known interventions
might have occurred at known times, such as Apple Corporation's introduction of various models of the iPod, iPad, etc.
When the value of H is known, L1, QR, OLS, and other statistical techniques are easy to use. Simply use a binary variable with known value B = 1 if x > H (0 otherwise) in Eq. (2.1) and generate the additional known variable z = (x − H)B. This produces a linear model with two independent variables, x and z. For L1 and QR piecewise multiple regressions with known hinges, the LP formulation is described in Appendix 2.
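In code, the known-hinge construction is one line (a sketch of mine; the OLS fit shown is just one of the estimators the paragraph mentions, and the variable names are illustrative):

```python
import numpy as np

def hinge_design(x, H):
    """Regressors of Eq. (2.1) for a known hinge H:
    B = I(x > H) and z = (x - H) * B."""
    x = np.asarray(x, dtype=float)
    B = (x > H).astype(float)
    return np.column_stack([np.ones_like(x), x, (x - H) * B])

# e.g., OLS on the Galton data with the published hinge 70.803:
# beta, *_ = np.linalg.lstsq(hinge_design(child, 70.803), parent, rcond=None)
```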
1 An empirical suggestion of the usefulness of piecewise linear models is that a Google search on ''piecewise linear regression'' turned up hundreds of thousands of hits. Similarly large numbers of hits occur for synonyms such as ''broken stick regression'', ''two phase regression'', ''broken line regression'', ''segmented regression'', ''switching regression'', ''linear spline'', and the Canadian and Russian preference, ''hockey stick regression''.
More frequently, the hinge locations H are unknown and must be estimated from the data, as with the Galton, Hudson, and other data. Wachsmuth et al. (2003) used a two-stage OLS procedure first developed by Hudson (1966) and later improved upon slightly by Hinkley (1969, 1971). Their sequential procedure requires up to Σ_{i=0}^{H} 2^i·C(n − H − 2, i) separate OLS regressions, where H is the number of hinge points and n is the sample size. For the Galton data (n = 928) with one unknown hinge point (two pieces), 1,851 separate OLS regressions may be required; with two unknown hinge points (three pieces), 1,709,401 separate OLS regressions may be required. Multicollinearity is known to be present in some of these required OLS regression problems and can perhaps be overcome by the use of singular value decomposition (SVD). However, this may be incompatible with the sequential procedure of Hudson (1966), which assumed the existence of various matrix inverses. For moderate to large n, computation time also becomes a concern.
The main concern in this chapter is L1 and QR estimation of piecewise linear regression with unknown hinge locations. Below, three charts for the case of L1 estimation are provided. The first two charts are based on the contrived data of Hudson and the third on the real Galton data. For each chart, the minimum sum of absolute deviations (regression errors), denoted SAD = Σ_{i=1}^{n} |y_i − ŷ_i|, is shown for fixed hinges found by exhaustive search; for each H, an LP problem was solved.
Figures 2.2, 2.3, and 2.4 exhibit some features common to L1 and QR estimation of piecewise linear regression functions. First, the SAD functions charted are discontinuous at H = min(x) and H = max(x). Second, all derivatives of SAD fail to exist at the distinct values H = x and at other values as well. Third, SAD is piecewise linear. Fourth, local maxima and minima can occur. Fifth, for any fixed H, SAD is found in under 1 s with standard LP software.
See Appendix 2 for the standard LP formulation with known hinges H. While not present in Figs. 2.2, 2.3, and 2.4, it is possible for the global minimum of SAD to occur at multiple values of H; multiple optima are always possible in LP problems. It must be emphasized that Figs. 2.2, 2.3, and 2.4 were obtained by exhaustive search over all fixed values of H in the x range, requiring an LP solution for each H in the range of the x data values.
Fig. 2.2 Minimum SAD versus H for Hudson data 1 [chart of SAD against hinge H]
Fig. 2.3 Minimum SAD versus H for Hudson data 2 [chart of SAD against hinge H]
Fig. 2.4 Minimum SAD versus H for Galton data [chart of SAD against hinge H]
This illustrates the problems involved in finding the global minimum of SAD with various optimum-seeking computer algorithms.
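A sketch of that exhaustive search follows (Python with scipy rather than the LP_Solve package used later in the chapter, an implementation choice of mine; for each fixed H the Appendix 2 LP is solved, and for θ = 0.5 the LP objective equals SAD/2):

```python
import numpy as np
from scipy.optimize import linprog

def l1_objective(X, y, theta=0.5):
    """Appendix 1/2 LP: min sum(theta*e+ + (1-theta)*e-)
    subject to y - X b = e+ - e-, e+ >= 0, e- >= 0, b free."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), theta * np.ones(n), (1 - theta) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    return linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs").fun

def sad_profile(x, y, grid):
    """Minimum SAD at each fixed hinge H, as in Figs. 2.2-2.4."""
    sads = []
    for H in grid:
        z = np.where(x > H, x - H, 0.0)              # known-hinge regressor
        X = np.column_stack([np.ones_like(x), x, z])
        sads.append(2.0 * l1_objective(X, y))        # objective = SAD/2 at theta=0.5
    return np.array(sads)

# Hudson data 1 (compare Fig. 2.2):
# sad_profile(np.arange(1.0, 7.0), np.array([1, 2, 4, 4, 3, 1.0]), np.linspace(1, 6, 51))
```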
Searching for the global optimum H in such data is problematic with automated
search routines such as Gauss–Newton, Newton–Raphson, Marquardt, and many
hybrid techniques available for many years. See Geoffrion (1972) for a good
review of the standard techniques of this type. Such techniques fail when the
model being fit is piecewise linear due to the nonexistence of derivatives of SAD
at many points and/or the existence of multiple local minima.
There are search techniques which do not rely on SAD being well-behaved and
do not utilize any gradient information. These would include the ‘‘amoeba’’ search
routine of Nelder and Mead (1965), ‘‘tabu’’ search due to Glover (1986), simulated
annealing approaches, and genetic algorithm search techniques.2
A number of observations can be made. First, the algorithms of Hudson (1966)
and Hinkley (1969, 1971) solve only the OLS piecewise linear problem. Second,
they provide the maximum likelihood solution if the errors are normally distributed, but they are computationally expensive, are not easily extended to the
multiple regression case, are not practically extended to multiple hinges, and
software is not available for implementation.
2 The author has not applied any of these to piecewise linear estimation.
2.3 A New Algorithm for L1 and QR Piecewise Linear
Estimation
L1 or QR multiple regression with no hinges (one piece) is an LP problem, as reviewed in Appendix 1. The choice of either L1 or QR is dictated by the choice of a fixed parameter θ ∈ [0, 1].
For piecewise multiple regression with one hinge, Eq. (2.1) may be converted to

ŷ = b1′x if x ≤ H;  ŷ = b2′x if x > H    (2.2)

where the hinge H is defined as

{x : (b2 − b1)′x = 0}.    (2.3)

In simple linear piecewise regression, H will be a scalar value. In the piecewise multiple regression case, H will be a hyperplane. Generally, for a continuous piecewise linear fit,

ŷ = max(b1′x, b2′x)  OR  ŷ = min(b1′x, b2′x).    (2.4)

If H is known, an LP problem may be solved for L1 or QR with an appropriate choice of θ ∈ [0, 1]. Multiple known hinges are also LP problems. See Appendix 2.
For L1 or QR piecewise multiple regression with one unknown hinge, there are two possible linear predictors for each observation; this follows from Eq. (2.2). For all observations, a Max or Min operator applies; this follows from Eq. (2.4). Therefore, an overall continuous piecewise linear predictor must satisfy the following:

ŷ = ŷ1 = b1′x for x ∈ S1;  ŷ = ŷ2 = b2′x for x ∉ S1    (2.5)
ŷ = max(b1′x, b2′x)  OR  ŷ = min(b1′x, b2′x)    (2.6)

where S1 is a chosen subset of x. The particular subset chosen in Eq. (2.5) can be described by n binary decision variables B1_i (= 0 if x_i ∈ S1, = 1 if x_i ∉ S1) and can be enforced as linear inequalities in MILP using the Big-M method. The single choice in Eq. (2.6) can be described by a single binary decision variable B1 and can also be enforced as linear inequalities using the Big-M method (Charnes 1952). Combined, the following 8n linear constraints enforce Eqs. (2.5) and (2.6) in any feasible solution to the L1 or QR piecewise linear multiple regression problem with one unknown hinge:
ŷ_i ≤ ŷ1_i + M·B1, ∀i
ŷ_i ≤ ŷ2_i + M·B1, ∀i
ŷ1_i ≤ ŷ_i + M·(1 − B1), ∀i
ŷ2_i ≤ ŷ_i + M·(1 − B1), ∀i
ŷ_i ≤ ŷ1_i + M·(1 − B1) + M·B1_i, ∀i
ŷ_i ≤ ŷ2_i + M·(1 − B1) + M·(1 − B1_i), ∀i
ŷ1_i ≤ ŷ_i + M·B1 + M·B1_i, ∀i
ŷ2_i ≤ ŷ_i + M·B1 + M·(1 − B1_i), ∀i    (2.7)

For large M, it is clear that the constraints in Eq. (2.7) result in the following restrictions on any feasible solution to the L1 or QR piecewise linear multiple regression with one unknown hinge (Table 2.1):
Estimating a piecewise multiple linear regression with L1 or QR and one unknown hinge (two linear pieces) may therefore be expressed as the following MILP problem:

min Σ_{i=1}^{n} (θ·e_i^+ + (1 − θ)·e_i^−)

such that:

ŷ1_i = b1′x_i, ∀i
ŷ2_i = b2′x_i, ∀i
y_i − ŷ_i = e_i^+ − e_i^−, ∀i
e_i^+ ≥ 0, e_i^− ≥ 0, ∀i
the eight Big-M constraints of Eq. (2.7), ∀i

where b1 and b2 are decision vectors unrestricted in sign, B1 is a binary decision variable, and the B1_i, ∀i, are binary decision variables.
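A sketch of this MILP in a modeling language follows (Python with the PuLP package, an implementation choice of mine; LP_Solve, GAMS, or Solver, discussed in Sect. 2.5, would serve equally well). It encodes the objective, the definitional constraints, the eight Big-M inequalities of Eq. (2.7) and, assuming x is sorted in non-descending order, the search-space reduction B1_i ≥ B1_{i−1} discussed below.

```python
import pulp

def piecewise_l1_milp(x, y, theta=0.5):
    """One unknown hinge, two pieces; x assumed sorted non-descending."""
    n = len(y)
    M = 2 * max(abs(v) for v in y) + 1          # M > 2|y_i|, as recommended
    prob = pulp.LpProblem("piecewise_L1", pulp.LpMinimize)
    b1 = [pulp.LpVariable(f"b1_{j}") for j in range(2)]  # piece 1: intercept, slope
    b2 = [pulp.LpVariable(f"b2_{j}") for j in range(2)]  # piece 2
    ep = [pulp.LpVariable(f"ep{i}", lowBound=0) for i in range(n)]
    em = [pulp.LpVariable(f"em{i}", lowBound=0) for i in range(n)]
    yh = [pulp.LpVariable(f"yh{i}") for i in range(n)]
    B1 = pulp.LpVariable("B1", cat="Binary")             # max-versus-min choice
    Bi = [pulp.LpVariable(f"Bi{i}", cat="Binary") for i in range(n)]
    prob += pulp.lpSum(theta * ep[i] + (1 - theta) * em[i] for i in range(n))
    for i in range(n):
        y1 = b1[0] + b1[1] * x[i]                        # yhat1_i
        y2 = b2[0] + b2[1] * x[i]                        # yhat2_i
        prob += y[i] - yh[i] == ep[i] - em[i]
        # the eight Big-M inequalities of Eq. (2.7)
        prob += yh[i] <= y1 + M * B1
        prob += yh[i] <= y2 + M * B1
        prob += y1 <= yh[i] + M * (1 - B1)
        prob += y2 <= yh[i] + M * (1 - B1)
        prob += yh[i] <= y1 + M * (1 - B1) + M * Bi[i]
        prob += yh[i] <= y2 + M * (1 - B1) + M * (1 - Bi[i])
        prob += y1 <= yh[i] + M * B1 + M * Bi[i]
        prob += y2 <= yh[i] + M * B1 + M * (1 - Bi[i])
        if i > 0:
            prob += Bi[i] >= Bi[i - 1]   # ordering restriction: search space n+1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    c = [v.value() for v in b1 + b2]                     # [b01, b11, b02, b12]
    hinge = (c[2] - c[0]) / (c[1] - c[3])                # Eq. (2.3), scalar case
    return c, hinge, 2 * pulp.value(prob.objective)      # objective = SAD/2 at theta=0.5
```

Run on the Hudson or Galton data, the returned coefficients, hinge, and SAD can be checked directly against Tables 2.2 and 2.3.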
Table 2.1 Results of enforcing Eq. (2.7) for large M

B1    B1_i    Result
0     0       ŷ_i = min(ŷ1_i, ŷ2_i) = ŷ1_i
0     1       ŷ_i = min(ŷ1_i, ŷ2_i) = ŷ2_i
1     0       ŷ_i = max(ŷ1_i, ŷ2_i) = ŷ1_i
1     1       ŷ_i = max(ŷ1_i, ŷ2_i) = ŷ2_i
This MILP problem has 5n + 4 continuous decision variables, n + 1 binary decision variables, and 11n constraints. The choice between L1 and QR piecewise linear regression is made by the choice of θ ∈ [0, 1]. From the notation, the b vectors may be of arbitrary dimension, permitting piecewise multiple linear regressions to be estimated with L1 or QR. If desired, some of the equality constraints may be combined to modestly reduce the number of constraints and continuous variables, but in practice the computation time depends mostly on the number of binary decision variables. M must be a large positive number suitably chosen by the user. Too small a value will produce incorrect results; too large a value will cause numerical instability in MILP software. M > 2|y_i|, ∀i, provides a reasonable value for M.
The binary search space has 2^{n+1} combinations to search and is a major factor in execution time for MILP if the sample size is large. For the Hudson data, n = 6 and the binary search space has only 128 combinations; execution time is under 0.1 s for the MILP formulation. For the Galton data, n = 928 and the binary search space has 4.538E+279 combinations to search; execution time takes several days for the MILP formulation. To dramatically reduce execution time for large n, it is wise to recognize any a priori restrictions on the two pieces. Often, such a priori restrictions are quite weak.
The x values for the Galton data, for example, may be arranged in non-descending order. Below the estimated hinge, all cases must have one of the two linear predictors; above the estimated hinge, all cases must have the other predictor. This means that, without loss of generality, the constraint B1_i ≥ B1_{i−1}, i = 2, …, n, may be imposed, which reduces the binary search space to size n + 1 rather than 2^{n+1}. With this additional linear constraint added to the MILP, the Galton piecewise linear estimation is solved in about 20 min rather than several days.
The MILP solution does not directly produce an estimate of the hinge. The definition of the hinge in Eq. (2.3) may be used to produce this estimate from the MILP solution. In the case where the two b vectors are of dimension two (piecewise linear regression with a constant term and scalar x values), this solution is a scalar value. Generally, when the two b vectors are of dimension p, the solution of Eq. (2.3) will be a hyperplane of dimension p − 1, as described by Breiman (1993); there is no meaningful scalar hinge value for p > 2, and the hinge may be defined as any scalar multiple of the difference between the two b vectors.
Table 2.2 MILP on the Hudson (1966) data

Estimates            x = (1,2,3,4,5,6)      x = (1,2,3,4,5,6)
                     y = (1,2,4,4,3,1)      y = (1,2,4,7,3,1)
                     (Hudson 1)             (Hudson 2)
b01                  10.000                 14.500
b11                  -1.500                 -2.250
b02                  -0.500                 -0.500
b12                  1.500                  1.500
SAD                  1.000                  2.250
Estimated hinge      3.500                  4.000
Execution time (s)   0.043                  0.037
At the time of the Hinkley (1969, 1971) and Hudson (1966) papers, linear programming was in its infancy. Hillier and Lieberman (1970) noted at that time, ''Some progress has been made in recent years, largely by Ralph Gomory, in developing [algorithms] for [integer linear programming].'' Even 6 years later, Geoffrion (1972) observed, ''A number of existing codes are quite reliable in obtaining optimal solutions within a short time for general all-integer linear programs with a moderate number of variables—say on the order of 75 … and mixed integer linear programs of practical origin with [up to 50] integer variables and [up to 2,000] continuous variables and constraints are [now] tractable.'' It is not surprising that Hudson and others in the mid-1960s were not looking at alternatives to OLS.
At present, large MILP problems may be handled. Excellent software is
available that can handle MILP for problem sizes limited only by computer
memory limits. The next section reports some computational results. Appendix 3
shows how the MILP formulation is easily extended to more than one unknown
hinge.
2.4 Computational Results
Table 2.2 shows that solutions from the MILP formulation are correct for both Hudson data sets. The recommended values of M from Sect. 2.3 were used, and the suggestion there for reducing the size of the binary search space to n + 1 = 7 was also incorporated. There is complete agreement with Figs. 2.2 and 2.3, which were produced by exhaustive manual search.
Table 2.3 shows that the MILP solution is correct for the Galton data. Again, the recommended values of M from Sect. 2.3 were used, and the ordering constraint reduced the binary search space to size n + 1 = 929. There is complete agreement with Fig. 2.4, which required exhaustive manual search.
Table 2.3 MILP on the Galton data

Estimates            Galton
b01                  67.5
b11                  0
b02                  33.9
b12                  0.5
SAD                  1160
Estimated hinge      67.2
Execution time (s)   1339.28
2.5 Computer Software
Implementing the new algorithm depends on computer software, of which much is now available. Probably the most widespread is Solver in Excel on Windows and Macs, but it is limited in this formulation to n = 48 cases unless one upgrades to more expensive versions.
LP_Solve is free and capable up to the memory limits of a computer; this
package may be imported into R, Java, AMPL, MATLAB, D-Matrix, Sysquake,
SciLab, Octave, FreeMat, Euler, Python, Sage, PHP, and Excel. The R environment is particularly notable for its large number of statistical routines. GAMS and
other commercial packages are also available for the OLS formulation. The
LP_Solve program has an easy IDE environment for Windows, not requiring any
programming skills in R, SAS, etc. All computation in this section was done using
LP_Solve.
2.6 Conclusions
Piecewise linear estimation is important in many studies. This chapter develops a
new practical MILP algorithm for such estimation which is appropriate for
piecewise linear L1 and QR estimation; it may be extended to OLS estimation by
MIQP by changing the objective function from linear to quadratic.
Software is widely available to implement the algorithm. Some is freeware,
some is commercial, and some is usually present on most users’ Excel platform
(Solver), but the latter is quite limited in sample sizes.
Statistical testing of piecewise linear estimators with L1 and QR is not discussed in this chapter but is an important topic for future research. It is plausible
that in large samples, the asymptotic theory of L1 and QR will apply.
Since the piecewise linear L1 and QR estimates will always produce better (or
no worse) fits than standard linear models, it is suggested that all previous studies
using standard linear models could be usefully revisited using the approach.
Appendix 1. Standard L1 or QR Multiple Regression
The estimation of a single multiple regression with L1 or QR is the following LP problem:

min Σ_{i=1}^{n} (θ·e_i^+ + (1 − θ)·e_i^−)

such that:

y_i − b′x_i = e_i^+ − e_i^−, ∀i
e_i^+ ≥ 0, ∀i
e_i^− ≥ 0, ∀i
b unrestricted.

In this primal LP problem, the x_i are known p-vectors and the y_i are known scalar values; b is a p-vector of decision variables. For L1, choose θ = 0.5; for QR choose any θ ∈ [0, 1]. This well-known LP formulation has 2n + p decision variables and n linear equality constraints. For this primal LP formulation, duality theory applies, and the dual LP problem is:

max Σ_{i=1}^{n} λ_i y_i

such that:

X′λ = 0
θ − 1 ≤ λ_i ≤ θ, ∀i.

This LP problem has n decision variables, p linear equality constraints, and n bounded variables, so it is usually a bit faster to solve for large n. Importantly, the optimal values in λ may be associated with important test statistics developed by Koenker and Bassett.
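The primal appears in code form in the earlier exhaustive-search sketch (the `l1_objective` function); the dual, which as noted is usually faster for large n, is equally short (Python with scipy, a sketch only, not the chapter's own code):

```python
import numpy as np
from scipy.optimize import linprog

def qr_dual(X, y, theta=0.5):
    """Dual LP of Appendix 1: max sum(lambda_i * y_i)
    subject to X'lambda = 0 and theta - 1 <= lambda_i <= theta."""
    n, p = X.shape
    res = linprog(-np.asarray(y),                 # linprog minimizes, so negate
                  A_eq=X.T, b_eq=np.zeros(p),
                  bounds=[(theta - 1.0, theta)] * n, method="highs")
    return res.x   # optimal lambda, the basis for Koenker-Bassett test statistics
```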
Appendix 2. L1 or QR Piecewise Multiple Regression
with Known Hinges
With one known hinge, Eq. (2.2) describes the predictor and Eq. (2.3) defines the hinge. Let x be a p-vector of known values of independent variables. Typically, the first element of x is unity for a constant term in the multiple regression. The hinge given by Eq. (2.3) will then be a p-vector, H, which is here assumed known. Define the p-vector z element-wise by

z = 0 if x ≤ H;  z = x − H if x > H,

with individual calculations for each element of x and H. Since x and H are known, z has known values. This results in the LP problem:

min Σ_{i=1}^{n} (θ·e_i^+ + (1 − θ)·e_i^−)

such that:

y_i − b1′x_i − b2′z_i = e_i^+ − e_i^−, ∀i
e_i^+ ≥ 0, ∀i
e_i^− ≥ 0, ∀i
b1, b2 unrestricted.

For more than one known hinge, this LP can be easily extended; simply add additional b vectors and additional z vectors for each additional hinge to the formulation.
Appendix 3. L1 or QR Piecewise Multiple Regression
with Unknown Hinges
The solution for H = 1 hinge and two pieces is clearly found with the MILP formulation in Sect. 2.3. Let this solution be denoted by ŷ_i = ŷ(1)_i, ∀i [with notation changes to Eq. (2.7)], which chooses one of the linear pieces (ŷ1_i, ŷ2_i) as the regression for each i. For H = 2 hinges, there are three possible pieces (ŷ1_i, ŷ2_i, ŷ3_i). This reduces to a choice between one of two linear pieces (ŷ3_i, ŷ(1)_i), and a second set of binary variables and constraints such as Eq. (2.7) (with notation changes) enforces this choice to solve the problem for H = 2. This solution can be denoted by ŷ(2)_i, ∀i. This inductive argument can be continued for H = 3, 4, etc. For any number of hinges, an MILP formulation can be created with H(n + 1) binary variables, the main determinant of computing time.
References
Bassett G, Koenker R (1978) Asymptotic theory of least absolute error regression. J Am Stat
Assoc 73:618–622
Breiman L (1993) Hinging hyperplanes for regression, classification, and function approximation.
IEEE Trans Inf Theory 39(3):999–1013
Charnes A (1952) Optimality and degeneracy in linear programming. Econometrica 20(2):160–170
Charnes A, Cooper WW, Ferguson RO (1955) Optimal estimation of executive compensation by
linear programming. Manage Sci 1:138–151
Cogger K (2010) Nonlinear multiple regression methods: a survey and extensions. Intell Syst
Account Finance Manage 17:19–39
Galton F (1886) Regression towards mediocrity in hereditary stature. J Anthropol Inst Great Br
Irel 15:246–263
Geoffrion A (1972) Perspectives on optimization. Addison-Wesley, Reading
Glover F (1986) Future paths for integer programming and links to artificial intelligence. Comput
Oper Res 13:533–549
Hillier F, Lieberman G (1970) Introduction to operations research. Holden-Day, San Francisco
Hinkley D (1969) Inference about the intersection in two-phase regression. Biometrika
56:495–504
Hinkley D (1971) Inference in two-phase regression. J Am Stat Assoc 66:736–743
Hudson D (1966) Fitting segmented curves whose join points have to be estimated. J Am Stat
Assoc 61:1097–1129
Koenker R, Bassett G (1978) Regression quantiles. Econometrica 46(1):33–50
Nelder J, Mead R (1965) A simplex method for function minimization. Comput J 7:308–313
Taylor L, Houthakker H (2010) Consumer demand in the United States: prices, income, and
consumer behavior. Kluwer Academic Publishers, Netherlands
Wachsmuth A, Wilkinson L, Dallal G (2003) Galton’s Bend: a previously undiscovered
nonlinearity in Galton’s family stature regression data. Am Stat 57(3):190–192
Part II
Empirical Applications: Information and
Communication Technologies
Chapter 3
‘‘Over the Top:’’ Has Technological
Change Radically Altered the Prospects
for Traditional Media?
Robert W. Crandall
In the last few years, consumers have found it increasingly easy to access content
through the internet that they would previously have obtained from broadcasters,
through cable television or satellite television, or on CDs, DVDs, and printed
materials. In addition, they have access to a wide variety of user-generated content
and social-networking sites to occupy time that once might have been devoted to
traditional media. Despite these dramatic changes in access to media, however,
most people still obtain much of their entertainment and information from rather
traditional sources, such as cable and broadcast television networks, newspapers,
magazines, or their online affiliates. But the owners of these media are now justifiably worried that this traditional environment is about to be disrupted, perhaps
in a dramatic fashion.
Given the widespread diffusion of high-speed broadband internet services
through both fixed-wire and wireless networks, consumers can now bypass conventional distribution channels for receiving content—even video content. Thus,
the internet poses a threat not only to media companies but to the traditional video
distributors as well. In addition, copyright owners obviously fear that such bypass
creates the opportunity to engage in ‘‘piracy’’—the avoidance of paying for the
copyrighted material—but they are also concerned that a change in distribution
channels could redound to their disadvantage, as consumers narrowly target their
choices of entertainment and information and move away from traditional media
products.
The diffusion of broadband services has also stimulated the growth of new
media, such as blogs, social-networking sites, or messaging sites that allow consumers to exchange information, photos, video clips, and sundry other matter.
These new media are of recent origin, but they may have begun to compete
strongly with conventional media for the consumer’s limited time, thereby posing
R. W. Crandall, The Brookings Institution, 39 Dinsmore Rd, PO Box 165, Jackson, NH 03846, USA. e-mail: rcrandall@brookings.edu
a different threat to established media companies—namely ‘‘going around’’ their
products rather than obtaining these products for a fee ‘‘over the top’’ of conventional distribution channels or even illicitly through piracy over the internet.
This paper addresses the impacts of these changes on the participants in the
traditional media sector—firms that produce and distribute audio, video, and print.
Recent trends in consumer expenditures on electronic equipment and media are
reviewed. Empirical evidence on how changes in equipment and access to media
have affected consumers’ management of their time is studied. Finally, the effects
of all of these changes on the economic prospects of a variety of firms engaged in
traditional communications and media are examined. While the market participants may currently recognize the threats posed to traditional media companies,
the disruptions have been relatively modest thus far and have had little apparent
effect on the financial market’s assessment of the future of media companies.
3.1 Consumer Expenditures on Electronic Equipment
and Media
In their seminal study of consumer demand, Houthakker and Taylor (1966, 2010)
stressed the importance of dynamic considerations in empirical estimates of
consumer expenditures. They demonstrated the importance of psychological habit
formation in driving consumer expenditures on everything from food to professional services. An important issue in their study was the rate at which the ‘‘stock’’
of each category of consumption depreciates. Given the recent rate of technological change, particularly in the dissemination of information, the depreciation
rates on many categories of consumer expenditure may have accelerated markedly.
This is particularly likely for consumer electronics and complementary products,
such as software and media, as new devices and technologies replace old ones at a
staggering rate. On the other hand, if the improved devices are simply used to
access much of the same material more conveniently, rapidly, or with higher
quality results, the expenditures on complementary media may be much more
stable, that is, subject to a lower depreciation rate.
3.2 Consumer Expenditures
The cellular telephone was introduced to consumers in 1983, just after the introduction of the desktop personal computer. Both products are still purchased by
consumers, but no one would compare their current versions with those introduced
in the early 1980s. Along the way, cable television blossomed after being
Fig. 3.1 Real personal consumption expenditure-electronic equipment. Source BEA
deregulated successively in 1979 and 1986,1 and the commercial internet emerged
by 1994 as a major communications medium for households and businesses. The
widespread use of the internet allowed consumers to share music files, often over
peer-to-peer services that avoided copyright payments, to the detriment of compact
disks (CDs) music sales. Soon thereafter, much higher speed internet connectivity
began to be deployed, and the speed of internet connections through fixed and
mobile devices has continued to increase ever since. This increase in speed has led
to the development of much more data-intensive video services that can be
accessed over a variety of devices, including desktop computers, digital video
recorders (DVRs), and mobile devices—‘‘smart’’ cell phones, iPods, laptops, and
the new computer tablets.
3.2.1 Equipment
It is difficult to capture the stunning development of consumer electronics technology through simple counts of each type of device or annual sales revenues
because succeeding generations of each device—personal computers, laptop computers, cell phones, etc.—have provided dramatic improvements in the amount of
1 See Crandall and Furchtgott-Roth (1996) for estimates of the consumer benefits of cable television deregulation.
information that can be processed and the speed with which it is made available to
the consumer. However, a simple graphic can help. The Bureau of Economic
Analysis (BEA) estimates nominal and real personal consumer expenditures (PCE)
by product category. Their estimates of annual real consumption expenditures on
various type of electronic equipment are shown in Fig. 3.1. Clearly, real consumer
expenditures on computers and related products began to soar in the early 1990s, but,
more surprisingly, real expenditures on video equipment showed a similar, meteoric
rise beginning in the late 1990s. Both series increased approximately sevenfold
between 2000 and 2010, while real spending on telecom equipment, including cell
phones, increased ''only'' slightly more than 300 % in the same period,² and real
spending on audio equipment rose by just 120 %.
The reason for the explosive growth in real consumer expenditures on
these types of equipment is obvious: rapid technical change led to sharp reductions
in prices as captured in the PCE deflators for each category. Nominal spending on
video equipment actually increased only about 50 % between 2000 and 2010, and
nominal spending on telecommunications equipment rose by slightly more than
100 %. The PCE deflators for video and computer equipment fell by more than
80 % in this short period, reflecting the rapid changes in television and computer
screens, continuing improvements in processor speeds, and the increasing
sophistication of a variety of devices such as digital video recorders, hard drives,
and even printers. These improvements led to rapid obsolescence of equipment
bought in the 1990s or even the early 2000s, inducing consumers to replace their
outdated equipment and enjoy the new functionalities of the replacements.
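The arithmetic linking the nominal and real series is worth making explicit. Real spending is nominal spending deflated by the PCE price index, so, using the rounded video-equipment figures above,

$$ \frac{\text{real}_{2010}}{\text{real}_{2000}} = \frac{\text{nominal}_{2010}/\text{nominal}_{2000}}{\text{deflator}_{2010}/\text{deflator}_{2000}} \approx \frac{1.5}{0.2} = 7.5, $$

that is, a roughly 50 % rise in nominal spending combined with a deflator that fell by more than 80 % yields approximately the sevenfold increase in real spending reported above.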
3.2.2 Media
Some electronic devices cannot be used or fully enjoyed without access to various
media. Personal audio equipment requires access to recorded music or to events
broadcast over radio stations or satellites. The evolution of recorded music from
vinyl records to cassette tapes to compact disks greatly facilitated the distribution
of such music and arguably facilitated an improvement in the consumer’s listening
experience. In recent years, much of the recorded music played over digital audio
equipment has been downloaded over the internet, often without any compensation
paid to the owner of the copyrighted music or the performers.
A similar transformation has occurred in video media. Prior to the development
of the video cassette recorder in the late 1970s, most video was distributed to
households through off-air broadcasts or through coaxial cable. Rapid improvements in digital technology led to the introduction of DVDs around 1997 and more
recently (in 2006) to the introduction of high-definition DVDs and Blu-ray disks.
² Some telecommunications equipment PCE is recorded in telecommunications services because of the handset subsidies offered by carriers.
Fig. 3.2 Real personal consumption expenditure-media. Source: BEA
As broadband internet speeds increased, consumers also began to download video
over the internet, including full-length movies, through services such as Netflix.
Finally, the development of the digital camera and extremely inexpensive
storage capacity has essentially obliterated the photographic processing sector.
Since 1989, real consumer spending on photographic processing and finishing has
declined by more than 60 %.
It comes as no surprise that consumer expenditures on video media, including
video delivered over cable television, have soared as the new equipment has
dramatically improved the home-viewing experience. As Fig. 3.2 shows, consumer expenditures on video continued to increase through 2008 while audio expenditures have flat-lined since the late 1990s.

Fig. 3.3 Share of real PCE: electronic equipment, telecom services, and media. Source BEA
Note, however, that real spending on video media has slowed dramatically
since 2008 in the wake of sharply lower economic growth. Indeed, total real
personal consumption grew only 0.3 % between 2007 and 2010. At the same time,
copyright owners have begun to worry that higher and higher broadband speeds
are facilitating much more video streaming and, potentially, video piracy. Whether
piracy or the recent recession is to blame, DVD rentals and purchases declined by
more than 15 % in nominal dollars between 2007 and 2010.³
The flattening out of spending on audio media occurred much earlier. As
Fig. 3.2 shows, the growth in real PCE on audio essentially stopped between 1999
and 2005. This abrupt change in the trajectory of consumer spending on audio
media occurred as broadband was beginning to spread rapidly and just before the
iPod was introduced. More than 200 million iPods were sold in less than seven
years as virtually everyone under the age of 50 began to use this device to listen to
music. Despite this successful innovation, there was no rebound in audio media
PCE until 2006; the likely culprit is music piracy over peer-to-peer networks on the internet (Liebowitz 2008). Surprisingly, growth resumed in 2006; real audio PCE then stabilized in 2009 and soared by 9 % in 2010.

³ It is not clear whether streaming revenues for Netflix are included in this category by BEA.

Fig. 3.4 Nominal PCE shares: electronic equipment, telecom services, and media. Source BEA
Despite the continuing growth in PCE on video media (at least, through 2008),
it is clear that real spending on equipment has soared relative to spending on media
(see Fig. 3.3). Since 1995, real spending on media has also lagged real expenditures on telecom services. Of course, much of this startling divergence has been
due to the incredible decline in the price of video and computer equipment. If one
focuses on nominal spending shares, the pattern—shown in Fig. 3.4—is decidedly
different.
The share of nominal PCE devoted to spending on media has declined rather
steadily since 2001, perhaps an ominous portent for the large, traditional media
companies, but the share of nominal spending on equipment continued to rise until
the recent recession. Surprisingly, nominal spending on telecom services has risen
by more than spending on electronic equipment or media in the last 20 years. In
part, this is due to the fact that a substantial share of mobile phone expenditures are
buried in the PCE category of telecommunications spending because wireless
companies subsidize handset purchases. The similarity in the real and nominal
trends in telecommunications spending since 1991, as shown in Figs. 3.3 and 3.4,
reflects the fact that the BEA personal consumption deflator for telecommunications services is virtually unchanged since 1991, surely a strange result given the
sharp reduction in the prices of cellular service and the substantial improvement in
landline broadband speeds in recent years.
3.3 Consumer Adaptation to New Media Opportunities
At this juncture, it would be difficult to estimate the effects of the new distribution
options presented by the internet on consumer allocations of time or spending. It
would certainly require detailed household data on the amount of time or income
spent listening to, watching, or reading various types of content delivered over traditional channels or over the internet (principally by ''streaming''), the prices paid (if any) for such content, and the consumer's own
transformation of this content. Because some or perhaps much of this activity may
involve copyright infringement, such data would be difficult to obtain at best.⁴ This paper simply sketches the various possibilities of shifts in media consumption and asks whether firm, industry, or equity-market data support the existence of major changes at this time.
3.4 The Internet’s Contribution to Changing Patterns
of Consumer Use of Media
The changes in consumer behavior induced by the recent wave of technological
changes that are likely to have a major impact on current media companies and
downstream distributors take the following forms:
Consumers use the internet to be more selective in targeting the information or entertainment content that they read, watch, or listen to, thereby perhaps reducing their use of
traditional media. This is often described as going ‘‘over the top’’ to access content that
had been delivered through traditional distribution channels. An example would be targeted streaming of video content through Netflix or Hulu, which may induce some consumers to drop their HBO or Showtime subscriptions or even to cancel their cable/satellite
subscriptions altogether.
Consumers shift from reading, listening, and viewing traditional media content to
developing their own content or otherwise interacting with others through various websites, blogs, tweets, and homemade videos that substitute for movies, CDs, and traditional
television programming.
Consumers find ways to use the internet to obtain copyrighted printed, audio, and video material without paying copyright fees, i.e., through piracy.⁵
⁴ For a first step in this direction, see Banerjee et al. (2011).
⁵ See Oberholzer-Gee and Strumpf (2007) and Liebowitz (2008) for studies of audio file sharing that reach different conclusions.
3.5 Going ‘‘Over the Top’’
Using the internet for targeted purchases of media products is hardly new. Consumers have been able to use the internet to bypass newspapers for selected news
articles and ‘‘want ads’’ for many years. As a result, most newspapers have had to
grapple with the choice between offering their product over the internet to subscribers for a prescribed fee or simply providing all or part of it for free, using online advertising to defray its cost. Similarly, music record companies once faced a
choice of offering their products over the internet, through such entities as iTunes,
emusic.com, Amazon MP3, or mp3.com, or through other distribution channels,
but today virtually all companies attempt to do both.
Most attention is now focused on video content. As broadband speeds steadily
rise, consumers can increasingly view almost any video content on their traditional
television receivers, their desktop computers, or a variety of mobile devices.
Traditional video distributors, such as cable television companies and broadcast
satellite operators, have long had the option of offering arrays of channels for fixed
monthly fees, ‘‘a la carte’’ channels of specialized content, or motion pictures and
various live events on a pay-per-view basis. But now consumers have the option of
using broadband internet connections to access individual motion pictures or
television series episodes through such entities as Netflix or Hulu. As these entities
gain subscribers, the cable, satellite, and telecom video distributors are forced to
respond with competitive pay-per-view or internet streaming services.
Thus far, most over-the-top video involves content that was originally produced
for delivery through motion picture theaters, broadcast television, or cable television/satellite services. Entities such as Google, Netflix, Apple, or Yahoo! will
attempt to bypass the traditional distributors altogether and negotiate directly for
video content for delivery to consumers over the internet; indeed, this has already
begun. Alternatively, the large media companies, say Disney or NewsCorp, could
develop their own internet-based delivery services.
3.5.1 User-Generated Content
The widespread availability of high-speed internet connections has stimulated the
development of user-generated video content (UGC) over sites such as YouTube and
Facebook. Most of this content is not comparable with professionally produced
video entertainment, but it is a substitute for the more polished video productions
because it competes for the consumer’s limited leisure time. But, as Arewa (2010)
explains, it could also be a complement to this more professional material as users
download video clips from copyrighted material to include in UGC that they then
post on a variety of internet sites. This user-generated content may actually enhance
the value of the copyrighted material from which it is extracted.⁶

⁶ Liebowitz (2008) makes a related point in his analysis of online music downloads.
3.5.2 Piracy
Finally, there is the issue that has occupied copyright owners for most of the last
decade—online piracy. This issue has been most acute for the audio-recording
industry, which has witnessed a rather severe decline since 1999. There are conflicting studies on whether this decline is mostly related to copyright-infringing uses of peer-to-peer sites or is simply a reflection of new products, such as DVDs, replacing CDs.⁷ Piracy through illegal copying of DVDs had been a major
issue for motion picture companies even before the availability of broadband
internet connections allowed it to shift to online streaming.
A recent study undertaken for a major media company, NBC/Universal, concluded that nearly one-fourth of all internet traffic involves copyright infringement.⁸ However, even this study concluded that only 1.4 % of internet use
involves infringing use of video material.
3.6 The Changing Allocation of Consumers’ Time
Given the rapid changes in the electronic equipment purchased by consumers, one
might expect to find that consumers have made major changes in their daily lives
to adjust to the opportunities that this new equipment provides. There is a widespread perception that many are reducing their use of traditional media as they
increase the time they spend accessing the internet through laptops or mobile
devices, including the new tablets. Indeed, in a recent speech, FCC Chairman
Julius Genachowski offered the following startling observation:
An average American now spends more of their (sic.) free time online than they do
watching TV.⁹
If true, this would indeed suggest that traditional cable, satellite, and broadcast
television has declined remarkably, but the available data do not support Chairman
Genachowski—at least, not yet.
Bruce Owen (1999) addressed the question of whether the internet then posed a
threat to conventional television. He concluded that it did not for two reasons.
First, the technology of that era made it unlikely that video streaming over the
internet could be cost competitive with cable television and broadcasting. Owen
noted the limitations of the copper-wire and coaxial cable networks of the time, but
he allowed that technology could change sufficiently to ease this constraint. The second reason that Owen offered to support his pessimistic view is that television viewing is a ''passive'' activity. Couch potatoes do not want to have to struggle to receive their entertainment over the internet; they prefer to select passively from a menu offered by traditional video distributors. Of course, he reached these conclusions before Facebook or even Netflix and surely before 50 Mbps broadband speeds.

⁷ Oberholzer-Gee and Strumpf (2007) and Liebowitz (2008).
⁸ Envisional Ltd. (2011).
⁹ Speech given at the National Association of Broadcasting Convention, Las Vegas, NV, April 11, 2011, available at http://transition.fcc.gov/Daily_Releases/Daily_Business/2011/db0412/DOC-305708A1.pdf.

Table 3.1 Average hours per week spent watching TV, using the internet, and watching videos on mobile phones, by age

                                                  18–24   25–34   35–49   50–64   65+
Traditional TV                                     26.5    30.5    36.4    44.9    49.3
Time-shifted TV                                     1.5     3.2     3.2     2.8     1.7
Using internet (at home and work)                   5.5     8.5     8.5     7.3     3.9
Video on internet                                   0.8     1.0     0.6     0.4     0.2
Video on mobile phones (per mobile subscriber)      0.25    0.2     0.1     0.03   <0.02

Source AC Nielsen (2011)
More than a decade later, the available data appear to support Owen’s conjecture—Americans still spend far more time watching traditional TV than using
the internet. A Nielsen survey of households in the first quarter of 2011 found that
the average adult spends about five times as much time watching TV as using the
internet.¹⁰ Table 3.1 provides the details by age category. Note that video over the
internet and video over mobile phones constitute a small share of total video
viewing, but this share is rising rapidly. Nielsen reports that online video viewing
rose 34 % between the first quarter of 2010 and the first quarter of 2011, while the
number of persons accessing video over smartphones rose by 41 % in the same
period.
In the first quarter of 2011, 288 million people (age 2+) watched TV in the
home, 142 million watched at least some videos over the internet, but only
29 million watched video over a mobile phone. While traditional TV still dominates overall video viewing, new forms of video access are increasing rapidly. One research firm, SNL Kagan, estimates that 2.5 million households had substituted ''over-the-top'' video over the internet for their pay—cable, satellite, or telco-provided—television service and predicts that this total will rise to 8.5 million households by the end of 2015.¹¹
The Nielsen data on internet use would appear to be conservative if one
believes a variety of other sources, particularly those providing social-media
services. For instance, Facebook claimed 700 million active users in 2011, of
whom 225 million were in the United States.¹² It also claimed that the average user spends more than 15.5 h per month on the Facebook site. If this is an accurate representation for U.S. users, it suggests that the average adult spent perhaps 40 % of his or her time on the internet (as measured by Nielsen) on the Facebook site, surely a remarkable share, if true.

¹⁰ The latest (2009) BLS Time Use Survey finds that the average adult (aged 15 and over) watches television only 2.82 h per day, or only 20 h per week.
¹¹ ''Online Video Will Replace Pay TV in 4.5 Million Homes by Year-End,'' The Hollywood Reporter, July 20, 2011.
¹² These data are found at www.facebook.com/press/info.php?statistics.
The second most popular social-media site, Twitter, claims similarly staggering
usage. Twitter began in March 2006. Five years later, it claimed that the number of ''tweets'' sent over its site in a single day was 170 million.¹³ Apparently, only
about 7 % of Americans are considered active users of Twitter, or about 21 million overall. Nevertheless, Twitter claimed that an average of 460,000 new users
join every day. Another popular site, YouTube, claimed 490 million users
worldwide. These users spend an average of nearly 6 h per month on the site,
viewing an average of 188 pages per month.
3.7 The Effects on Industry Participants
What has been the effect of the changes in consumers' use of electronic equipment and their access to traditional media on the firms participating in the media/communications industries? Specifically, have these shifts in consumer behavior begun to erode the values of cable television, satellite, and broadcasting companies or traditional media producers—those producing motion pictures, television programming, and music?¹⁴ In this section, the revenue growth and stock-market
performance of telecom, cable/satellite, and media firms are examined to see how
they have fared as the internet has fostered the development of new distribution
channels for traditional and new media. It focuses on whether the financial markets
have begun to reduce traditional media-company valuations as these changes
unfold.
3.7.1 Industry Revenue and Output
The Census Bureau collects annual data on revenues and costs for firms in service
industries. These data have been reported for each of the media and communications industries, based on the North American Industry Classification System
(NAICS) classifications, for 1998–2010 in the Service Annual Survey.¹⁵ The revenue data are plotted in Fig. 3.5 for the various media industry categories and in Fig. 3.6 for motion pictures and other video production as well as cable/satellite television and broadcasting.

¹³ These data are found on http://blog.twitter.com/2011/03/numbers.html.
¹⁴ Consumer electronics manufacturing is not addressed in this paper because the focus is on the shifting use of electronic devices and its impact on media companies and distributors. Obviously, consumer electronics companies have generally thrived, particularly those—such as Apple—who have led in developing audio players, smart phones, laptops, and the new tablets.
¹⁵ Available at http://www.census.gov/services/.

Fig. 3.5 Media industry revenues. Source U.S. Census Bureau
Several trends are immediately obvious. First, nominal audio production and
distribution revenues (record-company revenues) have not changed since 2000,
indicating that, not surprisingly, real record company revenues have declined
steadily over this period. On the other hand, video and motion picture revenues
grew at a remarkably steady rate from 1998 to 2008 before declining during the
recent recession. During this period, these video/motion picture company revenues
grew by 72 % while the price level, as measured by the Consumer Price Index
(CPI), rose by just 32 %. In 2010-2011, revenues rebounded substantially from
their depressed 2009 levels, suggesting that the 2009 decline was temporary and
not due to substitution of other products over the internet or even a sudden
acceleration of piracy.
By contrast, publishing revenues grew slowly from 1998 through 2005 and have
declined precipitously since 2007. Newspaper, book, and magazine publishers
have been particularly hard hit and have not recovered their 2008–2009 losses; these industries now appear to be in secular decline.
Figure 3.6 once again shows the trend in motion picture/video production
revenues, but this time they are shown with the growth in broadcasting and cable/
satellite company revenues. Note that cable/satellite firm revenues appear to have
grown even more rapidly than motion picture/video producer revenues, but
broadcasting revenues have clearly stagnated as viewers and listeners have shifted
to satellite, cable, and a variety of devices connected to the internet. Cable television’s revenue growth is undoubtedly the result of the increasing importance of
broadband and telephone revenues in cable company revenues, as they began to
offer ‘‘triple play’’ packages to their subscribers.
Fig. 3.6 Video production and distribution industry revenues. Source U.S. Census Bureau
In short, the Census Bureau’s revenue data reveal that motion picture/video
companies’ revenues continued to grow steadily until the recent recession; record
companies have seen little nominal revenue growth over the last decade; and
publishers of books, magazines, and newspapers have suffered a precipitous
decline since 2007. These data, however, suffer from the possibility of double
counting since one company in the production/distribution chain can sell a substantial share of its output through others who are in the same three-digit NAICS
category. If the degree of vertical integration changes over time, this change can
affect the tabulation of revenues even though overall final sales do not change. To
address this possibility, it is useful to examine data on industry value added in
these sectors.
The Commerce Department’s Bureau of Economic Analysis (BEA) estimates
the value added originating in the Information sector which comprises Publishing
(including software), Telecommunications and Broadcasting, Motion Pictures and
Sound Recording, and Information and Data Processing Services.¹⁶ Figure 3.7
shows the growth of nominal value added in Motion Pictures and Sound Recording
and Broadcasting and Telecommunications relative to total GDP.
The share of GDP accounted for by broadcasting (including cable/satellite) and
telecommunications and motion picture/sound recording has been remarkably
stable. The slight decline in motion picture/audio revenues is likely due to the relative decline in record-company sales, but it is impossible to separate motion pictures from audio value added in these data.¹⁷

¹⁶ The BEA data on publishing are ignored because print is combined with software in BEA's value-added data.

Fig. 3.7 Value added/GDP in two information industry subsectors. Source BEA
From these data, one would not deduce that a major upheaval is occurring in the
motion picture/video industries. The Census data, however, confirm the widespread impression that record companies and publishers of newspapers, books, and
periodicals are in decline, with the latter declining sharply in the last three years.
3.7.2 Telecommunications and Cable TV Firms
In the last decade, telecommunications carriers and cable television companies
have begun to compete with one another in earnest. The telecom carriers have built
fiber farther out in their networks—in some cases all the way to the subscriber—so
as to be able to deliver video services. The cable companies, at a much lower
incremental cost, have been able to offer voice services (VoIP). Combined with the
steady expansion of high-powered satellite services, these trends have made the
distribution of video, data, and voice far more competitive than it was in earlier
decades.
Despite this substantial increase in competition, the carriers have done well.
Table 3.2 provides recent market capitalization information for the major telecom
and cable and satellite firms in the United States. The U.S. telecom companies
shown derive an overwhelming share of their income from operations in the U.S.,
and most have substantial investments in infrastructure and little invested in media (programming) despite their recent forays into video distribution. The two largest U.S. telecom companies account for about 80 % of the industry's market capitalization because they are the principal providers of both fixed-wire and mobile (wireless) telecommunications. Despite more than a decade of public policy designed to promote entry by new, fixed-wire competitors and years of regulatory strife surrounding this policy, only a few of these entrants remain. Competition in telecommunications is now focused on interplatform competition between the large telecom companies and the cable/satellite companies as well as from a number of independent wireless companies, such as Sprint/Nextel, Leap, and U.S. Cellular. Note that despite the threat of increased competition posed by the internet to traditional video distribution, the large telecom companies' equities continue to have low betas, suggesting that the markets have not yet perceived a major increase in the risk of holding these equities.

¹⁷ It is possible that this stability is an artifact of BEA's estimation procedure between benchmarks.

Table 3.2 Market capitalization and equity betas for telecom and cable TV firms

Company                            Market capitalization—8/31/2011 (Billion $)   Equity β
Telecom (total)                    333
AT&T                               169                                           0.52
Verizon                            102                                           0.53
Century Link                       21                                            0.75
Sprint-Nextel                      11                                            1.04
Frontier                           7                                             0.76
Windstream                         6                                             0.87
Metro PCS                          4                                             0.52
U.S. Cellular                      4                                             0.90
Level 3                            3                                             1.36
Time Warner Telecom                3                                             1.33
GCI                                0.4                                           0.96
Leap Wireless                      0.7                                           1.32
Clearwire                          0.8                                           1.33
Cincinnati Bell                    0.7                                           1.47
Shenandoah Telecommunications      0.3                                           0.23
Alaska Communications              0.3                                           0.66
Fairpoint Communications           0.2                                           NA
Cable/satellite (total)            140
Comcast (includes NBC)             59                                            1.08
Time Warner Cable                  21                                            0.73
Cablevision                        5                                             1.61
Charter                            5                                             NA
Knology                            0.5                                           1.68
DirecTV                            32                                            0.87
Dish Network                       11                                            1.44
Sirius/XM                          7                                             2.32

Source www.finance.yahoo.com
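For reference (this is the standard textbook definition, not a formula given in the chapter), the equity betas reported in Table 3.2 measure a stock's systematic risk and are conventionally estimated as the slope of a regression of the stock's returns on the market's returns:

$$ \beta_i = \frac{\operatorname{Cov}(r_i, r_m)}{\operatorname{Var}(r_m)}, $$

where $r_i$ is the return on stock $i$ and $r_m$ is the return on a broad market index such as the S&P 500. A beta below 1, as for AT&T (0.52) and Verizon (0.53), indicates below-market systematic risk, which is the sense in which the markets have not yet priced in a major increase in the riskiness of these firms.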
Fig. 3.8 Major telecom company stock prices. Source www.finance.yahoo.com
The cable-satellite companies, on the other hand, have a much lower total market
capitalization even though they also have substantial infrastructure investments as
well as investments in video programming.¹⁸ (The five cable companies shown in
Table 3.2 account for approximately 70 % of the country’s subscribers.) Each of the
national cable television companies now offers video, data (high-speed internet
connections), and voice. The satellite providers are more specialized in video services, but they have grown substantially in recent years. Note that most of the cable/
satellite firms’ equities are viewed as more risky than are the telecom companies’
equities, perhaps because of their investments in media content.¹⁹
In the past few years, the major telecommunications companies—AT&T and Verizon—have enjoyed strong stock-market performance. As Fig. 3.8 shows, both of their common equities have substantially outperformed the S&P 500 since 2005 and even since 2001, the peak of the technology stock market bubble. Figure 3.9 shows that since 2005, two of the three major publicly listed cable companies' equities have also slightly outperformed the S&P 500 (Time Warner Cable is the exception). The
most spectacular performer, however, has been DirecTV, a satellite company which
has wooed a substantial number of subscribers from traditional cable television
companies and derives little income from internet connections.
It is hardly surprising that telecom and cable television companies have performed well in the modern era of the internet. Neither group is now heavily
regulated, and both have benefitted from the public’s seemingly insatiable demand
for bandwidth as well as for voice and video services.
¹⁸ Comcast recently completed its acquisition of 51 % of NBC/Universal for $13.8 billion.
¹⁹ Cox Cable and MediaCom are not included because they are now privately held companies.
Fig. 3.9 Major cable/satellite company stock prices (index, December 2005 = 100; series: Comcast, Cablevision, Time Warner Cable, DirecTV, DISH, and the S&P 500). Source www.finance.yahoo.com
The relatively strong performance of cable television and telecommunications
equities undoubtedly reflects the fact that their fixed-wire and wireless services are
needed to deliver the increasing amount of digital media to consumers, regardless
of whether the content is obtained legally from traditional media companies, is the
result of piracy, or derives from new media organizations.
3.7.3 The Media Companies
Far more relevant for this paper is the recent performance of the ‘‘media’’ industry,
that is, those firms that supply video and audio content to distributors, including
those that distribute over the internet. First, the ''traditional'' media, which have supplied video and audio products to consumers through a variety of distribution channels for decades, are examined. Then, the focus shifts to the new media.
3.7.4 Traditional Media
Table 3.3 lists the major U.S. media companies, including those that own the
major broadcast networks, motion picture companies, and most of the major cable channels.
Table 3.3 Market capitalization and equity betas for major traditional media companies

Company                     Market capitalization 8/31/2011 (Billion $)   Equity β
Traditional media (total)   267
Disney                      63                                            1.24
News Corp                   46                                            1.55
Time Warner                 33                                            1.33
Viacom                      33                                            1.10
NBC/Universal               30 (e)                                        N/A
CBS                         17                                            2.40
Discovery                   17                                            0.76
Liberty Interactive         10                                            2.56
Scripps                     7                                             1.15
Liberty Starz Group         4                                             N/A
Washington Post             3                                             1.07
DreamWorks                  2                                             1.11
New York Times              1                                             1.57
Lions Gate Entertainment    1                                             0.61
Tribune Company             0                                             NA
Media General               <0.1                                          2.89

Source www.finance.yahoo.com
An estimate of the value of NBC/Universal, the owner of a major motion picture distributor and the NBC network and stations, which was recently purchased by Comcast, is included. Not included are the motion picture operations (Columbia Pictures) or record businesses of Sony Corporation because their value cannot be separated from the rest of Sony. Moreover, the other major record companies are not included because, given the decline in the music business, most of the major record labels have been sold or otherwise transferred to companies other than the major media companies.²⁰
Note the substantial market value of these major media companies, even in the
current environment. Most have relatively high equity betas, reflecting the fact that they are considerably riskier than the telecom carriers and many cable companies. It is
also notable that the value of the major newspaper companies—Washington Post,
New York Times, and the Tribune Company—is low. (The most successful U.S.
newspaper, the Wall Street Journal, is owned by News Corporation.)
The substantial market capitalization of the major media firms is obviously
reflected in the movement of their equity prices. As Fig. 3.10 shows, all of the
largest media companies except Time Warner have recently outperformed the
overall equity market as measured by the S&P 500 Index. Prior to 2005, these
companies’ equities more closely tracked the overall market, but in the past few
20
The major record labels have declined substantially. Sony retains the Sony-BMG label.
Warner Music Group was sold to Access Industries for $3.3 billion in 2011; Universal Music
remains with Vivendi, a French company; and EMI was acquired by Citigroup in a prepackaged
bankruptcy in 2011. Citigroup is now attempting to sell EMI for about $4 billion.
52
R. W. Crandall
Fig. 3.10 Leading media company stock prices
years, several of these companies’ equity prices have diverged substantially from
the S&P 500 Index. Nevertheless, there is certainly no evidence that investors are
abandoning these companies in favor of the new internet-based companies, such as
Google, Facebook, or Twitter. This suggests that not only have these companies’
revenues and profits not declined, but that the equity markets are not expecting
such a decline in the foreseeable future.
Given that their stock prices have performed well, it should be no surprise that
the major media companies’ revenues have also held up rather well. Total revenues for Disney, News Corp., Time Warner, Viacom, CBS, Liberty Media , and
NBC/Universal have tracked GDP for the past 10 years, totaling between 0.93 and
1.06 % of the nominal value of GDP. In the most recent year, 2010, these companies’ revenues were equal to 1.04 % of GDP. These companies’ revenues have
grown by 59 % since 2001. Census data show that video and motion picture
company revenues grew by 64 % in the same period. Thus, piracy, the development of new media, or the shift of advertising expenditures to nontraditional forms
of internet advertising have not yet combined to reduce these companies’ revenues
relative to overall economic activity.21
Nor has there been a noticeable shift in the largest companies’ sources of
revenues. In 2001, Disney, NewsCorp, Time Warner, Viacom, and CBS derived 75 % of their revenues from video products; in 2010, they derived 76 % from video.²² And it does not appear that these companies, with the exception of News Corp, have diversified outside the United States. Those reporting revenue data by geographical area continue to realize about 70–75 % of their revenues from North America in the latest year.

²¹ Three of the major media companies, Disney, NewsCorp, and NBC/Universal, own Hulu, a new media company that streams programming over the internet.
Of all the major media companies, only Time Warner has exhibited declining
revenues over the last few years. This is an ironic result, given that Time Warner
was merged into one of the most successful new internet companies, AOL, in
January 2001. When the merger was announced in 2000, the antitrust authorities
were concerned that AOL/Time Warner could dominate the broadband ISP market. After nearly a year’s investigation, the Federal Trade Commission and AOL/
Time Warner entered into a consent decree that required AOL to offer its services
over both cable modems and telco-based DSL connections. Within three years,
Time Warner’s total revenues (excluding Time Warner Cable) began to decline,
largely because of AOL’s declining revenues. Over the next five years, Time
Warner’s nominal revenues would decline by more than 25 %, while the other
media giants’ revenues rose by 27 %. In retrospect, it is now clear that Time
Warner did not need AOL to deliver broadband services over its cable networks;
AOL brought a large narrowband customer base, but most of these customers
could easily migrate to either DSL or cable modem service.
3.7.5 New Media
The newer, largely internet-based media companies that are listed publicly are
shown in Table 3.4. While the traditional media companies in Table 3.3 are largely focused on motion pictures, broadcasting, and cable television programming,
many are beginning to pursue a variety of new media (internet) options. The
companies listed in Table 3.4, however, are those whose origins are largely based
on the internet. Some—Google, Yahoo, and AOL—are largely internet portals that
rely heavily on advertising revenues from their search engines or other internet
sites. Others—such as Amazon and eBay—rely heavily on online commerce while
still others—Facebook, LinkedIn, eHarmony, and Twitter—are best described as social-networking sites.²³ All rely heavily on advertising, potentially draining a
large share of advertising revenues from the traditional media companies. One
could even include Microsoft in Table 3.4 because of its substantial investment in
a search engine, online gaming, Skype, etc., but it would be difficult to determine
the share of its $225 billion in market cap attributable to its new media businesses.
²² AOL is excluded from Time Warner. Because segment data are not available for Liberty Media and NBC/Universal for these years, they are excluded.
²³ Note that these data are for 8/31/2011, substantially before the initial public offering of Facebook stock in 2012.
Table 3.4 Market capitalization and equity betas of new internet-related companies

Company                 Market capitalization (Billion $)   Equity β
Google                  175                                 0.91
Amazon.com              98                                  0.90
Facebook                75                                  Not traded
eBay                    40                                  1.58
Yahoo!                  17                                  0.83
Netflix                 12                                  0.40
Zynga                   10                                  Not traded
Twitter                 8                                   Not traded
LinkedIn                8                                   NA
Pandora                 2                                   NA
IAC/Interactive Corp.   3                                   0.52
AOL                     2                                   NA
eHarmony                0.5                                 Not traded
Zillow                  0.4                                 Not traded
Linden Labs             0.2                                 Not traded

Source www.finance.yahoo.com
Note that these new internet-related companies’ total market capitalization is
even greater than the total market capitalization of the large traditional media
companies shown in Table 3.3. This suggests that the equity markets are assuming
that these companies can attract users, generate new media, and divert substantial
advertising revenues from traditional media companies. Nevertheless, this
expectation has not yet eroded the values of the traditional media companies, as we
have seen.
3.8 Prospective Impacts of Internet-Connected TV
Households
As broadband spreads to a large proportion of U.S. households, new commercial
opportunities arise for delivering video content to consumers. The main impediment to such delivery lies in the household’s television viewing architecture. I can
view a large amount of internet-delivered video material on the screens of my
desktop or laptop computer, but if I want to have a complete viewing experience,
I need to connect the internet to my large-screen, high-definition television
receiver. In fact, this is precisely what many households are now doing through
game consoles or directly connected television receivers. A recent report from
Morgan Stanley provides current data and forecasts on the share of these ''connected households'' (see Fig. 3.11).

Fig. 3.11 Percentage of broadband homes with direct TV connection to the internet
Today, fewer than 25 % of broadband households—or fewer than 18 % of all
households—have a direct connection from their television sets to the internet, but
the share is likely to grow rapidly. As this share grows, new distribution patterns
are likely to arise. The number of potential distributors will increase, creating
greater competition for content. But will these changes lead to a transformation of
video distribution from a cable/satellite model of a variety of tiers of program
channels (plus some a la carte pay-per-view options) to one that provides most
programs on a pay-per-view basis? This, in turn, could shift the current video industry from its reliance on advertising revenues, currently about 60 % of total revenues, to a model much more reliant on subscription revenues. Whatever the outcome, absent an increase in video content piracy, the media companies are likely to gain at the expense of today's major video distributors.
The shift from traditional television, delivered by cable, satellite, or (to a much
lesser extent) broadcast stations to internet delivery through large-screen home
television screens has its antecedents in the use of digital video recorders (DVRs).
Households have used home video recorders—beginning with video cassette
recorders—for years as devices to store or time-shift programming. The newer
DVRs provide for much easier storage and time shifting, and they also facilitate
the bypassing of commercial messages. Now, those interested in exploiting the use
of internet delivery of video, such as Google, are seeking to replace or supplement
these DVRs with new devices that facilitate streaming video content directly into
large family television screens.²⁴
The effect of a shift to internet-connected television sets on distributors and
media companies is difficult to predict. There are a number of possibilities. First,
such a shift may allow consumers to bypass television advertising much more than
they have thus far because of the limited spread of DVRs. This would obviously erode the value of advertiser-supported programming to distributors. Second, this shift may allow media content owners to offer their programming to subscribers on a direct fee basis, further reducing their reliance on advertising. Third, the internet connection will allow content viewers to interact with other sites as they view the programming, thereby increasing advertising opportunities through search engines, banner advertising, or other forms that are presented on the internet.

²⁴ Google's recent decision to buy Motorola Mobility is widely seen as an attempt not only to enter the production of smartphones, but also to use Motorola technology to develop the set-top boxes for its Google TV offering.
Clearly, the recent changes in the delivery of video media have been economically beneficial for the large media companies. Competition for programming
between satellite companies, cable television companies, and now telecommunications carriers has increased these media companies’ share of consumer expenditures on television subscriptions from 32 % in 2000 to 41 % in 2010, according
to Morgan Stanley.²⁵ The shift of this programming to internet delivery will only
add to such competition, thereby further improving the media companies’ bargaining position.
The risk to traditional media that derives from directly connected television sets
is that it erodes their share of the households’ time because it permits viewers to
search for alternative content, even viewer-generated content. Moreover, any closer
connection of households to the internet creates the threat of further erosion of
conventional advertising on television, in print, or elsewhere in favor of internet advertising. Internet advertising is now growing much more rapidly than advertising on all other media, but it still accounts for only about 10–15 % of all advertising.²⁶
At this juncture, the immediate threat posed to traditional media companies by
streaming of video directly from the internet to households’ television sets appears
to be rather minimal. This fact is reflected in the large media companies’ stock
market performance, reviewed above. The threat to the video distributors—cable
television, satellite, and telecom companies—may be a bit more severe. However,
two of these groups—cable television and telecom companies—are still necessary
for households to obtain the high-speed internet connections through which
internet content is delivered. Whether the revenues from increasing demand for
bandwidth to receive this programming offset any loss in revenues from these
firms’ traditional video services remains to be seen.27
3.9 Conclusions
There can be little doubt that the diffusion of high-speed internet services and the
remarkable advances in consumer electronics have combined to present a serious
challenge for traditional media companies and their downstream distributors. Yet,
despite these changes, consumers continue to obtain their video entertainment
through traditional channels, confirming the insight reflected in Houthakker and
Taylor’s theory of consumer demand.
On the other hand, there is no denying that the print media companies are in
serious decline and that record companies are suffering from a decade of stagnation. As a result, the leading newspaper and recording companies have seen their
market values decline to low levels.
But the traditional large motion picture and video media companies, such as
Disney, CBS, and Viacom, continue to thrive, and the equity markets have not yet
begun to mark them down in anticipation of any major changes in consumer
viewing habits. It may be that even when most consumers have connected their
large-screen television sets directly to the internet or, perhaps less likely, most of
us begin watching video over wireless devices, these media companies will continue to provide most of consumers’ viewing content with little loss of revenue.
What is perhaps most surprising, however, is that the traditional video distribution companies’ equities have performed equally well in the face of this
potential threat. It is possible, even likely, that they will lose programming revenues as subscribers shift to internet delivery of video, but the markets appear to expect that these companies will offset any such declines with revenues derived from the greater internet bandwidth demanded by consumers.
References
Arewa O (2010) YouTube, UGC, and digital music: competing business and cultural models in the internet age. Northwestern Univ Law Rev 104(2):431–475
Banerjee A, Alleman J, Pittman L, Rappoport P (2011) Forecast of over-the-top video demand. 31st International Symposium on Forecasting, Prague, June 26–29
Crandall RW, Furchtgott-Roth HW (1996) Cable TV: regulation or competition. The Brookings Institution, Washington, DC
Envisional Ltd (2011) Technical report: an estimate of infringing use of the internet, January
Houthakker HS, Taylor LD (1966) Consumer demand in the United States, 1929–1970. Harvard University Press, USA
Houthakker HS, Taylor LD (2010) Consumer demand in the United States: prices, income, and consumption behavior. Springer, USA
Liebowitz SJ (2008) File-sharing: creative destruction or just plain destruction? J Law Econ 49:1–24
Oberholzer-Gee F, Strumpf K (2007) The effect of file sharing on record sales: an empirical analysis. J Political Econ 115:1–42
Owen B (1999) The internet challenge to television. Harvard University Press, USA
Swinburne B et al (2011) Media and cable/satellite, next up in the evolution of media: the connected TV. Morgan Stanley, New York
Chapter 4
Forecasting Video Cord-Cutting: The
Bypass of Traditional Pay Television
Aniruddha Banerjee, Paul N. Rappoport and James Alleman

A. Banerjee, Centris Marketing Science, 10 Chestnut Street, Acton, MA 01720, USA; e-mail: abanerjee@centris.com
P. N. Rappoport, Department of Economics, Temple University, Philadelphia, PA, USA
J. Alleman, College of Engineering and Applied Science, University of Colorado—Boulder, Boulder, CO, USA
4.1 Introduction
Following the substitution of mobile phones for fixed-line phones (''voice cord-cutting''), a similar transition is now occurring for video services. In the United
States, traditional pay television service providers are experiencing some revenue
losses and slowdowns. Also, consumers are increasingly streaming or downloading long-form video programming (mainly movies and TV shows). This phenomenon—described as ‘‘video cord-cutting’’ or ‘‘over-the-top (OTT) bypass’’—
suggests that the business models of traditional TV service providers are under
threat. There is, however, considerable debate about the severity of that threat.
Some observers believe that, by 2014, OTT bypass revenue may reach
$5.7 billion globally, driven by streaming services like Netflix Watch Instantly,
Hulu Plus, and others. Whether OTT video and traditional pay TV will mostly be consumed jointly, or the former will replace the latter, remains at present a matter of conjecture. Clearly, as this transformation of the communications-media
landscape proceeds, several future developments will need to be forecast. For
example, will the traditional pay TV model survive the OTT threat? Will OTT be
driven by ‘‘free’’ TV or will the subscription model catch on? How will device
manufacturers, content providers, and service providers respond to OTT bypass?
How will consumer choices and behaviors drive this transformation?
This chapter reports on efforts to forecast the effect of consumer choices on the
future of video cord-cutting. Based on a comprehensive tracking survey of
households in the United States, the chapter presents evidence on household
ownership of OTT-enabling devices and subscription to OTT-enabling services,
and forecasts their effects on OTT. It also assesses how consumers’ OTT choices
are determined by household geo-demographic characteristics, device ownership,
and subscription history. Ordered logit regressions are used to analyze and forecast
future choices of devices and services, and to estimate switching probabilities for
OTT substitution by different consumer profiles.
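The chapter's estimation details are not reproduced here, but the ordered logit machinery it describes is standard. The following sketch uses simulated data and hypothetical variable names (age, income_k, owns_ott_device; nothing below comes from the Centris survey) to show how such a model can be fit with statsmodels and used to generate category probabilities:

```python
# Minimal ordered-logit sketch for an ordinal OTT-use outcome.
# Data are simulated; column names are hypothetical stand-ins for the
# survey's geo-demographic and device-ownership fields.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "income_k": rng.uniform(10, 200, n),        # household income ($000)
    "owns_ott_device": rng.integers(0, 2, n),   # e.g., smart TV or console
})

# Simulate an ordinal outcome: younger, device-owning households land in
# "more OTT" categories.  pd.cut yields an ordered categorical.
latent = (-0.05 * df["age"] + 1.0 * df["owns_ott_device"]
          + 0.004 * df["income_k"] + rng.logistic(size=n))
df["ott_use"] = pd.cut(latent, bins=[-np.inf, -2.0, 0.5, np.inf],
                       labels=["nonuser", "co-consumption", "substitution"])

# Fit the ordered logit (no intercept: the thresholds play that role).
model = OrderedModel(df["ott_use"],
                     df[["age", "income_k", "owns_ott_device"]],
                     distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())

# Predicted probability of each OTT-use category for a single profile.
profile = pd.DataFrame({"age": [25], "income_k": [60],
                        "owns_ott_device": [1]})
print(res.predict(profile))
```

Switching probabilities for different consumer profiles then follow by comparing the predicted category probabilities across profiles.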
4.2 OTT Bypass or Video Cord-Cutting
Alternatives to traditional pay television services (Pay TV) have been evolving for
well over a decade. Starting with subscription-based, mail-delivered services
(which made available movies and TV shows on DVDs for a moderate monthly
fee), video-viewing alternatives have now ''gone online,'' that is, become internet-based. Both subscription-based and ''free'' services now offer either streamed or downloaded video content.¹ High-quality (often high-definition) video is now
being increasingly streamed as households upgrade to high-bandwidth internet
connections, and service/content providers are scrambling to build their video
libraries including, in some instances, first-run movies (See Netflix 2012).
The Pay TV model for distributing original and syndicated television programming and movies got a significant boost when new technologies (principally
based on cable, satellite, and fiber-based systems) greatly exceeded the reach of
over-the-air TV broadcasts, in terms of both content scope and image quality. For
at least three decades, the Pay TV model thrived as major improvements occurred
in both hardware (TV sets, set-top boxes, and other equipment) and content (with
the emergence of a wide range of programming genres). The high penetration rates
of Pay TV in the US enabled resource-rich cable and other companies to also offer
reasonably priced and often bundled telephone and internet connection services to
their customers. Their entry into telephone markets—following enactment of the
Telecommunications Act of 1996—was accompanied by the unexpected popularity of their cable modem-based internet access services.² Ironically, the success
of the carriers deeply invested in Pay TV also contained the seeds of the technology that would, in time, make OTT alternatives attractive at reasonable cost. It
is not hard to imagine that falling costs for internet access and the expansion of
internet-capable devices can, through the proliferation of OTT, pose significant challenges to the Pay TV model itself.

¹ Streamed video content is available in real time and is typically not storable on the consumer's media equipment. Downloaded content can be stored and viewed repeatedly, often an unlimited number of times.
² The United States is one of a handful of countries in which cable modems strongly outpace DSL as the preferred delivery vehicle for internet access. The Federal Communications Commission (FCC) reports that two in five residential broadband connections in the United States are through cable modems, followed by about a third through wireless means, and about a quarter via DSL. See Chart 13 in Federal Communications Commission (2011).
The rise of OTT is, however, not as linear and predictable as it might seem at
first glance. If it ever happens, large-scale pure substitution of streamed or
downloaded video for video content obtained through Pay TV—that is, the
complete supplanting of one form of video delivery technology by another (or
‘‘video cord-cutting’’)—is probably several years away. In that respect, the
experience with ‘‘voice cord-cutting,’’ that is, the substitution of mobile for fixed
telecommunications is instructive. Even today, after nearly two decades of
relentless expansion by mobile telecommunications, a significant percentage of
households in the United States (and other developed countries) use both mobile
and fixed-line telephone services. This trend of ‘‘co-consumption’’ may well prove
to be true of video content for a considerable number of years.
Another parallel from the voice cord-cutting experience is instructive: the role
of ‘‘first-timers’’ or consumers (usually young and newly independent) that opted
for mobile telephone service without even first trying out fixed-line telephone
service. Strictly speaking, such behavior is not substitution in the truest sense of
the term. However, it is a third possibility (besides co-consumption and pure
substitution) that must be considered seriously for the manner in which OTT
bypass may evolve. For the purposes of this chapter, three possible ‘‘OTT use
categories’’ are identified:
Co-consumption: the video content-viewing experience is a mix of Pay TV and
OTT alternatives like streaming or downloading. Neither, in and of itself, constitutes the entire viewing experience. Lifestyle factors and convenience may
determine when the consumer or the household uses one or the other, and what
form of video is viewed using either.³
Pure substitution (video cord-cutting): Pay TV is replaced completely by the
use of OTT alternatives. The individual consumer or household must make a
conscious decision to terminate any form of Pay TV being used to view video
content and adopt streaming or downloading methods instead.
First-timer behavior: at the first available opportunity, the individual consumer
or household chooses to use only OTT methods to view video content, without
ever trying out the Pay TV option.
It follows that any consumer or household that does not choose any one of these
three options is an ‘‘OTT nonuser.’’ For now, this category also includes
households that rely exclusively on over-the-air broadcasts for video content and have never tried OTT alternatives, as well as the relatively few households that do not seek out video content from any source.

³ Co-consumption is sometimes characterized as ''complementarity.'' This can be misleading. Complementarity has a specific meaning in economics, namely, the tendency for the consumption of a product Y to increase (decrease) whenever the price of another product X falls (rises). The classic example is that of razors and razor blades. Falling prices for razors may stimulate the demand for razors and, in the process, also drive up the demand for razor blades. This is co-consumption of a particular form, one triggered by price change. In this chapter, we use the term co-consumption more generally, so that the mixing of Pay TV and OTT use does not necessarily mean a common trend in their demands or a phenomenon brought about by demand response to price change.
Any debate over the significance of OTT and the likely extent of the threat to
the Pay TV model must first comprehensively define the ways in which OTT
bypass can occur. The three OTT use categories defined above are proposed for that purpose. Second, that debate must be informed by actual evidence (from either stated or revealed preference data) about the extent to which the four types of behavior (including nonuse of OTT) are occurring. Without such a structured inquiry, no worthwhile inferences can be drawn about OTT.⁴
4.3 Evidence on OTT Use Categories
To answer several questions about the OTT phenomenon, Centris launched a
comprehensive research project called ‘‘Evolution of Video: Market Demand
Tracking Study’’ in November 2010. For this purpose, Centris surveys approximately 8,000 households from an internet panel every month on their video-viewing habits, ownership of video-viewing devices, subscription to or use of
video content services, viewership frequencies, willingness-to-pay for OTT-based
services, satisfaction with Pay TV services, etc. Measuring prospective behavior
(over the next six months) regarding video consumption is an important component of the survey. Demographic information on respondents (gender, household
status, household size, age, household income, employment status, and race or
ethnicity) is also collected. All responses are weighted in inverse proportion to the
probability with which each surveyed household is selected.
Based on survey data gathered for the first eight months of the research project,
the relative sizes of the three OTT use and the OTT nonuse categories are calculated. These calculations pertain both to aggregate households and to
households classified by demographic categories such as age, household income,
and race/ethnicity. Specifically, the following proportions are calculated:
• Proportion of households with any form of Pay TV service that also use or
subscribe to any free or paid video-streaming or downloading service/website
(co-consumption).
4
In recent months, many research houses have rushed to publish ‘‘facts and figures’’ about OTT
that, on their face, seem to contradict each other. A meta-analysis of these publications cannot
yield useful insights about the extent of the OTT threat. We believe that differing background
definitions of what constitutes OTT substitution are mainly responsible for the confusion. Service
and content providers that have many millions of dollars at stake in this matter cannot extract
objective and useful information from these contradictory data.
Table 4.1 Proportion of households in OTT co-consumption category (all households and by demographic category), June 2011 (three-month moving average)

Percent of households in OTT co-consumption category

  All households:    43.1
  By age group:      18–34: 59.7   35–44: 54.4   45–54: 41.4   55–64: 29.8   65+: 19.1
  By income range:   $0–$25 K: 30.5   $25–$50 K: 38.8   $50–$75 K: 50.1   $75–$150 K: 55.2   $150 K+: 59.8
  By race/ethnicity: White: 40.3   African–American: 47.0   Asian–American: 57.2   Hispanic: 54.1
• Proportion of all households that have terminated their past use of any form of
Pay TV and currently use or subscribe to any free or paid video-streaming or
downloading service/website (substitution).
• Proportion of all households that never had any past use of any form of Pay TV
but currently use or subscribe to one or more free or paid video-streaming or
downloading services/websites (first-timers).
By definition, the base for calculating the proportion of households in the co-consumption category must be only the households that currently have any form of
Pay TV service. However, it is necessary to also calculate this proportion using all
households (with or without Pay TV service) as the base. Doing so makes it
possible to calculate the proportion of OTT nonusing households residually as the
proportion of all households that fall into none of the three OTT use categories.
Finally, because of monthly fluctuations in the estimated proportions, their
three-month moving averages are calculated instead. For eight consecutive months
of data, this yields six moving average estimates per OTT use category.
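A minimal sketch of this tabulation step, using pandas on simulated microdata (all column names, shares, and weights below are illustrative, not the Centris data): weighted category shares are computed per month and then smoothed with a trailing three-month window, so eight months of data yield six smoothed estimates.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
months = pd.period_range("2010-11", periods=8, freq="M").astype(str)

# Hypothetical microdata: one row per surveyed household.
df = pd.DataFrame({
    "month": rng.choice(months, size=4000),
    "category": rng.choice(
        ["co-consumption", "pure substitution", "first-timer", "nonuser"],
        size=4000, p=[0.43, 0.05, 0.05, 0.47]),
    "weight": rng.uniform(0.5, 1.5, size=4000),  # inverse-selection weights
})

# Weighted share of each OTT use category within each month.
shares = (df.pivot_table(index="month", columns="category",
                         values="weight", aggfunc="sum")
            .pipe(lambda t: t.div(t.sum(axis=1), axis=0)))

# Trailing three-month moving average: 8 months -> 6 smoothed estimates.
smoothed = shares.rolling(window=3).mean().dropna()
print(smoothed.round(3))
```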
Table 4.1 shows the proportion of all households in June 2011 that comprise the
OTT Co-Consumption category, as well as the proportions of households in that
category when classified by demographic categories, such as age group, household
income range, and race/ethnicity.5 Similarly, Tables 4.2, 4.3, and 4.4 show the proportions of households (aggregate and classified by demographics) that comprise
the pure OTT substitution, OTT first-timers, and OTT nonusers categories in June
2011.
The following trends emerge from Tables 4.1, 4.2, 4.3, and 4.4:
• OTT bypass in the form of pure substitution or first-timer choice is presently a
nascent phenomenon that may not yet represent a significant threat to the Pay
TV model. As of mid-2011, pure substitution has occurred in 5 % of households, while slightly more than 5 % of households have made OTT a first-timer
choice.
5
Because of space limitations, only the proportions at the end of the eight-month study period
are shown. Full details are available upon request from the principal author.
Table 4.2 Proportion of households in pure OTT substitution category (all households and by demographic category), June 2011 (three-month moving average)

Percent of households in pure OTT substitution category

  All households:    5.0
  By age group:      18–34: 8.1   35–44: 5.7   45–54: 4.4   55–64: 3.2   65+: 1.7
  By income range:   $0–$25 K: 7.7   $25–$50 K: 5.0   $50–$75 K: 4.5   $75–$150 K: 3.4   $150 K+: 2.3
  By race/ethnicity: White: 4.6   African–American: 5.5   Asian–American: 8.5   Hispanic: 6.1
Table 4.3 Proportion of households in OTT first-timers category (all households and by demographic category), June 2011 (three-month moving average)

Percent of households in OTT first-timers category

  All households:    5.4
  By age group:      18–34: 8.3   35–44: 5.7   45–54: 4.4   55–64: 4.1   65+: 2.5
  By income range:   $0–$25 K: 8.3   $25–$50 K: 5.8   $50–$75 K: 3.9   $75–$150 K: 3.3   $150 K+: 2.5
  By race/ethnicity: White: 4.8   African–American: 6.3   Asian–American: 13.8   Hispanic: 6.3
Table 4.4 Proportion of households in OTT nonusers category (all households and by demographic category), June 2011 (three-month moving average)

Percent of households in OTT nonusers category

  All households:    46.5
  By age group:      18–34: 23.9   35–44: 34.3   45–54: 49.8   55–64: 62.9   65+: 76.7
  By income range:   $0–$25 K: 53.5   $25–$50 K: 50.3   $50–$75 K: 41.6   $75–$150 K: 38.1   $150 K+: 35.4
  By race/ethnicity: White: 50.3   African–American: 41.3   Asian–American: 20.5   Hispanic: 33.4
• As of mid-2011, just over 43 % of households (or, approximately half of all
households with any form of Pay TV service) fall into the OTT co-consumption
category. This signifies that most households interested in OTT options
mix the two ways to receive video content as their lifestyle circumstances
require. This parallels the situation with telephone service, where a significant
proportion of US households utilize both mobile and fixed-line telephones.
• Just under half of all households have, as of mid-2011, made no use of, or are
not interested in, OTT options. This statistic alone could mean that a major
threat to providers of Pay TV services is not imminent.
When measured by demographic categories, some interesting insights emerge:
• The youngest age group (18–34) is the vanguard segment for OTT use. This is
as expected because this age group was also responsible for leading developments in voice cord-cutting. As of mid-2011, only 24 % of households in this
age group fall into the OTT nonuser category, by far the lowest among all age
groups. In fact, interest in, and use of, OTT options is nearly monotonic with
age, falling with increasing age of the householder.
• When arrayed by household income segments, a more complex picture emerges.
The households in the lowest income segments (particularly those with annual
income up to $25,000) have the highest propensities for OTT substitution or
first-timer choice. With steadily declining costs of broadband access, streaming,
and downloading represent lower cost options for accessing video content than
the more expensive Pay TV services. OTT options also enable lower income
households to target specific forms of video content, thus avoiding the need to
subscribe to expensive Pay TV packages within which only a limited number of
channels may be of interest. At the same time, co-consumption actually
increases steeply with household income. That is to be expected as higher
income households are able to afford the luxury of having multiple options for
viewing video content. The OTT nonuser category declines monotonically with
household income.
• Within racial or ethnic categories, all three forms of OTT use are highest among
Asian-Americans, followed by Hispanics and African-Americans. As of mid-2011, Asian-Americans are almost three times as likely as Whites and more than
twice as likely as African-Americans and Hispanics to fall into the OTT first-timers category. Also, Asian-Americans are almost twice as likely as Whites,
56 % more likely than African-Americans, and 44 % more likely than Hispanics
to belong in the pure OTT substitution category. Curiously, a similar pattern is
observed for co-consumption as well. Asian-Americans are significantly more
likely than Whites and African-Americans and somewhat more likely than
Hispanics to co-consume. The reverse pattern is true among OTT nonusers.
The obvious conclusion from these findings is that the leading edge of OTT
substitution (specifically video cord-cutting) and OTT first-timer choice is formed
by the combination of young householders in the lowest income segments that are
ethnically Asian or Hispanic. These two ethnic groups combined represented less
than 15 % of US households in 2010.6 But, with faster growth projected in both
segments compared with the non-Hispanic White segment, steady growth in OTT
substitution and first-timer choice may be expected in the future.
6
See Day (1996).
4.4 Forecasting the Probability of OTT Use by Consumer
Profile
The preceding section shows that demographic variations clearly influence patterns of OTT use among US households. In order to forecast the future demand for
video content by OTT means, it is important to account for those demographic
variations. In addition, we take into account (1) household ownership of internet-enabled media equipment that facilitates OTT use and (2) household use of (or
subscribership to) either paid or free streaming or downloading services that
provide access to movies, TV shows, and other forms of video content. By doing
so, we build a full consumer (household) profile based on demographic, device
ownership, and OTT service use characteristics.
Unfortunately, the cross-currents among the three forms of OTT use make it
difficult to forecast future demand for any single one of those forms in isolation.
To work around this problem, we modeled solely the future probability of pure
OTT substitution in terms of consumer profiles constructed in the manner
described above. The Centris survey asks responding households about the likelihood of their adopting the pure OTT substitution option ‘‘within the next six
months.’’ Likelihood is measured on a five-point Likert scale: 1 = ‘‘Not at all
likely,’’ 2 = ‘‘Somewhat unlikely,’’ 3 = ‘‘Neither likely nor unlikely,’’ 4 = ‘‘Somewhat
likely,’’ and 5 = ‘‘Very likely.’’
We modeled responses to this question using the ordered logit regression
methodology. The dependent variable (likelihood of pure OTT substitution in the
next six months) is an ordered categorical variable and is, hence, a prime candidate
for this regression methodology.7
Let L be an unobservable variable representing the likelihood with which a
household would substitute OTT for Pay TV. The household then chooses

  ‘‘Very likely’’ if L > u1
  ‘‘Somewhat likely’’ if u1 > L > u2
  ‘‘Neither likely nor unlikely’’ if u2 > L > u3
  ‘‘Somewhat unlikely’’ if u3 > L > u4
  ‘‘Not at all likely’’ if u4 > L

where u1–u4 are unobserved utility thresholds or ‘‘cutoff’’ points. Let x be a vector
of observable household-specific variables that affect the likelihood L, and e be
random unobserved effects. Then, consider the following relationship:

  L = β′x + e.  (1)

Assuming that e has a logistic distribution gives rise to an ordered logit
regression model that can be estimated by maximum likelihood methods. The
probability of each of the five ordered responses can then be recovered as the
probability that L falls into the ranges defined by the thresholds above. Maximum
likelihood estimation applies jointly to the parameter vector β and the thresholds
u1–u4. The probabilities are calculated using these estimates.

7 See Greene and Hensher (2010).
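As a concrete illustration of this setup, the following sketch simulates data from the latent-variable model in Eq. (1) and fits an ordered logit by maximum likelihood with statsmodels (standing in here for the STATA routine the chapter uses). The regressors, coefficients, and cutoffs are invented for the example.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 2000

# Invented stand-ins for a few household profile variables.
X = pd.DataFrame({
    "age_18_34": rng.integers(0, 2, n),
    "netflix":   rng.integers(0, 2, n),
    "satisfied": rng.integers(0, 2, n),
})
beta = np.array([0.7, 0.3, -1.0])

# Latent likelihood L = beta'x + e with logistic e, as in Eq. (1);
# the observed answer is the interval of L between the cutoff points.
latent = X.values @ beta + rng.logistic(size=n)
cuts = [-1.0, 0.0, 1.0, 2.5]
y = np.digitize(latent, cuts)  # 0 = "Not at all likely" ... 4 = "Very likely"

res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)

# Probabilities of the five ordered responses, recovered (as in the text)
# as the probability that L falls between the estimated thresholds.
probs = res.predict(X)  # array of shape (n, 5)
print(res.params.round(2))
```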
For independent variables, we selected the following:
1. Demographic variables, including gender, household size, household status,
employment status, age, household income, and race/ethnicity. All except
household size were treated as categorical variables.8
2. Device ownership variables, for high definition TVs, 3D TVs, other types of
TV, desktop computers, laptop computers, tablet computers (e.g., the iPad),
smartphones, portable video players, Apple TV/ Roku, game consoles, and
other video devices. These were all treated as binary categorical variables.
3. Subscription/use variables, including those pertaining to mail-delivered DVD
rental services, paid subscription services such as Netflix and Hulu, and ‘‘free’’
streaming services available from various media network websites. These were
all treated as binary categorical variables.
We considered two versions of the ordered logit models built from these data,
one that includes the device ownership variables and another that excludes them. A
household’s decision to own certain types of devices intended specifically for
streaming can conceivably be made jointly with any decision to receive video
content solely through streaming. In econometric terms, modeling the ownership
of those devices as independent variables would then introduce an endogeneity
bias and require the use of instruments for meaningful model estimation.9 For the
moment, we avoided the possible endogeneity problem by estimating the second
version of the model that excluded device ownership variables, and left the use of
instrumental variables to a future study.10
We estimated separate monthly regression models for five consecutive months
(November 2010–March 2011) using the survey data. Maximum likelihood estimation was carried out using STATA-MP 12.0. Table 4.5 shows summarized
results.11
8
The levels for these variables are gender (male/female), household status (head/member),
employment status (full-time, part-time, neither), age (18-34, 35-44, 45-54, 55-64, 65 and over),
household income ($0-$25,000, $25,000-$50,000, $50,000-$75,000, $75,000-$150,000, $150,000
and over, undisclosed), and race/ethnicity (White, African-American, Asian-American, Hispanic,
all other).
9
The endogeneity problem need not arise for all of the devices on the list. For example, the
various types of TV sets and computers could be purchased by households primarily for purposes
other than streaming video. If so, then the ownership of each device would be considered a
‘‘predetermined’’ variable and, hence, not be endogenous with any decision in favor of pure OTT
substitution in the next six months.
10
This is not an ideal solution because exclusion of a potentially non-endogenous device
ownership variable likely creates an omitted variables bias.
11
Because of some significant changes in the survey questionnaire, data for the months April–June 2011 were not used for modeling this issue.
Table 4.5 Ordered logit regression models for pure OTT substitution (with and without device ownership variables), summary of results12

  Independent variables                               Effect (reduced model)   Effect (full model)
  Gender (Male = 1/Female = 0)                        +                        +
  Household size                                      +                        +
  Household status (Head = 1/Member = 0)              +                        +
  Full-time employed (Yes = 1/No = 0)                 +                        +
  Part-time employed (Yes = 1/No = 0)                 +                        +
  Age 18–34 (Yes = 1/No = 0)                          +                        +
  Age 35–44 (Yes = 1/No = 0)                          +                        +
  Age 45–54 (Yes = 1/No = 0)                          +                        +
  Age 55–64 (Yes = 1/No = 0)                          +                        +
  White (Yes = 1/No = 0)                              –                        –
  African–American (Yes = 1/No = 0)                   ±                        ±
  Asian–American (Yes = 1/No = 0)                     +                        +
  Hispanic (Yes = 1/No = 0)                           ±                        ±
  Own HD TV (Yes = 1/No = 0)                          (excluded)               +
  Own laptop computer (Yes = 1/No = 0)                (excluded)               +
  Own smartphone (Yes = 1/No = 0)                     (excluded)               +
  Own Apple TV/Google TV/Roku (Yes = 1/No = 0)        (excluded)               +
  Own other video device (Yes = 1/No = 0)             (excluded)               +
  Subscribe to mail-delivered video rental service
    (Yes = 1/No = 0)                                  +                        +
  Subscribe to Netflix (Yes = 1/No = 0)               +                        +
  Subscribe to Hulu (Yes = 1/No = 0)                  +                        +
  Use free video-streaming websites (Yes = 1/No = 0)  +                        +
  Satisfaction with Pay TV (High = 1/Low = 0)         –                        –
  TV programming available on mobile device
    (Yes = 1/No = 0)                                  +                        +

Note 1 Only statistically significant effects (positive or negative) are shown. Most are statistically significant at the 5 % level, while the rest are so at the 10 % level
Note 2 The evidence on the four race/ethnicity variables is weaker than for the other variables. Also, the effects of the African–American and Hispanic categories, relative to the ‘‘Other Race/Ethnicity’’ category, have varying signs across the months
The findings in Table 4.5 are a composite of the estimation results from the five
monthly regression models. Because almost all of the independent variables in the
regression models are categorical, some care is needed to interpret the findings
about positive or negative effects. As with all categorical variables, one level is
usually set as the default level and is excluded from the list of included independent variables. Then, the direction of the effect of any other level of that
variable is interpreted by reference to the default level.
12
To conserve space, details about estimates, confidence intervals, Wald statistics, and
goodness-of-fit statistics for the individual regressions are not reported in this paper. They are
available from the contact author upon request.
Some of the demographic variables, such as Age, Household Income, and Race/
Ethnicity, are multi-level categorical variables. All levels of those variables except
for the default level are included among the independent variables. For example,
for the Age variable, the default level (held outside the model) is the age group 65
and over, while the four younger age groups are included among the independent
variables. As Table 4.5 shows, the effects of all included levels of the Age variable
are positive and statistically significant. This implies that, relative to the default
age group 65 and over, the effect of every other age group is positive, i.e., it
increases the likelihood of pure OTT substitution. Although not shown in Table
4.5, the estimated coefficients for the included levels of the Age variable actually
decline in magnitude with age, i.e., the age group 18–34 has the largest (and most
positive) effect on the dependent variable and the age group 55–64 has the smallest
(and least positive). From this it can be inferred that the lower the age group, the
greater, and more reinforcing, is the positive effect on the likelihood of pure OTT
substitution within the next six months.
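The coding convention is easy to see in a toy example. Below, pandas builds the indicator columns for a hypothetical Age variable and drops the 65-and-over default level, so each remaining coefficient is read relative to that group; this mirrors the treatment described above but is not the authors' own code.

```python
import pandas as pd

# Hypothetical multi-level Age variable; "65+" is the default level.
age = pd.Series(["18-34", "35-44", "45-54", "55-64", "65+"], name="age")

# One indicator per level except the default, which is held outside the
# model; a positive estimate on, say, age_18-34 means that group is more
# likely than the 65+ group to substitute OTT for Pay TV.
dummies = pd.get_dummies(age, prefix="age").drop(columns="age_65+")
print(dummies)
```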
All non-demographic variables (device ownership variables, subscription/use
variables, satisfaction with Pay TV, and availability of video on mobile devices)
are also categorical but strictly binary of the ‘‘Yes/No’’ kind. We set the No level
as the default level for these variables. Thus, we find from Table 4.5 that households
that already subscribe to (or use) OTT options presently have a higher likelihood
of seeking the pure OTT substitution option within the next six months. Also,
households that would like to see TV programming available on mobile devices
are more likely to opt for pure OTT substitution within the next six months.
Satisfaction with Pay TV service is a binary variable of the ‘‘High/Low’’ kind. We
set the ‘‘Low’’ level as the default. Thus, a negative effect means that households
highly satisfied with Pay TV service are less likely to opt for pure OTT substitution
than households that are not. This result is intuitively plausible and indicates that
dissatisfaction with conventional forms of video access may drive households to
cut the video cord in favor of streaming and downloading options.
To forecast the probability of pure OTT substitution by consumer profile, we
selected the estimated ordered logit regression model (reduced model version) for
December 2010, shown in Table 4.6.13
We constructed consumer profiles by retaining only the independent variables
with statistically significant effects in that regression model.
These were the following categorical variables (shown with their levels):
• Gender (Male/Female)
• Full-time employment (Yes/No)
• Part-time employment (Yes/No)
• Age 18–34 (Yes/No)
13
Probability has a simple interpretation in this context. It is simply the proportion of
households that is expected to exhibit a certain behavior, e.g., pure OTT substitution in the next
six months.
Table 4.6 Ordered logit regression model for pure OTT substitution in next six months, December 2010 (reduced model)

  Variable                    Coeff estimate   Robust std error   z-stat   Prob value
  Gender                       0.3381          0.0745              4.54    0.000
  Household size               0.0291          0.0253              1.15    0.252
  Household status            -0.0677          0.1208             -0.56    0.575
  Full-time employ             0.2211          0.0843              2.62    0.009
  Part-time employ             0.2931          0.1019              2.88    0.004
  Age 18–34                    0.6845          0.1541              4.44    0.000
  Age 35–44                    0.5031          0.1504              3.35    0.001
  Age 45–54                    0.3908          0.1439              2.72    0.007
  Age 55–64                    0.2632          0.1393              1.89    0.059
  Income $0–$25 K              0.0451          0.1880              0.24    0.810
  Income $25 K–$50 K           0.1662          0.1717              0.97    0.333
  Income $50 K–$75 K           0.1624          0.1737              0.94    0.350
  Income $75 K–$150 K          0.2560          0.1718              1.49    0.136
  Income over $150 K           0.0892          0.2320              0.38    0.700
  White                       -0.1479          0.2566             -0.58    0.564
  African–American             0.2578          0.2970              0.87    0.385
  Asian–American               0.3319          0.3235              1.03    0.305
  Hispanic                     0.0665          0.3061              0.22    0.828
  Subscribe to DVD rental      0.1069          0.0758              1.41    0.158
  Subscribe to Netflix         0.2340          0.0766              3.05    0.002
  Subscribe to Hulu            0.5695          0.1020              5.58    0.000
  Use free video website       0.4761          0.0780              6.10    0.000
  Satisfaction with Pay TV    -1.0197          0.0700             -14.58   0.000
  TV prog on mobile device     0.3378          0.0288              11.72   0.000

  No of observations = 4,964          Wald χ²(24) = 884.73
  Log pseudo-likelihood = -4048.88    Prob > χ² = 0.0000
  Pseudo R² = 0.1079
• Age 35–44 (Yes/No)
• Age 45–54 (Yes/No)
• Age 55–64 (Yes/No)
• Subscribe to DVD rental (Yes/No)
• Subscribe to Netflix (Yes/No)
• Subscribe to Hulu (Yes/No)
• Use free video websites (Yes/No)
• Satisfaction with Pay TV (High/Low)
• Want TV programming to be available on mobile devices (Yes/No)
Figure 4.1 Maximum predicted probabilities of (‘‘Very likely’’ + ‘‘Somewhat likely’’) for different gender and age group sets, pure OTT substitution in next six months, December 2010. Maximum predicted probability by profile set: Male 18–34: 22.4 %; Female 18–34: 16.9 %; Male 35–44: 19.8 %; Female 35–44: 14.8 %; Male 45–54: 17.5 %; Female 45–54: 13.0 %; Male 55–64: 15.4 %; Female 55–64: 11.4 %; Male 65+: 12.2 %; Female 65+: 9.0 %
Unique combinations of these variables (and their levels) yielded 1,920 consumer profiles. For example, one such profile was:

Male, full-time employed, age 35–44, subscribes to DVD rental, subscribes to Netflix, does
not subscribe to Hulu, uses free video websites, low satisfaction with Pay TV, wants TV
programming to be available on mobile devices

Recall that the dependent variable of interest was the likelihood of pure OTT
substitution within the next six months, measured on a five-point scale (‘‘Very
likely,’’ ‘‘Somewhat likely,’’ ‘‘Neither likely nor unlikely,’’ ‘‘Somewhat unlikely,’’
and ‘‘Not at all likely’’). For every consumer profile, we computed the predicted
probability of each of these five levels.15 Confidence intervals for the predicted
probabilities were computed using the delta method.
We then determined the highest, lowest, median, and mean predicted probabilities for all five levels and identified the specific consumer profile corresponding
to each. In order to make inference easier, we collapsed the two top likelihood
levels (‘‘Very likely’’ and ‘‘Somewhat likely’’) and added their respective predicted probabilities, and did the same for the two bottom likelihood levels (‘‘Not at
all likely’’ and ‘‘Somewhat unlikely’’). Again, we identified the consumer profiles
corresponding to the summed predicted probabilities for the top two levels and the
bottom two levels.
15 See Train (2003), especially pp. 163–167, for the technical details on predicting these probabilities. We used a routine in STATA to estimate the probabilities and their confidence intervals.
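The profile enumeration itself is mechanical, as the sketch below shows (names are placeholders): two genders, three mutually exclusive employment states, five age groups, and six binary flags give 2 × 3 × 5 × 2^6 = 1,920 combinations, each of which is then scored with the fitted model's predicted probabilities.

```python
import itertools

genders = ["male", "female"]
employment = ["full_time", "part_time", "neither"]
ages = ["18-34", "35-44", "45-54", "55-64", "65+"]
# Six binary flags: DVD rental, Netflix, Hulu, free websites,
# satisfaction with Pay TV (high/low), wants TV on mobile devices.
flags = list(itertools.product([0, 1], repeat=6))

profiles = [(g, e, a) + f
            for g, e, a, f in itertools.product(genders, employment, ages, flags)]
print(len(profiles))  # 1920

# Each profile would be fed to the fitted ordered logit's predict() to get
# the five response probabilities; the top two and bottom two levels are
# then summed, with delta-method confidence intervals attached.
```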
Figure 4.2 Maximum predicted probabilities of (‘‘Not at all likely’’ + ‘‘Somewhat unlikely’’) for different gender and age group sets, pure OTT substitution in next six months, December 2010. Maximum predicted probability by profile set: Male 18–34: 94.6 %; Female 18–34: 96.1 %; Male 35–44: 95.4 %; Female 35–44: 96.7 %; Male 45–54: 96.0 %; Female 45–54: 97.1 %; Male 55–64: 96.5 %; Female 55–64: 97.5 %; Male 65+: 97.3 %; Female 65+: 98.1 %
Figures 4.1 and 4.2 provide a convenient way to summarize the highest predicted probabilities of both the top two and the bottom two levels of the dependent
variable.
As Table 4.6 shows, males are more likely than females, and younger age
groups are more likely than older age groups, to consider pure OTT substitution in
the next six months. Figures 4.1 and 4.2 confirm this in terms of the predicted
probabilities. In Figure 4.1, the highest probability of being ‘‘Very likely’’ or
‘‘Somewhat likely’’ to consider pure OTT substitution in the next six months is 22.4 %
for a consumer profile within the (male, age 18–34) set. That is, more than one in
five male consumers in the 18–34 age group with this profile is leaning towards
such substitution in the near future.
Figure 4.2 shows that the highest probability of being ‘‘Not at all likely’’ or
‘‘Somewhat unlikely’’ to consider pure OTT substitution in the next six months is
98.1 % for a consumer profile in the (female, age 65+) set. Very few consumers
with this profile are even thinking of OTT substitution in the near future.
Figures 4.1 and 4.2 also show the converses. In Figure 4.1, the lowest probability of being ‘‘Very likely’’ or ‘‘Somewhat likely’’ to consider pure OTT substitution in the next six months is 9.0 % for a consumer profile in the (female, age
65+) set. Similarly, in Figure 4.2, the lowest probability of being ‘‘Not at all
likely’’ or ‘‘Somewhat unlikely’’ to consider pure OTT substitution in the next
six months is 94.6 % for a consumer profile in the (male, age 18–34) set.
The specific consumer profile corresponding to the maximum predicted probability of being ‘‘Very likely’’ or ‘‘Somewhat likely’’ (or, simply ‘‘Very likely’’) to
consider pure OTT substitution in the next six months is shown in Table 3.
Whether for the combined likelihoods (e.g., ‘‘Very likely’’ + ‘‘Somewhat
likely’’) or the individual likelihoods, we computed predicted probabilities for all
1,920 consumer profiles.16 Therefore, for any specific consumer profile, a predicted probability is available for all five levels of the ‘‘likelihood of pure OTT
substitution within next six months’’ variable. From these, we also identified the
consumer profiles most closely associated with the median levels of those predicted
probabilities. Table 4.8 shows the consumer profiles associated with the median
predicted probability of pure OTT substitution in the next six months.
The median predicted probability of a household being very or somewhat likely
to drop Pay TV service altogether in favor of OTT options in the next six months is
quite a bit lower than the maximum predicted probability. The findings that (1)
only about 5 % of households have substituted OTT for Pay TV already and (2) the
median predicted probability of substitution in the next six months is very small
together imply that, on average, the present OTT threat to Pay TV in the US is
nascent at best.
16
These are too numerous to reproduce here, but may be requested from the contact author,
subject to appropriate data disclosure agreements.
Table 4.7 Maximum predicted probabilities of (‘‘Very likely’’ + ‘‘Somewhat likely’’) for different age groups, Netflix subscription in next six months, December 2010

  Age      Maximum predicted probability
  18–34    58.1 %
  35–44    70.9 %
  45–54    70.1 %
  55–64    70.9 %
  65+      58.1 %
4.5 Forecasting the Probability of OTT Use: Case of Paid
Streamed Video Services
In the United States, there is rising excitement about the recent forays made by
Netflix (and, to a lesser extent, by Hulu) into the paid subscription-based video-streaming business.17 Both Netflix and Hulu have large video libraries, including
first-run movies and current TV shows.18 In this environment, is supply creating its
own demand? To test this proposition, we modeled the Centris survey data for
household interest in Netflix’s paid subscription streaming service.19
The Centris survey asks responding households about the likelihood of their
subscribing to a Netflix OTT service within the next six months. For this too,
likelihood is measured on a five-point Likert scale, ranging from ‘‘Very likely’’ to
‘‘Not at all likely.’’ As with the likelihood of pure OTT substitution in the next six
17 Founded in 1997, Netflix Inc. is a Los Gatos, California-based provider of streamed and
mail-delivered video services (including movies and TV shows). Launched in late 2010, ‘‘Netflix
Watch Instantly’’ is a streaming service that can play videos on a variety of media devices,
including standard and high definition TVs, computers (desktop, laptop, and tablet), smartphones,
portable video players, Blu-ray players, game consoles, and specialized direct feed devices like
Apple TV and Roku. Until recently, the monthly subscription charge was $7.99 and, for
$2 a month more, consumers had the additional option of receiving DVDs from Netflix by mail.
As of September 1, 2011, all Netflix customers must pay $15.98 to bundle the streaming and mail
offerings, although the price of the streaming-only subscription remains at $7.99. Hulu is a joint
venture of several major media companies and studios, and began website-based and advertising-supported streaming service in 2008. Under the label ‘‘Hulu Plus,’’ it now offers a commercial-free streaming video service that competes with Netflix Watch Instantly and is also priced at
$7.99 a month. Hulu Plus can also be received on several media platforms.
18 In early September 2011, news broke that Starz (a content provider and supplier to Netflix of
over 2,500 movie titles from two major studios, Sony Pictures and Disney) was terminating its
contract with Netflix in February 2012. The loss of Starz content could be damaging to Netflix's
prospects unless it is able to find an equal or better content provider, particularly for premium
content. Coming on the heels of the substantial price increase for its bundled streaming and mail-delivered subscription service, the outlook for Netflix does not look as promising presently as it
did before either of these events occurred.
19
The results in this section pertain to the period before Netflix initiated a major price increase
and saw Starz terminate its contract for content provision. Significant changes in these results
may be expected in the aftermath of these events.
Table 4.8 Consumer profiles associated with the median predicted probability of the combined top two levels of the likelihood of pure OTT substitution in next six months

Likelihood of pure OTT substitution: Very likely + Somewhat likely. Median predicted probability = 3.8 %

  Consumer profile                           Female    Male
  Age group                                  18–34     18–34
  Full-time employed                         No        No
  Part-time employed                         No        No
  Subscribes to DVD rental service           Yes       No
  Subscribes to Netflix                      Yes       No
  Subscribes to Hulu                         Yes       Yes
  Uses free video websites                   Yes       Yes
  Satisfaction with Pay TV                   High      High
  Wants TV programming on mobile devices     No        No
months, responses to the question about Netflix subscription were modeled using
the ordered logit regression methodology. The dependent variable (likelihood of
Netflix subscription in the next six months) is also an ordered categorical variable
with the same five levels. The same cohort of independent variables was retained
as before.
Using STATA, separate monthly regression models were estimated for each of the
five months (November 2010–March 2011) for which survey data are available.
Not surprisingly, the independent variables with statistically significant
effects—and the directions of those effects—were largely similar to those in the
models for pure OTT substitution (in Table 4.5). However, the role of one independent variable in particular—Subscribe to Netflix currently—raised several
questions. Not only was it dominant enough to swamp the effects of other independent variables, it also obscured the real appeal of Netflix’s streaming service in
particular. It is hardly surprising that households currently subscribing to Netflix
remain strongly inclined to continue doing so ‘‘in the next six months.’’ A more
interesting question to us was whether households that are not current Netflix
subscribers would consider becoming subscribers in the near future, perhaps
attracted by the streamed offering Netflix Watch Instantly. To answer this question, we extracted the sub-sample consisting only of non-Netflix subscribing
households, dropped the Subscribe to Netflix currently variable, and re-estimated
the ordered logit regression models.
To forecast the probability of future Netflix subscription by consumer profile for
this sub-sample, we selected the estimated ordered logit regression model for
December 2010.
20
A subscription to Netflix could be purely for the streaming service Netflix Watch Instantly or,
for a small additional monthly charge, also include mail-delivered DVDs.
A total of 640 consumer profiles were constructed from the levels of these
independent variables. As before, the predicted probabilities of ‘‘Very
likely’’ + ‘‘Somewhat likely’’ and ‘‘Not at all likely’’ + ‘‘Somewhat unlikely’’
responses and their associated confidence intervals (using the delta method) were
computed.
The highest, lowest, median, and mean predicted probabilities for these
responses were determined, and the specific consumer profile corresponding to
each was identified. Table 4.7 summarizes the highest predicted probabilities by age
group.
As expected, the two youngest presently non-subscribing household cohorts
(age 18–44) have the highest predicted probability of Netflix subscription in the
near future, while non-subscribing households in the oldest age group (65 and
over) have the highest predicted probability of not subscribing to Netflix in the
near future.
The consumer profiles most closely associated with the median level of the
predicted probability of Netflix subscription in the next six months (among
presently non-subscribing households) are shown in Table 4.8.
The profiles of households with the median probability of starting Netflix
subscriptions in the near future are similar in some ways and profoundly
different in others. Neither set of households falls into the extreme age ranges, high
or low. They also both own some of the facilitating devices for video viewing and
streaming, such as Apple TV or Roku, DVD players, and Blu-ray players.20
Finally, they both make considerable use of mail-delivered or pickup DVD rentals.
However, they differ in other important respects, such as with respect to the use of
free websites that stream video and how satisfied they are with traditional Pay TV.
The 45–54 age group has high satisfaction with Pay TV but also would like to see
TV programming (such as that from Netflix) made available on mobile devices. In
contrast, the 55–64 age group appears to favor streaming by Netflix because it is
not satisfied with Pay TV, rather than because of any compelling desire to receive
TV programming on other screens such as mobile devices.
The 15.2 % median probability of presently nonsubscribing households considering subscribing in the next six months is considerably higher than the 3.8 %
median probability (see Table 4.8) of households considering pure OTT substitution in the next six months. However, even then, the urge to switch to, or add on,
21
It is important to remember that the median probability of future Netflix subscription pertains
only to the profiles of households that presently do not subscribe. In December 2010, just under
three in four (72.7 %) households were Netflix nonsubscribers. Of these households, only 6.3 %
indicated seriously considering subscribing to Netflix in the next six months. In contrast, of the
slightly more than a quarter of households that were already Netflix subscribers, an astonishing
75.4 % indicated a willingness to continue subscribing in the next six months. The story is similar for
subscribers and nonsubscribers for Hulu or free video websites as well. Conceivably, the high
rates of co-consumption of Pay TV and OTT observed in Table 4.1 are largely driven by
households that subscribe to Netflix or Hulu or use free video-streaming websites.
22
Also, from April 2011 onward, we can track WTP only for the Netflix paid streaming
subscription service because of changes in the wording of those questions.
Table 4.9 Summary statistics from WTP distributions for Netflix and Hulu Plus, monthly from November 2010 to June 2011 (Netflix) and to March 2011 (Hulu Plus)

            Netflix                                    Hulu Plus
            Obs     Median   Mean     Std Dev          Obs     Median   Mean    Std Dev
  Nov 2010  1,346   $10      $11.15   $6.74            1,570   $4       $5.30   $7.46
  Dec 2010  1,847   $10      $11.62   $6.39            1,389   $1       $4.66   $7.20
  Jan 2011  2,445   $10      $10.90   $6.06            2,688   $5       $5.48   $6.91
  Feb 2011  1,228   $10      $11.04   $6.14            1,405   $5       $5.22   $7.04
  Mar 2011  2,171   $10      $10.97   $5.92            2,511   $5       $5.39   $7.01
  Apr 2011  244     $10      $11.64   $7.80
  May 2011  323     $10      $11.64   $7.11
  Jun 2011  313     $10      $12.41   $7.53
Netflix as a source of video programming is still tepid at this time. Much of the
demand for Netflix in the near future will come from households that co-consume
Pay TV and streamed video, rather than from those interested in cutting the video
cord.21
4.6 Willingness-to-Pay for Netflix and Hulu: Do Present
Prices Maximize Revenues?
What households are willing to pay for specific services is often a powerful
indicator of the popularity of and, more concretely, demand for that service. For
example, in many willingness-to-pay (WTP) surveys, a nontrivial fraction of
respondents indicate an unwillingness to pay anything at all for the product or
service in question. Others indicate amounts greater than zero, with a few outliers
proposing to pay unrealistically large amounts. Service providers frequently rely
on WTP surveys to get a fair indication of sustainable price points for their
services and the revenues that may be expected at those prices.
The Centris survey included questions about household WTP for Netflix and
Hulu Plus (the paid subscription service offered by Hulu). From November 2010 to
March 2011, these were asked of both present subscribers and nonsubscribers.
From April 2011 onward, the question has been asked solely of nonsubscribers.22
We analyzed the WTP data at two levels: (1) constructing summary statistics of
the WTP distribution and (2) estimating underlying pseudo-demand curves from
which price elasticities can be calculated at various price points.23
23 See the qualification in fn. 10 supra.
24 Histograms of the WTP distributions revealed that non-trivial proportions of households had WTPs that were at least five times the median value.
25 The WTP data were mildly left-censored. Tobit (or censored) regression is explained in a number of econometrics texts. See, e.g., Greene (2003), especially pp. 761–768.
Figure 4.3 Constructing a pseudo-demand curve for Netflix from willingness-to-pay data (all respondents), December 2010. The figure plots the WTP histogram together with its cumulative and survival percentages against WTP levels from $0 to $54 and above; a third-order polynomial fitted to the survival function (R² = 0.9551) traces the pseudo-demand curve
Table 4.10 Estimated third-order polynomial relationship between survival function and WTP for Netflix using robust OLS and Tobit regression methods, December 2010

  Age      Maximum predicted probability (%)
  18–34    94.4
  35–44    94.3
  45–54    95.9
  55–64    95.8
  65+      97.6
Summary statistics (after omitting outliers in the right tail) for all eight months
of survey data are shown in Table 4.9.
Even after omitting the most egregious outliers, the WTP distributions are still
clearly right-skewed—modestly for Netflix but markedly for Hulu
Plus.24 Moreover, the proportions of households with zero WTP vary dramatically
between Netflix (3.9–4.5 %) and Hulu Plus (38–47 %). The higher median WTP
and lower proportion of zero WTP for Netflix make it, at least presently, a more
desirable or popular video-streaming service than Hulu Plus.
A more formal analysis of the WTP data is conducted as follows. First, the
cumulative distribution function (CDF) of the WTP data is constructed, from
which its complement, the survival function (SF), is recovered. The SF of the WTP
shows the proportion of households that are willing to pay at least a designated
level of the price. The mirror image of this interpretation is that it measures the
26
Details are available from the principal author.
Table 4.11 Price elasticity at median WTP and revenue-maximizing price point for Netflix and Hulu Plus, monthly November 2010–June 2011 (Netflix) and November 2010–March 2011 (Hulu Plus)

  Netflix
             Price elasticity at median WTP    Price at unitary price elasticity
             Robust OLS     Tobit              Robust OLS     Tobit
  Nov 2010   -1.98          -2.02              $6.95          $6.90
  Dec 2010   -1.82          -1.85              $7.25          $7.20
  Jan 2011   -1.99          -2.02              $6.95          $6.90
  Feb 2011   -1.97          -2.00              $7.00          $6.90
  Mar 2011   -2.01          -2.00              $6.90          $6.90
  Apr 2011   -1.94          -1.97              $7.02          $6.97
  May 2011   -1.88          -1.91              $7.11          $7.05
  Jun 2011   -1.78          -1.81              $7.30          $7.22

  Hulu Plus
             Price elasticity at median WTP    Price at unitary price elasticity
             Robust OLS     Tobit              Robust OLS     Tobit
  Nov 2010   -0.70          -0.72              $6.23          $6.18
  Dec 2010   -0.71          -0.73              $6.20          $6.12
  Jan 2011   -0.72          -0.73              $6.12          $6.08
  Feb 2011   -0.72          -0.73              $6.15          $6.07
  Mar 2011   -0.67          -0.69              $6.45          $6.40

Note Median WTP for the Hulu Plus service is less than $5 in the November and December 2010 samples, while it is $5 in the following three months. For comparability, we calculate the price elasticity at the $5 price point in November and December 2010
maximum number of subscribers (‘‘demand’’) at that level of the price. Hence, an SF of
the WTP can be imagined as reflecting a pseudo-demand curve. This pseudo-demand curve can be estimated by fitting a polynomial function of the appropriate
order in WTP to the survival function. ‘‘Price elasticities’’ can then be calculated
as the ratio of the percentage change in the fitted SF to the percentage change in
(first-order) WTP.
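In symbols, under the third-order polynomial fit used below (a0–a3 are generic notation for the fitted coefficients, not estimates from the chapter), the computation just described is:

```latex
% Fitted survival function and its price elasticity:
\[
  S(p) = a_0 + a_1 p + a_2 p^2 + a_3 p^3, \qquad
  \varepsilon(p) = \frac{dS}{dp}\cdot\frac{p}{S(p)}
                 = \frac{(a_1 + 2a_2 p + 3a_3 p^2)\,p}{S(p)} .
\]
% The revenue-maximizing price p* is where |eps(p*)| = 1, equivalently
% where revenue p*S(p) is stationary:
\[
  \frac{d}{dp}\bigl[p\,S(p)\bigr] = a_0 + 2a_1 p + 3a_2 p^2 + 4a_3 p^3 = 0 .
\]
```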
For each monthly household sample, regression techniques are used to estimate
the SF as a function of WTP. Two techniques were used: ordinary least squares
with robust standard errors and Tobit regression, which is appropriate for censored
data.25 A polynomial of the third order was appropriate in all instances.
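A compact numerical sketch of the whole procedure, on simulated WTP responses rather than the (non-public) Centris data: build the empirical survival function, fit the cubic, evaluate the elasticity at the median WTP, and solve for the unit-elasticity (revenue-maximizing) price.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated WTP responses standing in for one monthly sample.
wtp = np.clip(rng.lognormal(mean=2.3, sigma=0.5, size=2000), 0.0, 54.0)

# Empirical survival function: share willing to pay at least each price.
grid = np.arange(0.0, 55.0)
survival = np.array([(wtp >= p).mean() for p in grid])

# Third-order polynomial fit, S(p) = a0 + a1*p + a2*p**2 + a3*p**3.
a3, a2, a1, a0 = np.polyfit(grid, survival, deg=3)

def elasticity(p):
    s = a0 + a1 * p + a2 * p ** 2 + a3 * p ** 3
    ds = a1 + 2 * a2 * p + 3 * a3 * p ** 2
    return ds * p / s

print(round(elasticity(np.median(wtp)), 2))  # elasticity at the median WTP

# Unit elasticity <=> d[p*S(p)]/dp = 0: solve the cubic and keep the
# economically sensible real root (the revenue-maximizing price).
roots = np.roots([4 * a3, 3 * a2, 2 * a1, a0])
p_star = min(r.real for r in roots if abs(r.imag) < 1e-9 and 0 < r.real < 54)
print(round(p_star, 2))
```

With the chapter's estimates, the same arithmetic puts Netflix's elasticity near -2 at its $10 median WTP and its revenue-maximizing price near $7, as Table 4.11 reports.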
Figure 4.3 shows an example of how monthly pseudo-demand curves for
Netflix were constructed, in this instance from WTP data for December 2010. The
WTP histogram is shown by green columns and the CDF of the WTP data is
depicted by the yellow curve. The SF (shown in red) is calculated as 100 % − CDF. A third-order polynomial in WTP fitted to the SF is depicted by the dashed
red line. Table 4.10 presents the estimated relationship using the two estimation
techniques. Censoring of the WTP data does not appear to be a major factor as the
coefficient estimates vary little between the two techniques.
This fitted line was interpreted as the pseudo-demand curve and elasticities
were calculated at different ‘‘price’’ (or WTP) points. These estimated price
elasticities were robust, with little variation over either months or estimation
techniques.26 Table 4.11 provides information on price elasticity in two ways.
First, it reports the price elasticity at the median WTP level. Second, it shows the
price point at which price elasticity is unitary (in absolute value). This price point
is also known as the revenue-maximizing price since revenue neither increases nor
decreases for small departures from that price.
These results have the following interesting implications:
• If Netflix were to set its price at the median of $10, demand would be quite
elastic. A lower price would, in fact, increase revenue. In contrast, if Hulu were
to set the Hulu Plus price at the median of around $5, demand would be
inelastic. A higher price would, in fact, increase revenue.
• Netflix currently charges $7.99 a month for its pure streaming Netflix Watch
Instantly service and $7.99 more to bundle that with its traditional mail-delivered DVD rental service. Table 4.11 suggests that Netflix may still have overpriced its service somewhat, that is, if maximizing revenue is its goal. A price in
the neighborhood of $7 would be closer to optimal.
• Hulu currently charges $7.99 a month for its pure streaming Hulu Plus service.
This price matches that set by Netflix and is, perhaps, a competitive response.
However, as noted earlier, consumer interest and their WTP for Hulu Plus
content are not at the same level as those for Netflix. From Table 4.11, it appears
that pricing at the median of $5 would not be revenue-maximizing. Rather, the
price should be somewhat more than $6. It appears that Hulu may have overpriced its Hulu Plus service to a greater degree than has Netflix.
• A revenue-maximizing price does not necessarily maximize profits as well.
However, in the absence of publicly available cost information, the WTP survey
data provide the best available tool for selecting the revenue-maximizing price. Pricing ‘‘too
high’’ or ‘‘too low’’ leaves unexploited revenue opportunities on the table.
4.7 Conclusion
Centris’ survey research provides useful insights into the burgeoning OTT phenomenon and, in particular, the move to streamed video (either by itself or in
combination with traditional Pay TV). This research indicates that the onset of
OTT is still at a nascent stage and does not yet represent a substantial threat to Pay
TV service providers in the United States. However, the proliferation of platforms
and devices through which video programming can be streamed or downloaded
may mean that it is only a matter of time before OTT becomes a serious competitor
to Pay TV. Much will depend on how content is created and distributed in the
future—the rise of hybrid entities that both create and distribute content over
low-cost, high-bandwidth broadband connections could mark an important turning
point.
Apart from these prognostications, this chapter also attempts to rigorously
define and measure the various forms of OTT, not all of which represent a
replacement of traditional Pay TV. Any failure to make these distinctions can lead
to seemingly contradictory and confusing forecasts of the future of OTT video.
The term ‘‘video cord-cutting’’ is now coming into vogue, following by a decade
or so the form of ‘‘voice cord-cutting’’ that emerged from the rapid diffusion of
mobile telecommunications in the United States and other developed countries.
The nature—and implications—of video cord-cutting are more complex. For video
cord-cutting to advance, a significant variety of devices and platforms must be
available, as must more powerful and versatile internet connections. In the United
States, some of the largest providers of video service are also those of internet
service. Therefore, the extent to which those service providers will resist OTT in
order to protect their Pay TV business or embrace OTT in order to fortify their
internet connections business will determine the evolution of OTT in a major way.
For now, the threat to the core Pay TV business looks manageably small.
References
Day JC (1996) Projections of the number of households and families in the United States: 1995–2010. U.S. Bureau of the Census, Current Population Reports P25-1129. U.S. Government Printing Office, Washington, DC
Federal Communications Commission (2011) Internet access services: status as of June 30, 2010. Wireline Competition Bureau, Industry Analysis and Technology Division
Greene WH (2003) Econometric analysis, 5th edn. Prentice Hall (Pearson Education, Inc.), Upper Saddle River, NJ
Greene WH, Hensher DA (2010) Modeling ordered choices: a primer. Cambridge University Press, New York
Netflix (2012) http://en.wikipedia.org/wiki/Netflix
Train KE (2003) Discrete choice methods with simulation. Cambridge University Press, New York
Chapter 5
Blended Traditional and Virtual Seller
Market Entry and Performance
T. Randolph Beard, Gary Madden and Md. Shah Azam
5.1 Introduction
The past decade has produced much research investigating online market entry,
both by blended traditional and by virtual sellers. A primary focus of these
analyses is the identification of factors that determine entry and survival (e.g.,
Dinlersoz and Pereira 2007; Nikolaeva 2007; respectively). Some strategic reasons
explaining why firms enter online markets include the following: cost reduction
(Garicano and Kaplan 2001; Lucking-Riley and Spulber 2001); market experimentation and expansion (e.g., Lieberman 2002 examines potential first-mover
advantages); quality of service improvement (Litan and Rivlin 2001); and preemption of rival entry (Dinlersoz and Pereira 2007). By contrast, there is a paucity
of empirical work on the impact of firm entry on performance. When postentry
performance is assessed, available data typically limit analysis to a specific
industry, and more importantly, do not enable analysts to adequately ‘‘match’’
performance with entry strategy (e.g., DeYoung 2005).
The MIT Center for Digital Business and the Columbia Institute for Tele-Information provided
support during the time this paper was revised. Helpful comments were provided by
participants in seminars at Columbia University and Curtin University. Bill Greene was
generous in his assistance on the econometric method. We are grateful to Warren Kimble and
Aaron Morey for excellent research assistance. The authors are responsible for all remaining
errors.
T. Randolph Beard
Auburn University, 232, Marion Circle, Auburn, AL 36830, USA
e-mail: beardtr@auburn.edu
G. Madden
Curtin University, 65 Redfern Street, North Perth, WA 6006, Australia
e-mail: G.Madden@curtin.edu.au
Md. S. Azam
University of Rajshahi, 4/53 Chapman Road, Bentley, WA 6102, Australia
e-mail: mdshah.azam@yahoo.com.au
To address this shortcoming, this study considers firms’ postentry revenue and
cost performance using data from a unique sample of 1,001 Australian small
blended and virtual online firms. These survey data contain information on firm
and industry control variables, web site investment and (strategic) reasons for
entry. In particular, through interaction variables, the study examines whether
performance varies systematically by industry and firm size. While online markets
appear to provide small firms an opportunity to pioneer new and innovative
business models, entry is skewed toward larger firms. For instance, the Business
Council of Australia highlights the relatively low online business activity by Small
and Medium Enterprises (SMEs) compared with large companies (Dunt and
Harper 2002).
Furthermore, the analysis considers whether learning effects (location-based,
network-based or enterprise-based), scale effects, or strategic action effects impact
performance. These data also suggest several related questions: How do traditional
blended and virtual sellers differ in their performance? What can be said about the
relationship between postentry performance and the strategic motivation for online
market entry?
Importantly, the analysis recognizes that the online market entry decision is in
part based on the expected revenue and cost responses. Under such circumstances,
particular ‘‘reason for entry’’ variables (depending on whether revenue or cost
responses are being modeled) are potentially endogenous, that is, not independent
from the disturbances of the response (performance) functions. Allowing for
endogeneity is especially important for cross section data in which the use of a
fixed effects model with individual-specific effects is not possible.
The empirical approach used here differs from previous work: potential endogeneity of the ‘‘reason for entry’’ variables in the response functions is accommodated by using an ordered probit response function with an endogenous ordered
regressor, rather than the conventional Heckman (1978, 1979) two-step method.
The nonlinearity of the probit model is an essential difficulty for the two-step
correction which often makes the bias worse (Freedman and Sekhon 2008).
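The concern can be seen in a small simulation (all names and magnitudes invented; this illustrates the bias that motivates the systems approach, not the authors' estimator): when the entry-motive regressor shares an unobserved component with the performance response, a naive single-equation ordered probit overstates its effect.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
n = 5000

u = rng.normal(size=n)                        # unobserved firm quality
z = (rng.normal(size=n) + u > 0).astype(int)  # entry motive, endogenous in y
x = rng.normal(size=n)                        # exogenous control

# Performance response shares the unobserved component u with z.
latent = 0.5 * x + 0.3 * z + u + rng.normal(size=n)
y = np.digitize(latent, [-1.0, 1.0])          # three ordered performance bands

X = pd.DataFrame({"x": x, "z": z})
naive = OrderedModel(y, X, distr="probit").fit(method="bfgs", disp=False)

# The naive estimate is well above the true 0.3 because cov(z, u) > 0.
print(naive.params["z"])
```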
The paper proceeds as follows. The next section describes strategic reasons
for online market entry by both traditional blended and virtual sellers. Aside
from strategy (reason for entry) variables, the manner in which performance may
be impacted by experience and/or scale effects are considered. Next, the sample
data are described, and variables used in the empirical analysis are defined. The
following section presents the statistical models, and the results from the
regressions are then provided. A penultimate section explores firm online performance with respect to the impact of various arguments. The last section
concludes.
5.2 Strategic Entry and Postentry Performance
Much of the empirical literature analyzing market turbulence (entry, exit, and
survival) focuses exclusively on firm population cohort data (e.g., Segarra and
Callejón 2002; Disney et al. 2003). A consistent finding is that entry is relatively
easy, but survival is not. The most palpable consequence of entry is exit. As most
entry attempts ultimately fail, and as most entrants require up to ten years to
become capable of competing on a par with incumbents, incumbents find costly
attempts to deter entry unprofitable.
Furthermore, entry is often a vehicle to introduce an innovation, particularly in
the early phases of industry evolution (Geroski 1995: 436). At some point, consumer preferences become reasonably well formed and coalesce around a subset of
products. At this stage, competitive rivalry shifts from competition between or
among product designs to competition based on prices and costs of a particular
design (Geroski 1995: 437).
Studies of survivor performance report that small firms often effectively compensate for scale and other size-related disadvantages. For instance, Reid and
Smith (2000) find that small entrant firm growth (in employment, rate of return,
and productivity) is higher than for large entrants. Interestingly, Audretsch (1995)
argues that survivor employment growth is systematically greater in highly
innovative industries relative to that in less innovative industries.
Since the mid-1990s, researchers have sought to understand why firms enter online markets and, in particular, whether early entry provides sustainable first-mover gains (including an enhanced prospect for survival). Dinlersoz and Pereira
(2007) indicate that efficiency improvement, customer loyalty and market
expansion (via an additional marketing channel and wider geographic reach) are
the principal economic reasons for entry. The roles of transaction, sales, inventory
and distribution cost savings are identified by Garicano and Kaplan (2001), Litan
and Rivlin (2001), and Lucking-Reiley and Spulber (2001). First-mover advantage is
tied to firm or brand loyalty in online markets by Smith and Brynjolfsson (2001)
while the extension of geographic markets and product mix is modeled by
Dinlersoz and Pereira (2007).
However, it is important to recognize that these entry motivations are often quite
mixed. For instance, blended and virtual environments exhibit different market
structures, demand curves, and customer segments. Therefore, price competition is
probably stronger in online markets, and the corresponding price elasticity of
demand higher. To compensate, the size and reach of the online market must be
greater so that adequate returns are realized. As a result, firms entering online
markets often intend to augment their geographic customer reach to gain a larger
demand pool. However, to do so, firms must improve their efficiency and reduce
prices relative to those dominating the blended market. Schmalensee (1982)
establishes that early movers can benefit from customer uncertainty over product
quality. What remains unresolved for online markets is the durability of any such first-mover advantage.
5.3 Data and Variables
The present analysis explicitly allows for potential endogeneity in the statistical
modeling through systems estimation. Additionally, the consideration of strategic
entry is broadened to include meeting the goals of market expansion, cost
reduction, introduction of a new good, and anticipation of rival entry, customer
requests, and supplier requirements. Naturally, the source of endogeneity varies by
performance goal. Finally, several size-related hypotheses are empirically
examined.
First, various size metrics are introduced through arguments related to firm size
(number of employees), market size (geographic location), and network size
(number of stores). Second, interactions are included in both the strategic and
performance response equations to allow for the presence of learning effects, scale
effects (location, network size, and enterprise size), and strategic anticipation and
supplier effect variables. In particular, learning (years established online) is
interacted with initial investment (commitment) and firm orientation (retail and
business) variables. These commitment and orientation variables are also interacted with the above measures of internal and external sources of scale economies.
The anticipated rival entry and supplier requirement variables are similarly interacted.
A unique profile of Australian small and medium enterprise (SME) online market activity was obtained from a survey funded by the Australian Research Council.¹ In this survey, the manager of each enterprise is interviewed by telephone. Sampling is exogenously stratified through screening questions requiring that the firm employ fewer than 200 persons and conduct online business via a web site. The 1,001 sample observations thus obtained comprise firms located in Melbourne (201), Sydney (201), Brisbane (101), Adelaide (100), Perth (99), Canberra (50), Darwin (50) and Hobart (50), and the regional centers of Albury-Wodonga (50), Townsville (50) and Newcastle (49).
Information collected includes: whether the firm conducts only online activity or is a blended firm (BLENDED); elapsed years the firm has conducted online activity (ESTAB); geographic location of the head office and number of branches (LOCATION, STORES); number of full-time employees (SMALL); industry classification, viz., whether primarily RETAIL or BUSINESS orientated²; and the initial web site investment (INITIAL). The reasons for entry are also sought. Managers are asked whether entry is to introduce a new good (NEWGOOD), respond to customer requests (CUSTOMER), respond to supplier requirements (SUPPLIER), or anticipate rival entry (ANTICIPATE). Importantly, these entry reasons are treated as exogenous to subsequent REVENUE and COST performance. Conversely, information collected to determine whether entry is intended to increase sales (EXPAND) or reduce costs (EFFICIENCY) is treated as potentially endogenous to firms' REVENUE and COST performance, respectively. Finally, REVENUE and COST performance data are collected for firm activity during the previous 12 months.

¹ The survey fieldwork was conducted by McGregor Tan Research, a market research firm accredited by Interviewer Quality Control Australia (IQCA). Contact with respondents was initiated using computer-aided telephone interviewing (CATI) software. The sample units were selected at random from the Telstra Yellow Pages. Three screening questions were asked prior to the conduct of the survey. Funding for the survey was provided by Australian Research Council Large Grant No. A00105943. The questionnaire contains 59 questions and comprises the sections: (a) Respondent and Firm Profile; (b) Reasons for Entering Online Markets and Initial Investment; (c) Initial Online Market Outcomes; and (d) Online Market Futures.
Table 5.1 Firm characteristics

ANZSIC single-digit division              Sample (%)   ABS (%)
Retail trade                                 17.7        14.2
Accommodation, cafes, and restaurants        16.8         4.1
Property and business services               10.5        24.0
Personal and other services                   9.2         7.4
Manufacturing                                 6.7        12.5
Transport and storage                         6.6         4.5
Cultural and recreational services            6.2         6.2
Finance and insurance                         5.6         2.5
Wholesale trade                               5.1         9.1
Construction                                  4.7         9.4
Other                                        10.9         6.1

Full-time equivalent employees
1–4                                          57.7        63.9
5–19                                         29.7        29.3
20–99                                        11.8         6.2
100–199                                       0.8         0.6

Note: Auxiliary office location applies to firms with more than one office. Source: ABS (2009) Table 2.1, businesses with web presence; ABS (2010) Table 3.4, employer size group by industry division.
Table 5.1 profiles the sampled firms by Australian and New Zealand Standard Industrial Classification (ANZSIC) single-digit division. The distribution of firms is similar to that of the Australian Bureau of Statistics (ABS) count of businesses with a web presence in 2008 (ABS 2009, Table 2.1). Casual inspection, however, suggests that the ``Manufacturing'' and ``Property and Business Services'' categories are under-sampled, while the ``Accommodation, Cafes and Restaurants'' category is oversampled. ``Retail Trade'' is the most represented industry (17.7 %). Further, the distribution by full-time employees is similar to that reported by the ABS for 2001 (ABS 2010: Table 3.4).
² BUSINESS comprises the ANZSIC single-digit divisions ``Finance and Insurance'', ``Property and Business Services'', and ``Wholesale Trade.''
Table 5.2 Response and strategic equation dependent variables

Variables     Definition                                                         Mean   SD
Response variables
REVENUE       Revenues since entry: = 4 (increased > 10 %); = 3 (increased      2.11   1.31
              6–10 %); = 2 (increased 2–5 %); = 1 (stable ± 1 %); = 0 (decreased)
COST          Unit costs during the past year: = 4 (increased > 10 %); = 3      2.13   1.07
              (increased 6–10 %); = 2 (increased 2–5 %); = 1 (stable ± 1 %);
              = 0 (decreased)
Strategic variables
EXPAND        Entered online market to increase sales: = 4 (most important);    2.51   1.52
              = 3 (somewhat important); = 2 (some consideration); = 1 (slight
              consideration); = 0 (not relevant)
EFFICIENCY    Entered online market to reduce costs: = 4 (most important);      1.72   1.50
              = 3 (somewhat important); = 2 (some consideration); = 1 (slight
              consideration); = 0 (not relevant)
Tables 5.2 and 5.3, respectively, present the definition, mean and standard
deviation of the dependent and independent variables used in the regressions.
Answers to survey questions are mostly coded either binary (0, 1) or categorical (0,
…, 4). The exception is ESTAB. Table 5.2 contains the REVENUE and COST
performance response variables to be modeled.³ Furthermore, EXPAND is treated
as a potentially endogenous argument in the REVENUE performance equation,
while EFFICIENCY is treated in a similar manner for the COST response equation. These paired response and strategy equations are to be estimated as a system.
The independent variables contained in Table 5.3 are classified variously as:
describing the firm or industry; measuring the commitment to online market entry
(initial web site investment); and reasons for entry. These entry reasons are treated
as exogenous strategic variables in the response equations and were designed to
align with the motivations identified by the literature. The inclusion of ESTAB in
Table 5.3 is intended to allow for the presence of standard learning effects, that is,
that performance improves with experience.
³ These variables are often considered by economists as objectives of agent optimization. Steinfield et al. (2002) argue that innovation is based on the search for synergistic opportunity. That is, aligning goals across physical and virtual channels suggests that the ``parent'' firm benefits from sales stemming from either channel. Higher revenues can arise from geographic and product market extension, thus adding revenue streams otherwise not feasible from physical outlets. Synergistic benefits also arise from lower costs (savings may occur through improved labor productivity and reduced inventory, advertising, and distribution costs).
Table 5.3 Independent variables, levels

Variables     Definition                                                        Mean   SD
Firm
BLENDED       = 1 if operates in physical and online markets; = 0 otherwise    0.89   0.31
ESTAB         Years since online market entry                                  3.85   3.00
SMALL         = 1 if firm employs fewer than 20 persons; = 0 otherwise         0.87   0.33
LOCATION      = 1 if head office in Sydney or Melbourne; = 0 otherwise         0.40   0.49
STORES        = 1 if more than one store; = 0 otherwise                        0.27   0.44
Industry
BUSINESS      = 1 if ``Business Oriented Services''; = 0 otherwise             0.21   0.40
RETAIL        = 1 if ``Retail''; = 0 otherwise                                 0.17   0.38
Web site
INITIAL       = 1 if initial web site investment exceeded $20,000;             0.11   0.31
              = 0 otherwise
Entry reason
NEWGOOD       = 1 if firm entered online market to introduce a new product;    0.40   0.49
              = 0 otherwise
CUSTOMER      = 1 if firm entered online market in response to customer        0.44   0.49
              request; = 0 otherwise
SUPPLIER      = 1 if firm entered online market in response to supplier        0.23   0.42
              requirement; = 0 otherwise
ANTICIPATE    = 1 if firm entered online market in response to rival entry     0.42   0.49
              threat; = 0 otherwise

Note: ESTAB is a continuous variable measured in years. All other independent variables are coded binary or categorical. Costs of entry (INITIAL) in Australian dollars.
In Table 5.4, several interactions are included to test whether either commitment-based (INIT*EST) or firm orientation-based (RET*EST and BUS*EST)
learning effects are present.⁴ Additionally, with scale potentially an important
reason to enter online markets, SMALL (number of employees), LOCATION
(large city location), and STORES (number of outlets) are all included in
Table 5.3 to provide for the alternative sources of scale economies. Table 5.4 also
includes the scale interactions INIT*LOC, RET*LOC, and BUS*LOC to allow for
location-based scale effects. Similarly, network-based (INIT*STOR, RET*STOR,
BUS*STOR) and enterprise-based (INIT*BIG, RET*BIG and BUS*BIG) scale
effect interaction arguments are included in Table 5.4.
Furthermore, Table 5.4 contains several strategic variable interaction arguments. Specifically, INIT*ANTI, RET*ANTI, and BUS*ANTI are intended to test whether commitment-based or firm orientation-based entry driven by the firm's anticipation of rival entry matters for subsequent performance. If firms enter the online marketplace in an attempt to ``front-run'' a potential rival, what consequences accrue for subsequent performance? Finally, the arguments INIT*SUP, RET*SUP, and BUS*SUP allow for similar effects based on supplier-driven entry.
⁴ Although the firms comprising the sample are ``small,'' the potential for scale economies arises because the employee range is [1, 200]. Also, 400 firms are located within the large cities of Melbourne and Sydney. Finally, 72.4 % of the sample firms operate a single site.
Table 5.4 Independent variables, interactions

Variables     Definition              Mean   SD
Learning effects
INIT*EST      INITIAL*ESTAB           0.47   1.71
RET*EST       RETAIL*ESTAB            0.59   1.63
BUS*EST       BUSINESS*ESTAB          0.87   2.20
Location scale effects
INIT*LOC      INITIAL*LOCATION        0.05   0.22
RET*LOC       RETAIL*LOCATION         0.06   0.23
BUS*LOC       BUSINESS*LOCATION       0.10   0.31
Network scale effects
INIT*STOR     INITIAL*STORES          0.05   0.21
RET*STOR      RETAIL*STORES           0.05   0.22
BUS*STOR      BUSINESS*STORES         0.08   0.27
Enterprise scale effects
INIT*BIG      INITIAL*BIG             0.03   0.17
RET*BIG       RETAIL*BIG              0.01   0.12
BUS*BIG       BUSINESS*BIG            0.03   0.18
Strategic anticipation effects
INIT*ANTI     INITIAL*ANTICIPATE      0.05   0.22
RET*ANTI      RETAIL*ANTICIPATE       0.08   0.27
BUS*ANTI      BUSINESS*ANTICIPATE     0.09   0.29
Strategic supplier effects
INIT*SUP      INITIAL*SUPPLIER        0.03   0.17
RET*SUP       RETAIL*SUPPLIER         0.03   0.18
BUS*SUP       BUSINESS*SUPPLIER       0.05   0.23

Note: BIG = 1 − SMALL, viz., = 1 if firm employs more than 20 persons; = 0 otherwise.
Table 5.5 REVENUE and COST sample frequencies

                               Frequency   Percent
REVENUE
Increased by > 10 %               232        23.1
Increased by 6–10 %               158        15.8
Increased by 2–5 %                166        16.6
Remained stable at ± 1 %          381        38.1
Decreased                          64         6.4
COST
Increased by > 10 %               126        12.6
Increased by 6–10 %               239        23.9
Increased by 2–5 %                305        30.4
Remained stable at ± 1 %          298        29.8
Decreased                          33         3.3
Total                           1,001       100.0
Table 5.6 Conditional performance probabilities

                        Revenue (% change)                  Cost (% change)
Probability             >10    6–10   2–5    ±1     Fall    >10    6–10   2–5    ±1     Fall
P(·|NEWGOOD = 1)        0.10   0.08   0.07   0.14   0.03    0.05   0.10   0.12   0.12   0.01
P(·|CUSTOMER = 1)       0.10   0.07   0.08   0.17   0.03    0.05   0.11   0.14   0.13   0.01
P(·|SUPPLIER = 1)       0.06   0.04   0.04   0.09   0.01    0.03   0.05   0.07   0.06   0.01
P(·|ANTICIPATE = 1)     0.10   0.08   0.07   0.15   0.03    0.06   0.10   0.13   0.13   0.02
Table 5.5 reports REVENUE and COST sample frequencies. The responses
concern firm performance by category since entry. The firms’ revenue and cost
performances fall into five mutually exclusive and exhaustive categories: increased
substantially, increased modestly, increased slightly, remained steady (unchanged), or decreased. The reported frequencies suggest online market entry is
associated with improved or steady REVENUE (93.6 %) and COST (33.1 %)
performance. However, COST increases are reported by 66.9 % of sampled firms.
The conditional performance probabilities reported in Table 5.6 show postentry REVENUE and COST performance variations by entry reason. Interestingly, the implied REVENUE and COST performance responses are essentially identical whether entry is to introduce a new good (NEWGOOD), comply with customer requests (CUSTOMER), or anticipate rival entry (ANTICIPATE). Not surprisingly, the REVENUE increase is smaller when entry occurs to comply with supplier requirements (SUPPLIER). A similar pattern is reported for the COST performance responses.
5.4 Bivariate-Ordered Probit Model
The bivariate probit model with endogenous dummy variables belongs to the
general class of simultaneous equation models with both continuous and discrete
endogenous variables introduced by Heckman (1978). Maddala (1983) lists the
model among recursive models of dichotomous choice. The recursive structure is
comprised of structural performance and reduced form (for the potentially
endogenous dummy) equations:
$$\begin{aligned}
I^{o}_{1j} &= \beta_1^{T} x_{1j} + \varepsilon_{1j}\\
I^{o}_{2j} &= \beta_2^{T} x_{2j} + \varepsilon_{2j} = \delta_1 I_{1j} + \beta_2^{T} x_{1j} + \beta_3^{T} z_{2j} + \varepsilon_{2j}
\end{aligned} \qquad (5.1)$$
where $I^{o}_{1j}$ and $I^{o}_{2j}$ are latent variables, and $I_{1j}$ and $I_{2j}$ are discrete variables.⁵

⁵ Maddala (1983: 122) states that the parameters of the second equation are not identified if there are no exclusion restrictions on the exogenous variables. Wilde (2000) demonstrates, for multiple equation probit models with endogenous dummy regressors, that no restrictions are needed if there is sufficient variation in the data, viz., each equation contains at least one varying exogenous regressor.

The polychotomous observation mechanism for $I^{o}_{1j}$ is the result of complete censoring of the latent dependent variable, with the observed counterpart given by:
$$I_{1j} = \begin{cases}
0 & \text{if } I^{o}_{1j} < \mu_0\\
1 & \text{if } \mu_0 < I^{o}_{1j} \le \mu_1\\
2 & \text{if } \mu_1 < I^{o}_{1j} \le \mu_2\\
3 & \text{if } \mu_2 < I^{o}_{1j} \le \mu_3\\
4 & \text{if } I^{o}_{1j} > \mu_3
\end{cases} \qquad (5.2)$$
The second polychotomous variable, $I_{2j}$, is observed following the rule:
$$I_{2j} = \begin{cases}
0 & \text{if } I^{o}_{2j} < \kappa_0\\
1 & \text{if } \kappa_0 < I^{o}_{2j} \le \kappa_1\\
2 & \text{if } \kappa_1 < I^{o}_{2j} \le \kappa_2\\
3 & \text{if } \kappa_2 < I^{o}_{2j} \le \kappa_3\\
4 & \text{if } I^{o}_{2j} > \kappa_3
\end{cases} \qquad (5.3)$$
where $x_1$ denotes the included exogenous regressors, $z_2$ the excluded exogenous regressors (instruments), and $I_{1}$ the potentially endogenous ordered regressor. $\mu$, $\beta_1$, $\beta_2$ and $\beta_3$ are parameter vectors, and $\delta_1$ is a scalar parameter.⁶ The error terms are assumed to be independently and identically distributed as bivariate normal:
$$\begin{pmatrix} \varepsilon_{1j} \\ \varepsilon_{2j} \end{pmatrix} \sim \text{IIDN}\!\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix},\; \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right) \qquad (5.4)$$
The bivariate model is analogous to the seemingly unrelated regressions (SUR) model for the ordered probit case, with the equations linked by $\mathrm{Cor}(\varepsilon_{1j}, \varepsilon_{2j}) = \rho$ (Greene 2008: E22-78). In this setting, the exogeneity condition is stated in terms of the polychoric correlation coefficient $\rho$, which can be interpreted as the correlation between the unobservable explanatory variables in the equations. When $\rho = 0$, $I_{1j}$ and $\varepsilon_{2j}$ are uncorrelated, and $I_{1j}$ is exogenous for the second equation of (5.1). Conversely, $\rho \neq 0$ implies that $I_{1j}$ is correlated with $\varepsilon_{2j}$ and is therefore endogenous.

⁶ For all probabilities to be positive requires $0 < \mu_0 < \mu_1 < \mu_2 < \mu_3$ and $0 < \kappa_0 < \kappa_1 < \kappa_2 < \kappa_3$.
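To make the recursive structure concrete, the following sketch simulates the data-generating process of Eqs. (5.1)–(5.4). It is purely illustrative: the coefficient values, thresholds, and regressors are hypothetical stand-ins rather than the chapter's estimates, and the code is not the LIMDEP routine used in the next section.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_001                                   # matches the survey sample size

# Hypothetical parameter values, chosen only for illustration
beta1, beta2, beta3, delta1 = 0.8, 0.4, 0.5, 0.6
rho = 0.34                                  # disturbance correlation (cf. Table 5.7)
mu = np.array([0.0, 1.0, 1.4, 1.9])         # thresholds for I1 (Eq. 5.2)
kappa = np.array([0.0, 0.3, 0.9, 1.7])      # thresholds for I2 (Eq. 5.3)

def censor(latent, cuts):
    """Censor a latent index into ordered categories 0..4 (Eqs. 5.2-5.3)."""
    return np.searchsorted(cuts, latent)

x1 = rng.normal(size=n)                     # included exogenous regressor
z2 = rng.normal(size=n)                     # excluded exogenous regressor (instrument)

# Bivariate normal disturbances with Cor(e1, e2) = rho (Eq. 5.4)
e = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

I1_latent = beta1 * x1 + e[:, 0]            # reduced form for the ordered regressor
I1 = censor(I1_latent, mu)                  # observed ordered variable, 0..4
I2_latent = delta1 * I1 + beta2 * x1 + beta3 * z2 + e[:, 1]  # structural equation
I2 = censor(I2_latent, kappa)               # observed ordered response, 0..4

# With rho != 0, I1 is correlated with e2, so a single-equation ordered probit
# of I2 on I1 attributes part of e2 to delta1 and biases the estimate.
print("Cor(I1, e2) =", round(float(np.corrcoef(I1, e[:, 1])[0, 1]), 3))
```

Running the sketch shows a clearly nonzero correlation between the ordered regressor and the second-equation disturbance, which is precisely the endogeneity that joint FIML estimation of the two ordered probits is designed to absorb.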
5.5 Empirical Results
Estimation is conducted with LIMDEP version 9.0. Full efficiency in estimation and an estimate of $\rho$ are achieved via full information maximum likelihood (FIML), which LIMDEP's implementation of the model uses rather than the generalized method of moments (GMM). The Lagrange multiplier test for heteroskedasticity is applied to the individual equations that comprise the bivariate probit system. In all tests the null hypothesis is rejected, so the error process is heteroskedastic.
Marginal effects are defined as the probability-weighted average of the effect of the variable on the joint probability of the observed outcome. Terms for common variables are the sums of two effects. The LIMDEP bivariate probit model does not compute the standard errors for the partial effects.⁷ Standard errors for the partial effects are therefore obtained from single-equation estimates, using the bivariate-ordered probit coefficients and thresholds as starting values for each of the single-equation estimators and adding the options MAXIT = 0 and MARGINS to the commands. Importantly, there is no cross effect in the partial effects; that is, they would be computed an equation at a time in any case.
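For readers without LIMDEP, the partial effect of a regressor on an outcome-category probability in an ordered probit can be computed directly from the estimated index and thresholds. The sketch below is a minimal, generic illustration with made-up coefficients and thresholds; it is not the chapter's estimator, and it omits the bivariate structure and the standard errors discussed above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 1_001
kappa = np.array([0.0, 0.3, 0.9, 1.7])      # hypothetical response thresholds
beta_x, beta_d = 0.5, 0.4                   # hypothetical coefficients
x = rng.normal(size=n)                      # a continuous control variable

def prob_top(d):
    """P(I2 = 4 | x, d) = P(latent index > kappa_3) in an ordered probit."""
    return 1.0 - norm.cdf(kappa[-1] - (beta_x * x + beta_d * d))

# Average partial effect of switching a dummy regressor d from 0 to 1
# on the probability of the top response category, averaged over the sample
ape = np.mean(prob_top(1) - prob_top(0))
print(f"APE of d on P(I2 = 4): {ape:.4f}")
```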
For all model specifications, the estimated polychoric correlation coefficients between $I_{1j}$ and $I_{2j}$ are significantly different from zero (see Table 5.7). With a nonzero $\rho$ in place, the strategic argument EXPAND is endogenous. Furthermore, the correlations between the REVENUE and EXPAND equation errors are positive, suggesting, for example, that unobservable factors which increase the probability of the EXPAND motive also increase the probability of higher postentry REVENUE. Finally, all threshold parameters are significantly different from zero and satisfy the order conditions.
Firms form their strategies based on the expected responses from customers postentry. Under such circumstances, the strategy variables are endogenous; that is, they are not independent of the disturbances in the response function. In particular, firms that decide to enter an online market to expand sales necessarily form an assessment of future sales. Furthermore, managers typically allocate better or more resources to an online market for which they expect higher sales. Models that ignore this endogeneity will likely overestimate the effects of the resources on subsequent performance.
The results for the ordered probit relating entry into online markets to expanded sales (EXPAND) contain few surprises: see Table 5.8. As expected, when the strategic objective is to expand the market, competing strategic objectives that do not align well with it have a negative impact on EXPAND, viz., NEWGOOD, CUSTOMER, and ANTICIPATE. The negative impact of SUPPLIER appears only in the F + S specification and vanishes thereafter with the introduction of the learning (L), scale (SL, NS, ES), and strategic interaction (SA, SS) variables. Firms that are orientated toward retail rather than business services appear less likely to enter online markets to expand sales. This may reflect a belief that online entry ultimately leads to the cannibalization of other channel sales. The network (stores) and enterprise (employees) scale interactions are positive and commitment (initial investment) based. The anticipatory interaction has identical characteristics.
⁷ The standard errors of the coefficients for the bivariate model are not correct because of the scaling effect.
Table 5.7 REVENUE equation threshold parameters and disturbance correlations

Variable   F+S            F+S+B+L        F+S+B+SL       F+S+B+NS       F+S+B+ES       F+S+B+SA       F+S+B+SS
Response equation
μ1         1.047 (12.41)  1.012 (13.24)  1.011 (13.35)  1.001 (13.28)  1.009 (13.29)  1.012 (13.24)  1.001 (13.24)
μ2         1.427 (12.98)  1.382 (14.25)  1.382 (14.39)  1.380 (14.34)  1.380 (14.31)  1.383 (14.28)  1.378 (14.26)
μ3         1.916 (12.35)  1.867 (13.85)  1.866 (14.99)  1.865 (13.97)  1.863 (13.95)  1.927 (13.90)  1.861 (13.88)
Strategic equation
κ1         0.285 (7.99)   0.410 (8.24)   0.414 (8.25)   0.412 (8.25)   0.413 (8.23)   0.416 (8.17)   0.412 (8.25)
κ2         0.943 (16.48)  1.184 (17.36)  1.189 (17.36)  1.185 (17.32)  1.186 (17.34)  1.194 (17.00)  1.184 (17.27)
κ3         1.654 (23.87)  1.915 (24.57)  1.921 (24.65)  1.918 (24.65)  1.919 (24.65)  1.927 (24.20)  1.914 (24.56)
Disturbance correlation
ρ          0.323 (9.74)   0.345 (14.24)  0.344 (14.10)  0.346 (14.33)  0.345 (14.12)  0.344 (14.16)  0.345 (14.22)

Note: t ratios in parentheses. * significant at 10 %; ** significant at 5 %. Effect: F firm, S strategic, B blended, L learning, SL location scale, NS network scale, ES enterprise scale, SA strategic anticipation, SS strategic supply.
Table 5.8 EXPAND equation estimates, partial effects

Variables    F+S                F+S+B+L            F+S+B+SL           F+S+B+NS           F+S+B+ES           F+S+B+SA           F+S+B+SS
BLENDED      –                  -0.33949 (0.00)    -0.34612 (0.00)    -0.37240 (0.00)    -0.34461 (0.00)    -0.39390 (0.00)    -0.32838 (0.00)
ESTAB        -0.00127 (0.56)    0.00107 (0.91)     0.00008 (0.07)     0.00014 (0.12)     0.00012 (0.11)     0.00017 (0.14)     0.00008 (0.08)
SMALL        0.00292 (0.27)     -0.00327 (0.69)    -0.00412 (0.87)    -0.00312 (0.61)    -0.00031 (0.05)    -0.00513 (0.96)    -0.00390 (0.83)
LOCATION     -0.00715 (1.06)    -0.00278 (0.90)    -0.00314 (0.78)    -0.00295 (0.89)    -0.00347 (1.13)    -0.00278 (0.79)    -0.00288 (0.94)
STORES       0.01175 (1.48)     0.00521 (1.44)     0.00491 (1.37)     0.00580 (1.14)     0.00472 (1.32)     0.00528 (1.29)     0.00479 (1.34)
BUSINESS     0.00284 (0.34)     0.00699 (1.11)     -0.00169 (0.33)    0.00319 (0.64)     0.00172 (0.42)     0.00040 (0.07)     -0.00002 (0.01)
RETAIL       -0.02493** (2.78)  0.00149 (0.22)     -0.00873* (1.75)   -0.00975** (1.96)  -0.01056** (2.49)  -0.01624** (2.59)  -0.01021** (2.26)
INITIAL      0.02472** (2.33)   0.01276 (1.56)     0.00906 (1.35)     0.00065 (0.09)     0.00221 (0.39)     -0.00541 (0.70)    0.00908 (1.57)
NEWGOOD      -0.05348** (7.50)  -0.01805** (5.73)  -0.01807** (5.74)  -0.01967** (5.81)  -0.01813** (5.79)  -0.02008** (5.59)  -0.01809** (5.76)
CUSTOMER     -0.05732** (8.23)  -0.01781** (5.75)  -0.01764** (5.71)  -0.01914** (5.74)  -0.01779** (5.77)  -0.02023** (5.73)  -0.01757** (5.68)
SUPPLIER     -0.01782** (2.15)  -0.00473 (1.31)    -0.00502 (1.39)    -0.00509 (1.31)    -0.00487 (1.36)    -0.00547 (1.33)    -0.00498 (1.05)
ANTICIPATE   -0.05633** (8.05)  -0.01876** (6.04)  -0.01850** (5.98)  -0.01999** (5.99)  -0.01819** (5.89)  -0.02705** (5.83)  -0.01851** (5.98)
EST*EST      0.00016 (1.13)     0.00004 (0.48)     0.00003 (0.41)     0.00004 (0.47)     0.00003 (0.46)     0.00003 (0.35)     0.00003 (0.44)

Interaction terms (each block enters only its own specification):
F+S+B+L:   INIT*EST -0.00131 (0.86); RET*EST -0.00326** (1.98); BUS*EST -0.00164 (1.32)
F+S+B+SL:  INIT*LOC -0.00266 (0.28); RET*LOC -0.00378 (0.44); BUS*LOC 0.00465 (0.61)
F+S+B+NS:  INIT*STORE 0.02074** (1.96); RET*STORE -0.00302 (0.31); BUS*STORE -0.00985 (1.13)
F+S+B+ES:  INIT*BIG 0.02116** (1.96); RET*BIG 0.01160 (0.83); BUS*BIG -0.00742 (0.73)
F+S+B+SA:  INIT*ANTI 0.03374** (3.10); RET*ANTI 0.01244 (1.35); BUS*ANTI -0.00020 (0.02)
F+S+B+SS:  INIT*SUP -0.00418 (0.40); RET*SUP 0.00276 (0.27); BUS*SUP 0.00129 (0.15)

Log likelihood         -1293.1   -1170.6   -1171.1   -1172.0   -1170.7   -1169.5   -1171.1
Predicted correctly    48.9 %    49.9 %    51.1 %    50.2 %    50.2 %    50.4 %    50.3 %

Note: t ratios in parentheses. * significant at 10 %; ** significant at 5 %. Effect: F firm, S strategic, B blended, L learning, SL location scale, NS network scale, ES enterprise scale, SA strategic anticipation, SS strategic supply. A dash indicates the variable is not included in that specification.
Table 5.9 REVENUE response equation estimates, partial effects

Variables    F+S                F+S+B+L            F+S+B+SL           F+S+B+NS           F+S+B+ES           F+S+B+SA           F+S+B+SS
BLENDED      –                  -0.05796** (5.31)  -0.05484** (5.39)  -0.05651** (5.43)  -0.05737** (5.36)  -0.05902** (5.45)  -0.05852** (5.37)
ESTAB        0.00096 (0.33)     0.00171 (0.86)     0.00097 (0.59)     0.00093 (0.56)     0.00099 (0.58)     0.00100 (0.58)     0.00104 (0.60)
SMALL        0.00935 (0.64)     0.00197 (0.22)     0.00244 (0.30)     0.00242 (0.29)     0.00345 (0.31)     0.00085 (0.10)     0.00172 (0.20)
LOCATION     0.01903** (2.05)   0.01138** (2.04)   0.00903 (1.32)     0.01068** (2.01)   0.01047* (1.96)    0.01140** (2.07)   0.01112** (2.00)
STORES       0.00187 (0.17)     0.00142 (0.22)     0.00139 (0.23)     0.00157 (0.19)     0.00099 (0.15)     0.00079 (0.12)     0.00106 (0.16)
BUSINESS     -0.01200 (1.05)    -0.00536 (0.48)    -0.00346 (0.40)    -0.00018 (0.02)    -0.00532 (0.73)    -0.00866 (0.97)    -0.00820 (1.03)
RETAIL       -0.01320 (1.07)    -0.00062 (0.05)    -0.01160 (1.38)    -0.01052 (1.28)    -0.00739 (0.98)    -0.01926** (1.98)  -0.00887 (1.08)
INITIAL      0.01410 (0.96)     0.01113 (0.77)     0.00237 (0.21)     -0.00238 (0.22)    0.00016 (0.02)     -0.00849 (0.74)    0.00372 (0.37)
NEWGOOD      -0.05245** (5.13)  -0.02748** (4.48)  -0.02523** (4.41)  -0.02672** (4.57)  -0.02708** (4.50)  -0.02663** (4.38)  -0.02753** (4.50)
CUSTOMER     -0.04708** (4.71)  -0.02250** (3.73)  -0.02106** (3.76)  -0.02177** (3.79)  -0.02232** (3.78)  -0.02258** (3.78)  -0.02256** (3.76)
SUPPLIER     -0.01573 (1.39)    -0.00755 (1.12)    -0.00674 (1.07)    -0.00698 (1.08)    -0.00746 (1.12)    -0.00740 (1.10)    -0.01051 (1.18)
ANTICIPATE   -0.04019** (3.97)  -0.01944** (3.19)  -0.01829** (3.23)  -0.01897** (3.28)  -0.01895** (3.18)  -0.02915** (3.78)  -0.01959** (3.23)
EST*EST      0.00007 (0.40)     0.00000 (0.01)     0.00001 (0.13)     0.00001 (0.15)     0.00001 (0.15)     0.00001 (0.08)     0.00001 (0.11)
EXPAND       0.07858** (21.29)  0.05057** (20.51)  0.04716** (20.53)  0.04859** (20.66)  0.04980** (20.58)  0.05059** (20.65)  0.05063** (20.61)

Interaction terms (each block enters only its own specification):
F+S+B+L:   INIT*EST -0.00125 (0.45); RET*EST -0.00199 (0.66); BUS*EST -0.00061 (0.28)
F+S+B+SL:  INIT*LOC 0.00685 (0.42); RET*LOC 0.01454 (1.01); BUS*LOC -0.00721 (0.56)
F+S+B+NS:  INIT*STORE 0.02351 (1.39); RET*STORE 0.01203 (0.77); BUS*STORE -0.02169 (1.55)
F+S+B+ES:  INIT*BIG 0.02407 (1.22); RET*BIG 0.00385 (0.15); BUS*BIG -0.01310 (0.71)
F+S+B+SA:  INIT*ANTI 0.03498** (2.03); RET*ANTI 0.02808** (1.96); BUS*ANTI 0.00245 (0.18)
F+S+B+SS:  INIT*SUP 0.00934 (0.49); RET*SUP 0.00759 (0.42); BUS*SUP 0.00310 (0.20)

Log likelihood         -1458.7   -1451.8   -1450.3   -1452.3   -1453.0   -1449.3   -1452.3
Predicted correctly    38.6 %    39.3 %    40.0 %    39.9 %    39.5 %    39.7 %    39.6 %

Note: t ratios in parentheses. * significant at 10 %; ** significant at 5 %. Effect: F firm, S strategic, B blended, L learning, SL location scale, NS network scale, ES enterprise scale, SA strategic anticipation, SS strategic supply. A dash indicates the variable is not included in that specification.
The results for the corresponding ordered probit model predicting REVENUE responses are reported in Table 5.9. First, we discuss the effect of strategic entry to EXPAND markets on sales revenue; the influence of entry for other strategic purposes follows; finally, we turn to the control variables.

Does strategic entry to expand sales increase revenue? Based on the results in Table 5.9, the answer is an emphatic ``yes.'' The term correcting for endogeneity bias in strategic entry to expand sales has the expected positive sign, with the coefficient significant at the 1 % level. Compared with the magnitudes of the other variables, the size of the payoff is quite large.

In a manner similar to what we find in the strategic entry equation, strategic objectives that do not necessarily align with increasing revenue (i.e., NEWGOOD, CUSTOMER, and ANTICIPATE) are negative in their impact. Unsurprisingly, the SUPPLIER objective has no impact on REVENUE, consistent with the earlier finding.
Contrary to expectations, traditional blended seller status appears to have a negative impact on revenue, other things equal. This is consistent with the possibility that cannibalization of demand occurred upon entry. Furthermore, while several potential sources of scale economies are considered, only location-based scale effects are significant. Positive effects appear in all specifications except F + S + B + SL, which is probably due to the inclusion of the location interaction variables.
Interestingly, none of the learning or scale interaction variables is significant.
There are several possible explanations for this finding. The potential sources of
scale effects considered here are major city location (i.e., Sydney or Melbourne),
the number of outlets, and the number of employees. All of these potential drivers
of cost savings are implicitly based on traditional factors (i.e., physical sources of
scale economies). Perhaps the appropriate focus should be on ‘‘virtual,’’ rather
than physical, economies, viz., potential virtual or blended market reach. A second
possibility is that the ranges of variation of the interaction terms are simply too
compressed due to the SME status of the sampled firms for substantial scale
economies to be available.
For all COST and EFFICIENCY specifications, the estimated polychoric correlation coefficients are positive (see Table 5.10). With a nonzero $\rho$, the strategic argument EFFICIENCY is endogenous. Finally, all threshold parameters are significantly different from zero and satisfy the order conditions.
The results for the ordered probit predicting strategic efficiency-based entry are contained in Table 5.11. In a finding similar to that noted for the EXPAND equation, competing strategic objectives that do not align with the efficiency motive negatively influence EFFICIENCY.
INIT*LOC and INIT*STOR are the only significant interaction variables. Their negative signs suggest that potential scale economies are not sufficient to overcome the negative influence of the financial commitment required for entry.
The results for the ordered probit COST response models are detailed in Table 5.12. Somewhat surprisingly, the answer to the question of whether strategic entry to improve efficiency reduces costs is ``no''!
Table 5.10 COST equation threshold parameters and disturbance correlations

Variable   F+S            F+S+B+L        F+S+B+SL       F+S+B+NS       F+S+B+ES       F+S+B+SA       F+S+B+SS
Response equation
μ1         1.150 (12.66)  1.164 (13.58)  1.170 (13.69)  1.164 (13.74)  1.166 (13.68)  1.167 (13.68)  1.163 (13.60)
μ2         1.859 (14.57)  1.883 (16.27)  1.893 (16.38)  1.886 (16.51)  1.887 (16.41)  1.887 (16.39)  1.881 (16.32)
μ3         2.720 (14.69)  2.758 (16.78)  2.772 (16.93)  2.760 (17.09)  2.763 (16.98)  2.765 (16.92)  2.758 (16.91)
Strategic equation
κ1         0.397 (11.71)  0.440 (11.77)  0.439 (11.81)  0.442 (11.79)  0.439 (11.23)  0.440 (11.80)  0.439 (11.82)
κ2         0.989 (19.94)  1.056 (20.14)  1.054 (20.22)  1.061 (20.31)  1.056 (20.24)  1.055 (20.26)  1.054 (20.13)
κ3         1.688 (26.65)  1.762 (26.72)  1.758 (26.67)  1.768 (26.87)  1.761 (26.79)  1.760 (26.75)  1.758 (26.78)
Disturbance correlation
ρ          0.279 (6.80)   0.276 (7.40)   0.274 (7.32)   0.277 (7.64)   0.275 (7.42)   0.277 (7.57)   0.275 (7.42)

Note: t ratios in parentheses. * significant at 10 %; ** significant at 5 %. Effect: F firm, S strategic, B blended, L learning, SL location scale, NS network scale, ES enterprise scale, SA strategic anticipation, SS strategic supply.
Table 5.11 EFFICIENCY equation estimates, partial effects

Variables    F+S                F+S+B+L            F+S+B+SL           F+S+B+NS           F+S+B+ES           F+S+B+SA           F+S+B+SS
BLENDED      –                  -0.49219 (0.00)    -0.49750 (0.00)    -0.47708 (0.00)    -0.48757 (0.00)    -0.49559 (0.00)    -0.48765 (0.00)
ESTAB        -0.00142 (0.62)    -0.00090 (0.51)    -0.00057 (0.34)    -0.00080 (0.50)    -0.00061 (0.38)    -0.00072 (0.44)    -0.00063 (0.38)
SMALL        0.00277 (0.26)     -0.00313 (0.44)    -0.00414 (0.58)    -0.00401 (0.58)    0.00715 (0.79)     -0.00323 (0.46)    -0.00273 (0.39)
LOCATION     -0.00106 (0.15)    -0.00047 (0.10)    0.00066 (0.11)     -0.00126 (0.28)    -0.00050 (0.11)    -0.00037 (0.08)    -0.00070 (0.15)
STORES       -0.00778 (0.97)    -0.00593 (1.10)    -0.00666 (1.23)    0.00134 (0.20)     -0.00613 (1.12)    -0.00627 (1.17)    -0.00592 (1.10)
BUSINESS     0.01275 (1.50)     0.00564 (0.59)     0.00376 (0.48)     0.01134* (1.65)    0.00263 (0.42)     0.01077 (1.35)     0.00533 (0.78)
RETAIL       -0.00098 (0.11)    -0.00276 (0.27)    -0.00016 (0.02)    0.00089 (0.13)     -0.00240 (0.38)    0.00507 (0.60)     0.00309 (0.45)
INITIAL      -0.00803 (0.75)    -0.01793 (1.47)    0.00146 (0.15)     0.00540 (0.57)     -0.00946 (1.11)    -0.01484 (1.42)    -0.01019 (1.16)
NEWGOOD      -0.04210** (5.94)  -0.02021** (4.32)  -0.02098** (4.47)  -0.02035** (4.48)  -0.02008** (4.36)  -0.02017** (4.32)  -0.02001** (4.29)
CUSTOMER     -0.04389** (6.26)  -0.01952** (4.19)  -0.01948** (4.18)  -0.01946** (4.30)  -0.01949** (4.25)  -0.01950** (4.21)  -0.01923** (4.15)
SUPPLIER     -0.02549** (3.11)  -0.01353** (2.54)  -0.01305** (2.43)  -0.01277** (2.46)  -0.01354** (2.57)  -0.01378** (2.59)  -0.01325* (1.89)
ANTICIPATE   -0.04597** (6.57)  -0.02183** (4.71)  -0.02210** (4.76)  -0.02093** (4.65)  -0.02164** (4.74)  -0.01980** (3.28)  -0.02187** (4.74)
EST*EST      0.00013 (0.87)     0.00004 (0.35)     0.00005 (0.48)     0.00006 (0.52)     0.00006 (0.54)     0.00006 (0.57)     0.00006 (0.51)

Interaction terms (each block enters only its own specification):
F+S+B+L:   INIT*EST 0.00192 (0.84); RET*EST 0.00111 (0.46); BUS*EST 0.00028 (0.15)
F+S+B+SL:  INIT*LOC -0.02465* (1.68); RET*LOC 0.00294 (0.23); BUS*LOC 0.00552 (0.48)
F+S+B+NS:  INIT*STOR -0.03403** (2.35); RET*STOR -0.00020 (0.02); BUS*STOR -0.01239 (1.04)
F+S+B+ES:  INIT*BIG 0.00057 (0.04); RET*BIG 0.03231 (1.54); BUS*BIG 0.02384 (1.58)
F+S+B+SA:  INIT*ANTI 0.01139 (0.79); RET*ANTI -0.00750 (0.62); BUS*ANTI -0.00879 (0.77)
F+S+B+SS:  INIT*SUP 0.00087 (0.06); RET*SUP -0.00981 (0.68); BUS*SUP 0.00405 (0.32)

Log likelihood         -1439.8   -1366.2   -1365.4   -1362.5   -1364.7   -1365.5   -1365.6
Predicted correctly    38.5 %    38.7 %    39.2 %    39.7 %    38.4 %    38.8 %    38.3 %

Note: t ratios in parentheses. * significant at 10 %; ** significant at 5 %. Effect: F firm, S strategic, B blended, L learning, SL location scale, NS network scale, ES enterprise scale, SA strategic anticipation, SS strategic supply. A dash indicates the variable is not included in that specification.
Table 5.12 COST response equation estimates, partial effects

Variables    F+S                F+S+B+L            F+S+B+SL           F+S+B+NS           F+S+B+ES           F+S+B+SA           F+S+B+SS
BLENDED      –                  -0.03220** (4.33)  -0.03298** (4.44)  -0.03218** (4.43)  -0.03247** (4.39)  -0.03239** (4.45)  -0.03302** (4.42)
ESTAB        -0.00265* (1.69)   -0.00243* (1.65)   -0.00197 (1.51)    -0.00204* (1.65)   -0.00194 (1.49)    -0.00194 (1.52)    -0.00206 (1.57)
SMALL        -0.01001 (1.27)    -0.01062* (1.65)   -0.01048 (1.59)    -0.01034* (1.65)   -0.00195 (0.23)    -0.01066* (1.66)   -0.01008 (1.53)
LOCATION     0.00814* (1.64)    0.00788** (1.96)   0.00938* (1.72)    0.00732* (1.81)    0.00720* (1.74)    0.00727* (1.80)    0.00709* (1.71)
STORES       -0.00264 (0.45)    -0.00300 (0.62)    -0.00239 (0.49)    -0.00428 (0.68)    -0.00230 (0.48)    -0.00254 (0.53)    -0.00225 (0.46)
BUSINESS     0.00972 (1.58)     0.01706** (2.04)   0.01104 (1.57)     0.00948 (1.55)     0.00557 (1.01)     0.00576 (0.88)     0.00819 (1.38)
RETAIL       0.01065* (1.65)    -0.00082 (0.09)    0.00925 (1.38)     0.00546 (0.88)     0.00926* (1.65)    0.00021 (0.03)     0.01067* (1.74)
INITIAL      0.00038 (0.05)     -0.00414 (0.38)    0.00090 (0.10)     -0.00319 (0.39)    -0.00634 (0.85)    -0.00964 (1.14)    -0.00565 (0.74)
NEWGOOD      -0.01644** (3.06)  -0.00935** (2.07)  -0.01016** (2.25)  -0.00991** (2.25)  -0.00988** (2.20)  -0.00952** (2.16)  -0.01016** (2.24)
CUSTOMER     -0.02348** (4.44)  -0.01490** (3.32)  -0.01499** (3.35)  -0.01464** (3.34)  -0.01533** (3.44)  -0.01492** (3.41)  -0.01511** (3.36)
SUPPLIER     -0.01499** (2.46)  -0.01050** (2.07)  -0.01074** (2.11)  -0.01105** (2.22)  -0.01130** (2.23)  -0.01073** (2.16)  -0.01224* (1.83)
ANTICIPATE   -0.01095** (2.07)  -0.00549 (1.23)    -0.00494 (1.11)    -0.00489 (1.12)    -0.00475 (1.07)    -0.01202** (2.16)  -0.00493 (1.10)
EST*EST      0.00014 (1.44)     0.00013 (1.61)     0.00009 (1.20)     0.00009 (1.21)     0.00009 (1.20)     0.00009 (1.15)     0.00009 (1.24)
EFFICIENCY   0.03322** (18.22)  0.02754** (17.58)  0.02771** (17.66)  0.02715** (17.64)  0.02757** (17.64)  0.02725** (17.77)  0.02786** (17.69)

Interaction terms (each block enters only its own specification):
F+S+B+L:   INIT*EST 0.00072 (0.35); RET*EST 0.00333 (1.48); BUS*EST -0.00230 (1.40)
F+S+B+SL:  INIT*LOC -0.00542 (0.42); RET*LOC 0.00245 (0.21); BUS*LOC -0.00769 (0.75)
F+S+B+NS:  INIT*STOR 0.00527 (0.41); RET*STOR 0.01522 (1.27); BUS*STOR -0.00552 (0.52)
F+S+B+ES:  INIT*BIG 0.02024 (1.35); RET*BIG 0.00674 (0.35); BUS*BIG 0.01335 (0.94)
F+S+B+SA:  INIT*ANTI 0.01972 (1.56); RET*ANTI 0.02254** (2.11); BUS*ANTI 0.00388 (0.39)
F+S+B+SS:  INIT*SUP 0.01646 (1.14); RET*SUP -0.00275 (0.20); BUS*SUP -0.00193 (0.17)

Log likelihood         -1426.0   -1423.9   -1424.8   -1422.5   -1423.9   -1422.0   -1425.2
Predicted correctly    32.7 %    33.1 %    32.3 %    31.9 %    32.8 %    32.2 %    30.7 %

Note: t ratios in parentheses. * significant at 10 %; ** significant at 5 %. Effect: F firm, S strategic, B blended, L learning, SL location scale, NS network scale, ES enterprise scale, SA strategic anticipation, SS strategic supply. A dash indicates the variable is not included in that specification.
The terms that correct for endogeneity bias exhibit unexpected positive signs, with the coefficients significant at the 1 % level. Also, the magnitudes of the coefficients are quite large. Perhaps respondents find it more difficult to identify cost reductions than revenue increases. Another explanation is that the small firms in the sample are really ``too small'' to reap the efficiency-induced cost reductions suggested by the literature to arise from entry.
Interestingly, entry for other strategic reasons (i.e., NEWGOOD, CUSTOMER, SUPPLIER, and ANTICIPATE) is seen to reduce COST. In some cases, the explanation is not hard to find: sales of a digital equivalent good will almost always reduce costs compared with its physical equivalent. Similarly, customer requests are often posited to lower selling costs, whereas supplier-driven entry clearly could produce cost efficiency gains. The negative sign on ANTICIPATE could derive from a brand effect, although this is unclear.
Traditional blended seller status appears to have a negative influence on costs, other things equal. These cost advantages may arise through synergies that are realized on entry; for example, cost savings may occur through improved labor productivity or reduced inventory, advertising, and distribution costs. Additionally, while several sources of scale economies are considered, only location-based scale effects are significant. We observe unintuitive positive signs for the effect of urban location on costs. One rationalization for this finding is that major urban centers typically report higher costs, so any efficiency gains are overwhelmed by the general trend in input prices, for which we lack adequate controls.
5.6 Conclusions
The modeling approach employed in this study is based on the premise that firms enter online markets with a view to pursuing specific strategic goals. In particular, this study addresses the questions: How do virtual firms differ in their online market entry? In what type of environment is postentry online market performance by virtual and established firms likely to succeed? What can be said about the relationship between postentry performance and the reasons for entry? Importantly, the study's focus on smaller firms allows us to assess whether the purported entry gains identified in the previous literature apply to this important class of enterprises.
The shortest answer to these questions is that the reasons for entry matter for performance, but the effects vary by the type of performance measured. In particular, strategic entry to expand the market increases sales revenue; indeed, the payoff is relatively large. Traditional blended sellers do not appear to have any inherent advantage in revenue growth after entry; demand channel cannibalization is proposed as a partial explanation for this finding. Furthermore, only location-based scale effects are positive. First-mover status (years established online) provides no source of advantage in terms of the revenue response to entry.
Also, none of the learning or scale interaction variables appears to matter
statistically.
For the cost response models, we find that strategic entry intended to improve efficiency does not typically reduce costs! Perhaps the small firms contained in the sample are really ``too small'' to reap any of the efficiency-induced cost reductions cited in the literature. However, entry for other strategic reasons is associated with lower costs postentry. Evidently, entry does reduce costs so long as that is not its ostensible purpose. Interestingly, blended sellers enjoy cost advantages arising through synergies that are realized on entry. Finally, we find no evidence of significant scale effects, except that cost increases are associated with urban locations. Apparently the vaunted locational economies are overwhelmed by the more usual phenomenon of high urban prices.
A limitation of the analysis is that only the mapping from the stated purpose of entry to the observed success of entry is analyzed. No structural model that identifies, and allows measurement of, the actual mechanisms by which revenues and costs change is feasible given these data. In particular, a more thorough analysis might consider potential impacts on employment, prices, and the sources of cost improvement, for example, whether via advertising, inventory, or distribution cost reductions. The analysis might also have addressed firms' initial web site capability, whether online market performance cannibalized bricks-and-mortar (B&M) store sales, and the empirical magnitude and pattern of market expansion.
References

ABS (2009) Business use of information technology, 2007–2008. Catalogue 8129.0, Australian Bureau of Statistics, Canberra
ABS (2010) Small business in Australia. Catalogue 1321.0, Australian Bureau of Statistics, Canberra
Audretsch D (1995) Innovation, growth and survival. Int J Ind Organ 13:441–457
DeYoung R (2005) The performance of internet-based business models: evidence from the banking industry. J Bus 78:893–947
Dinlersoz E, Pereira P (2007) On the diffusion of electronic commerce. Int J Ind Organ 25:541–547
Disney R, Haskel J, Heden Y (2003) Entry, exit and establishment survival in UK manufacturing. J Ind Econ 51:91–112
Dunt E, Harper I (2002) E-commerce and the Australian economy. Econ Record 78:327–342
Freedman D, Sekhon J (2008) Endogeneity in probit response models. Working paper, University of California, Berkeley
Garicano L, Kaplan S (2001) The effects of business-to-business e-commerce on transaction costs. J Ind Econ 49:463–485
Geroski P (1995) What do we know about entry? Int J Ind Organ 13:421–440
Greene W (2008) Econometric analysis, 6th edn. Prentice Hall, New Jersey
Heckman J (1978) Dummy endogenous variables in a simultaneous equation system. Econometrica 46:931–959
Heckman J (1979) Sample selection bias as a specification error. Econometrica 47:153–196
Lieberman M (2002) Did first-mover advantage survive the dot.com crash? Working paper, University of California, Los Angeles
Litan R, Rivlin A (2001) Beyond the dot.coms: the economic promise of the internet. Brookings Internet Policy Institute
Lucking-Reiley D, Spulber D (2001) Business-to-business electronic commerce. J Econ Perspect 15:55–68
Maddala GS (1983) Limited-dependent and qualitative variables in econometrics. Cambridge University Press, New York
Nikolaeva R (2007) The dynamic nature of survival determinants in e-commerce. J Acad Mark Sci 35:560–571
Reid G, Smith J (2000) What makes a new business start-up successful? Small Bus Econ 14:165–182
Schmalensee R (1982) Product differentiation advantages of pioneering brands. Am Econ Rev 72:349–365
Segarra A, Callejón M (2002) New firms' survival and market turbulence: new evidence from Spain. Rev Ind Organ 20:1–14
Smith M, Brynjolfsson E (2001) Consumer decision-making at an internet shopbot: brand still matters. J Ind Econ 49:541–558
Wilde J (2000) Identification of multiple equation probit models with endogenous dummy regressors. Econ Lett 69:309–312
Chapter 6
How Important is the Media and Content
Sector to the European Economy?
Ibrahim Kholilul Rohman and Erik Bohlin
6.1 Introduction
Many studies have been conducted to investigate the relationship between technology development and economic performance and to demonstrate the increasingly important role of the information and communication technology (ICT) sectors. These studies have been carried out at the macro level, where technology is seen as an important factor supporting economic growth through capital accumulation and increased productivity (Bresnahan and Trajtenberg 1995; Steindel and Stiroh 2001), and at the meso level, where the relationship between technology and firm efficiency has been recognized (Chacko and Mitchell 1998). More importantly, a substantial number of studies discuss the role of specific ICT segments, such as telecommunications (Cronin et al. 1991; Madden and Savage 1998; Dutta 2001) and computer technology and broadband (Jorgenson and Stiroh 1995; Brynjolfsson 1996). In general, these studies indicate that technology, both in generic form and in the form of specific ICT products and services, is important for achieving better performance and economic growth.
The European countries, therefore, are also aware of the need to develop the ICT economy in the region. This is reflected in the Lisbon Strategy, set out in 2000 and followed by the Barcelona meeting in 2002. Both initiatives aimed to achieve a significant boost to research and development (R&D) activities in the European region and, ultimately, to reach the target that "by 2010 the European region would become the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion." A particular emphasis of this agenda is the goal of increasing gross expenditure on R&D to 3 % of the EU's gross domestic product (GDP), with the business sector expected to contribute two-thirds of this financing. Nonetheless, it is generally acknowledged
that the agenda has not yet met the target (Creel et al. 2005; Zgajewski and Hajjar 2005; Duncan 2009).¹
In relation to this, Van Ark et al. (2008), for instance, present an analysis of the productivity gap between the European countries and the United States (US) before and after the 2000s. The study reveals that the different degrees of technology use in the two regions have led to different levels of productivity. Labor productivity (GDP per hour of work) in the US accelerated from 1.2 % in 1973–1995 to 2.3 % in 1995–2006. In contrast, productivity growth in the 15 European countries decreased from 2.4 % in 1973–1995 to 1.5 % in 1995–2006. The study shows that the slowdown is attributable to a slower emergence of the knowledge economy, driven by lower growth contributions from investment in ICT in Europe, the relatively small share of technology-producing industries, and slower multifactor productivity growth, which is viewed as a proxy for advances in technology and innovation.
To stimulate further economic growth, therefore, the media and content sector is important to develop as part of the ICT sectors (OECD 2008, 2009). Lamborghini (2006) states that digital and online content is becoming the real center of the new scenario and the main term of reference for achieving the information society, for two main reasons: (1) the positive effects the sector produces for everyday life, and (2) the fact that the sector is the main growth driver of the electronic communications sector, with enormous potential in terms of growth and employment. However, while other ICT segments, for instance the internet, broadband, and the mobile phone, have been examined extensively, only a few studies are devoted to investigating the role of the media and content sector in the European economy with a precise definition of the sector. To fill this gap, this study aims to answer three questions:
• How much have the media and content sectors contributed to the output of the European countries' economies during the 1995–2005 period?

The measurement is conducted by calculating the multiplier coefficient, which shows how a change in final demand in the media and content sectors contributes to the expansion of economic output.

• What are the sources of growth of the media and content sectors in the European economy?

The change in output of a particular sector can be decomposed into four sources: the domestic final demand effect, the export effect, the import substitution effect, and the technology coefficient effect (a standard formulation of this decomposition is sketched after this list).

• How will the continuous reduction in prices in the media and content sectors affect the overall economy?
¹ http://www.euractiv.com/priorities/sweden-admits-lisbon-agenda-failure/article-182797
Prices in the media and content sectors have been driven by technological innovation and have declined significantly (e.g., Edquist 2005; Bagchi, Kirs and Lopez 2008; Haacker 2010; Oulton 2010). A scenario analysis is therefore performed to forecast the impact of the price reduction on GDP. The measurement is obtained by calculating the sensitivity of GDP with respect to the price of the media and content sector.
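Because the four-source decomposition behind the second question is only named above, it may help to spell out one standard formulation. The following is a common Chenery–Syrquin-style structural decomposition; the notation is supplied here for illustration ($\hat{u}$ the diagonal matrix of domestic supply ratios, $d$ domestic final demand, $e$ exports, $A$ the technical coefficient matrix, $B = (I - \hat{u}A)^{-1}$) and may differ in detail from the weighting scheme the chapter actually employs:

$$\Delta x = \underbrace{B_1 \hat{u}_1 \Delta d}_{\text{domestic final demand}}
+ \underbrace{B_1 \Delta e}_{\text{exports}}
+ \underbrace{B_1 \Delta\hat{u}\,(d_0 + A_0 x_0)}_{\text{import substitution}}
+ \underbrace{B_1 \hat{u}_1 \Delta A\, x_0}_{\text{technology coefficients}}$$

where subscripts 0 and 1 denote the base and end years. The identity follows from writing $x_t = B_t(\hat{u}_t d_t + e_t)$ in each year and differencing.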
6.2 ICT Economy in the European Region
The OECD (2009, p. 14) states that, by nature, ICTs are general purpose technologies (GPTs) that can be used for a broad range of everyday activities. Consequently, new modes of individual behavior have emerged, including new or modified means of personal communication and interaction. In line with this argument, Kramer et al. (2007) also stress that continual reporting on (information) technology has helped raise awareness of the importance of ICT diffusion to overall competitiveness; they, too, take the ICT sectors as representative of the technology sector more broadly.
With regard to the performance of the ICT sectors in the European economy, IPTS (2011) delivered an extensive report, first identifying and classifying the ICT sectors into two groups: ICT manufacturing and ICT services.² Based on this classification, it is reported that in 2008 the value added by the ICT sectors was as much as 4.7 % of GDP (equivalent to 574 billion Euros). Besides this, the sectors generated 3.6 % of total employment (8.2 million jobs), with job creation strongly oriented toward ICT services, which account for 6.2 million jobs. In terms of R&D, the ICT sectors also lead, contributing 25 % of total business expenditure on R&D.
Nevertheless, when this performance is compared with that of other countries, the European region is now lagging behind the US and other emerging countries. At approximately 4.7 % of GDP in 2008, the relative economic weight of the ICT sectors is smaller than in China (6.6 %), Japan (6.9 %), Korea (7.2 %), and Taiwan (10.5 %). This partly reflects the fact that ICT manufacturing in Asia is, in general, greater than in the EU. R&D intensity (measured by the ratio of R&D expenditure to the value added of the sector) in Europe is also lower than in the United States and the emerging countries: the EU recorded 6.2 %, lower than the United States (11.2 %), Japan (12.8 %), Korea (16.5 %), and Taiwan (12.3 %).
² ICT manufacturing consists of IT equipment, IT components, telecom and multimedia equipment (e.g., network equipment and mobile phones; TVs, DVD players, and video game consoles), and measurement instruments, whereas ICT services consist of telecom services (e.g., fixed-line and mobile telecommunications) and computer services and software (e.g., consultancy, software, the internet).
Fig. 6.1 ICT subsector R&D intensities, 2007, as a percentage of value added. Source: IPTS, EC (2010)
At the ICT subsector level, Fig. 6.1 compares R&D intensities in the European region and other leading ICT countries. As the figure shows, R&D intensities vary between the ICT subsectors. Components, telecommunication, and multimedia equipment have the highest percentages of value added in the EU, the United States, and Korea. Telecommunication services, on the other hand, have the lowest ratio of R&D to value added in each country and region. It can be concluded that R&D intensities in the EU are generally lower than in the United States, with components, telecommunication, and multimedia equipment recording the highest R&D intensities among the ICT subsectors in the European countries.
From the end-user and consumption side, the gradual decline in communication consumption also indicates why the recession afflicted most of the ICT industry in the European countries. Annual data on the ratio of expenditure on communication to GDP show that the ratio has declined gradually. Even though the ratio dropped only slightly, from 3 % (2006) to 2.9 % (2008), across the EU (15 countries), it dropped substantially in major and leading ICT countries such as Germany, the Netherlands, Italy, Norway, and Finland. For instance, Germany dropped from 2.9 % (2006) to 2.6 % (2008), and the Netherlands continued to decline from 2.7 % (2006) to 2.4 % (2008). It is not surprising that investment in this sector was also affected by the recession. The data, which cover 33 European countries, show average annual growth in telecommunication investment of -6 % during 2000–2006, compared with 16.2 % during 1995–2000. Figure 6.2 shows the decline in telecommunication investment, which began in earnest in 2001.
Fig. 6.2 Telecommunication investments in the European countries (MEUR). Source: Eurostat (2010)
From an industry perspective, surveys of broadcasting media conducted in some European countries found that the largest channels in each country are also suffering a decline in their ratings. Although aggregate revenue still increased across Europe during 2006–2008, the public broadcasting sector saw a drop of more than four percentage points (4 %) in its total market share, while the commercial sector (both radio and TV financed by advertising) grew modestly (Open Society Institute 2009). It is also predicted that the advertising revenues of traditional channels are unlikely to grow significantly over the next decade; using two different econometric models, the gradual decline is estimated at around 0.2–0.5 % (OFCOM 2010).
6.3 Methodology
Throughout this analysis, the Input–Output (IO) methodology serves as the main analytical tool. The advantage of the IO method is its ability to capture direct and indirect impacts and to assess impacts at both the macro-level and the meso-level (industry level). The IO table makes explicit the close relationship between firm and industry data, as the intermediate transactions in quadrant I consist of data gathered from industry surveys (Yan 1968: 59–60; United Nations 1999: 3; Miller and Blair 2009: 73). The relationship between the IO table and the macro-variables can also be made explicit: the primary inputs in quadrant III (the sum of wages, salaries, and operating surpluses) reflect the measurement of GDP from the income approach, whereas the sum of consumption, investment, government spending, and net exports in quadrant II reflects the GDP calculation from the final demand approach. Figure 6.3 presents the IO table and how to operationalize the method.
Fig. 6.3 Input–output (IO) table
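As an illustration of Fig. 6.3, the following minimal sketch (Python with numpy; all figures are invented for the example) checks that a balanced three-sector IO table yields the same GDP from the income side (quadrant III) and from the final demand side (quadrant II):

```python
import numpy as np

# Hypothetical quadrant I: intermediate flows x_ij from sector i to sector j
X = np.array([[20., 30., 10.],
              [40., 15., 25.],
              [10., 50.,  5.]])
c = np.array([140., 170., 85.])    # quadrant II: final demand by sector
v = np.array([130., 155., 110.])   # quadrant III: primary inputs by sector

x = X.sum(axis=1) + c                      # total output: row sums + final demand
assert np.allclose(x, X.sum(axis=0) + v)   # column balance: total inputs = outputs

gdp_income = v.sum()   # GDP from the income approach (quadrant III)
gdp_demand = c.sum()   # GDP from the final demand approach (quadrant II)
assert np.isclose(gdp_income, gdp_demand)  # both equal 395 in this example
```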
From Fig. 6.3, the transaction flows in the IO table are described by the system of equations (6.1) below. Suppose there are four sectors in the economy:
$$\begin{aligned}
x_{11} + x_{12} + x_{13} + x_{14} + c_1 &= x_1 \\
x_{21} + x_{22} + x_{23} + x_{24} + c_2 &= x_2 \\
x_{31} + x_{32} + x_{33} + x_{34} + c_3 &= x_3 \\
x_{41} + x_{42} + x_{43} + x_{44} + c_4 &= x_4
\end{aligned} \tag{6.1}$$
From Eq. (6.1), $x_{ij}$ denotes the output from sector i used by sector j as an intermediate input (in other words, it measures the input from sector i used for further production in sector j). In the IO table, these values are located in quadrant I. Moreover, $c_i$ (i = 1, ..., 4) refers to the total final demand for sector i, whereas $x_i$ refers to the total output of sector i. Introducing matrix notation, Eq. (6.1) can be rewritten with the following column matrices:
$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_4 \end{pmatrix}, \qquad c = \begin{pmatrix} c_1 \\ \vdots \\ c_4 \end{pmatrix} \tag{6.2}$$
From Eq. (6.2), x denotes the column matrix of outputs and c the column matrix of final demands. The following matrices, I and A, are the identity matrix and the technology matrix, respectively; they are used to measure the multiplier.
$$I = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad A = \begin{pmatrix} a_{11} & \cdots & a_{14} \\ \vdots & \ddots & \vdots \\ a_{41} & \cdots & a_{44} \end{pmatrix} \tag{6.3}$$
Matrix I is the identity matrix, a diagonal matrix whose off-diagonal elements are zero, whereas A is the technology matrix, which consists of the ratios of intermediate demand to total output,
$$a_{ij} = \frac{x_{ij}}{x_j}$$
Hence, $a_{14}$, for instance, is the output from sector 1 used to produce the output of sector 4, divided by the total output of sector 4.
The equilibrium of demand and supply in Eq. (6.1) can then be rewritten as follows:
$$\begin{aligned}
Ax + c &= x \\
(I - A)x &= c \\
x &= (I - A)^{-1} c
\end{aligned} \tag{6.4}$$
The first row of Eq. (6.4) is the general form of Eq. (6.1). The multiplier is defined through the Leontief inverse, $(I - A)^{-1}$, which can also be written as the matrix L (with elements $l_{ij}$). The multiplier thus measures the change in the equilibrium output of the aggregate economy caused by a unit change in the final demand of an industry sector. Throughout this study, the IO table has been transformed into constant terms to make it appropriate for growth measurement: since the IO table is compiled at current prices, the GDP deflator is used to convert all values into constant terms.3
6.4 Multiplier Analysis
Referring back to Fig. 6.3, the multiplier measures how total output changes as a result of a change in final demand (quadrant II). The measurement depends largely on the Leontief matrix of Eq. (6.4), reflected in quadrant I. This study uses the simple multiplier and the domestic transaction model: in calculating the multiplier, only goods and services produced domestically affect its value. Furthermore, following Grady and Muller (1988), the open IO model is used instead of the closed one, as their study shows that a closed IO table usually yields exaggerated estimates of the impact. The computation is sketched below.
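As an illustration, the following sketch (Python with numpy; the three-sector flows are the same invented numbers as in the earlier sketch) computes the technology matrix, the Leontief inverse, and the simple output multipliers:

```python
import numpy as np

# Hypothetical quadrant-I flows and total outputs (invented numbers)
X = np.array([[20., 30., 10.],
              [40., 15., 25.],
              [10., 50.,  5.]])
x = np.array([200., 250., 150.])

A = X / x                            # technology matrix: a_ij = x_ij / x_j
L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^(-1)

# Simple output multiplier of sector j: the economy-wide output generated by
# one additional unit of final demand for sector j (column sums of L)
multipliers = L.sum(axis=0)
print(np.round(multipliers, 2))
```

The column sums of the Leontief inverse are the simple output multipliers: a value of, say, 1.70 for a sector means that one extra euro of final demand for that sector raises economy-wide output by 1.70 euros, which is how Table 6.5 below should be read.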
6.5 Decomposition Analysis
Skolka (1989) defines decomposition analysis as a method for distinguishing major shifts in the economy by means of comparative static changes in key sets of parameters. Blomqvist (1990) cites Fisher (1939) and Clark (1940), among others, as the first to introduce the concepts of decomposition analysis through the classification of primary, secondary, and tertiary sectors, which is still widely used today. Nevertheless, it was Chenery (1960) who employed the method to identify the sources of structural change and industrial growth. One conclusion of that study was that differences in factor endowments, especially variation in imports and domestic production, create the greatest variation between countries in terms of industry output (e.g., machinery, transport equipment, and intermediate goods).
3 A more thorough estimation for deflating the IO table is explained, for instance, by Celasun (1984) for structural change in the Turkish economy. The same method, using sectorial producer price indices and import price indices, can be seen in Zakariah and Ahmad (1999) for the Malaysian economy. This study uses only the GDP deflator to obtain constant-value IO tables; a similar method can be found in Akita (1991).
This study uses the decomposition analysis adopted from Roy et al. (2002), who investigated the contribution of the information sectors to the Indian economy. The derivation of the model is explained below.
$$x_i = u_i (d_i + w_i) + e_i \tag{6.5}$$
In Eq. (6.5), $x_i$ denotes the total output of sector i and $u_i$ is the domestic supply ratio, defined by $u_i = (x_i - e_i)/(d_i + w_i)$. Here $d_i$ and $w_i$ denote the domestic sources affecting the change in output, where $d_i$ is domestic final demand and $w_i$ is total intermediate demand. In addition, $e_i$ is total exports and thus acts as the international source affecting the change in output. Hence, from Eq. (6.5), the change in output is driven by domestic factors (d and w) and an international source (e).
$$x = \hat{u} d + \hat{u} A x + e \tag{6.6}$$
Equation (6.6) replaces total intermediate demand (w) with the product of the technical coefficient matrix (A) and total output (x), where $\hat{u}$ denotes the diagonal matrix of the domestic supply ratios. Then, introducing the identity matrix I, Eq. (6.6) can be transformed into Eq. (6.7):
$$x = (I - \hat{u} A)^{-1} (\hat{u} d + e) \tag{6.7}$$
Substituting $R = (I - \hat{u} A)^{-1}$, the above equation can be represented as Eq. (6.8) below.
$$x = R(\hat{u} d + e) \tag{6.8}$$
Based on Roy et al. (2002), the decomposition of the change in output of the ICT sectors between two periods is summarized in Table 6.1 below, using Eq. (6.8). A diagonal matrix $\hat{z}$ composed of ones and zeros is introduced: a one appears in the cells corresponding to the sectors of interest (here, telecommunications), and all the other elements of the matrix are zeros.
Table 6.1 Decomposition of the change in economic output
Factor | Equation
Change in ICT output | $\hat{z}(x_1 - x_0) = \hat{z}[R_1(\hat{u}_1 d_1 + e_1) - R_0(\hat{u}_0 d_0 + e_0)]$
Domestic final demand effect | $\hat{z} R_1 \hat{u}_1 (d_1 - d_0)$
Export effect | $\hat{z} R_1 (e_1 - e_0)$
Import substitution effect | $\hat{z} R_1 (\hat{u}_1 - \hat{u}_0)(d_0 + w_0)$
Technology coefficient effect | $\hat{z} R_1 \hat{u}_1 (A_1 - A_0) x_0$
Fig. 6.4 Decomposition analysis
As shown in Table 6.1, any change in economic output between two periods can be decomposed, part by part, into the elements built into the output calculation. The table thus allows us to trace the change in output to the domestic final demand, export, import substitution, and technology coefficient effects. Roy et al. (2002) define the composition factors as follows:
• The domestic final demand effect occurs when the increased economic output is used to fulfill the needs of the domestic market.
• The import substitution effect is calculated from changes in the ratio of imports to total demand. This implicitly assumes that imports are perfect substitutes for domestic goods, since the source of supply constitutes an integral part of the economic structure.
• The export effect occurs when the growth in output is driven by export-oriented (foreign) demand.
• The technological effect represents the widening and deepening of inter-industry relationships over time, brought about by changes in production technology as well as substitution between various inputs.
To explain this analysis more clearly, Fig. 6.4 illustrates schematically how the decomposition analysis is conducted; a minimal computational sketch of Table 6.1 follows below.
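The sketch below (Python with numpy; the function signature and variable names are ours, and all inputs are assumed to be supplied for two benchmark years) implements the four effects exactly as listed in Table 6.1:

```python
import numpy as np

def decompose_output_change(A0, A1, d0, d1, e0, e1, u0, u1, w0, x0, z):
    """Decompose the output change between periods 0 and 1 as in Table 6.1.

    A0, A1: technology matrices; d0, d1: domestic final demand vectors;
    e0, e1: export vectors; u0, u1: domestic supply ratio vectors;
    w0: base-year intermediate demand; x0: base-year total output;
    z: 0/1 vector selecting the sectors of interest (e.g., media and content).
    """
    n = len(d0)
    U1, Z = np.diag(u1), np.diag(z)
    R1 = np.linalg.inv(np.eye(n) - U1 @ A1)           # R = (I - uA)^(-1)

    domestic = Z @ R1 @ U1 @ (d1 - d0)                # domestic final demand effect
    exports  = Z @ R1 @ (e1 - e0)                     # export effect
    imports  = Z @ R1 @ np.diag(u1 - u0) @ (d0 + w0)  # import substitution effect
    tech     = Z @ R1 @ U1 @ (A1 - A0) @ x0           # technology coefficient effect
    return domestic, exports, imports, tech
```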
The advantage of employing decomposition analysis, as Bekhet (2009) explains, is that the method overcomes many of the static features of IO models and is hence able to examine changes over time in the technical coefficients and the sectorial mix. The data in this study are taken from the IO tables published by Eurostat, covering the countries listed in Table 6.2.
Table 6.2 Selected European countries and IO table availability
No | Country | 1995 | 2000 | 2005
1 | Austria | ✓ | ✓ | ✓
2 | Belgium | ✓ | ✓ | NA
3 | Denmark | ✓ | ✓ | ✓
4 | Finland | ✓ | ✓ | ✓
5 | France | ✓ | ✓ | ✓
6 | Germany | ✓ | ✓ | ✓
7 | Italy | NA | ✓ | ✓
8 | The Netherlands | ✓ | ✓ | ✓
9 | Norway (a) | NA | ✓ | ✓
10 | Spain | ✓ | ✓ | ✓
11 | Sweden | ✓ | ✓ | ✓
12 | United Kingdom | ✓ | NA | NA
Source: Eurostat
In terms of the countries investigated, this study continues the coverage of Gould and Ruffin (1993), van Ark, O'Mahony and Timmer (2008), and Eichengreen (2008) of twelve selected European countries believed to have experienced an advanced level of technological development. However, given the limited data for some countries in particular years, and because decomposition analysis requires at least two time periods, the complete analysis could only be carried out for the eight countries with complete sets of tables for 1995, 2000, and 2005; the United Kingdom, for which data are available for only one period, is excluded from the investigation altogether.
6.6 The Impact of Price Changes
Heng and Thangavelu (2006) investigate the impact of the information economy on the Singaporean economy. In addition to investigating the impact in terms of the multiplier, they propose the following model for determining the magnitude of price changes. Based on the rationale behind the IO calculation, it can be shown that
$$\text{GDP} = \text{Gross Output} - \text{Intermediate Inputs} - \text{Primary Inputs} \tag{6.9}$$
$$\text{GDP} = P_Y Y = P_Q Q - P_N N - P_Z Z - P_F F$$
where Y denotes the quantity of real GDP and $P_Y$ its price; Q denotes the quantity of output and $P_Q$ its price; N denotes the quantity of media and content products and $P_N$ their price; Z denotes the quantity of non-media and content products and $P_Z$ their price; and F denotes the primary inputs and $P_F$ their price.
The study assumes that all transactions are carried out in a competitive environment in which, by definition, firms maximize their profit subject to a given technological constraint, factor endowments, and relative input prices. Following Kohli (1978), the GDP calculation can be derived as the solution to a maximization problem:
$$\text{GDP}(P_Q, P_N, P_Z, F) = \max_{Q,\,N,\,Z} \{ P_Q Q - P_N N - P_Z Z - P_F F : f(N, Z, F) \ge Q \} \tag{6.10}$$
From Eq. (6.10), it can be inferred that the GDP function is a function of the prices of the output and the inputs and of the factor endowments. Applying duality theory, the profit-maximizing demand for media and content products can be obtained from Shephard's Lemma in Eq. (6.11):
$$\frac{\partial\,\text{GDP}}{\partial P_N} = N(P_Q, P_Z, P_N, F) \tag{6.11}$$
Multiplying both sides by $P_N/\text{GDP}$, the following formula is obtained:
$$\frac{\partial\,\text{GDP}}{\partial P_N}\cdot\frac{P_N}{\text{GDP}} = \frac{N P_N}{\text{GDP}} \tag{6.12}$$
The left-hand side of Eq. (6.12) is the price elasticity of GDP, which can thus be calculated as the ratio of the value of the media and content input to GDP: $N P_N/\text{GDP}$ identifies the percentage change in GDP resulting from a change in the price of media and content. The investigation employs the IO tables for the years 1995, 2000, and 2005.
Fig. 6.7 Impact of price reduction on the GDP (percentage)
From Fig. 6.7, it can be concluded that, in general, a reduction in the price of media and content contributes to higher GDP growth. The European countries recorded a small elasticity coefficient for each 1 % decrease in the price of media and content products over time: on average, a 1 % price reduction contributes an increase in GDP growth of approximately 0.17 % across the three observation years. The results vary between countries, with France, Germany, Norway, and the Netherlands recording a higher elasticity than the other countries.
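As a minimal numerical sketch of Eq. (6.12) (Python; both figures are invented for illustration, chosen only so that the ratio reproduces the chapter's 0.17 % average):

```python
# Hypothetical values read off an IO table at constant prices (invented numbers)
media_input_value = 95.0   # N * P_N: economy-wide spending on media/content inputs
gdp = 560.0                # total value added (GDP)

# Eq. (6.12): elasticity of GDP with respect to the media/content price
elasticity = media_input_value / gdp
print(f"A 1 % price cut raises GDP growth by about {elasticity:.2f} %")  # ~0.17
```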
6.7 Data
The European System of Accounts (ESA 95) established the compulsory transmission of IO-framework tables by the European Member States (European Commission, Eurostat 2010). The obligation applied as of the end of 2002 and requires every country to construct annual supply and use tables as well as, every five years, symmetric IO tables, symmetric IO tables of domestic production, and symmetric IO tables of imports. The framework is known as ESA 95 because the data collection generally covers the period from 1995 onwards. The ESA 95 comprises 59 sectors that are uniform across the European countries.4
This study is motivated by the previous study by van Ark et al. (2008), which compares the productivity gap between the European countries and the United States. In addition, the countries covered are also those that have experienced rapid technology transfer, according to Eichengreen (2008: 26, Table 2.6). It is therefore relevant to measure the impact of the ICT sectors in these countries, in the sense that they are identified as having long histories of R and D activities and technology transfer.
6.8 Sector Definition
In its manual for measuring the information economy, the OECD (2008) defines media and content, as part of the ICT sectors, as follows:
Content corresponds to an organized message intended for human beings published in mass communication media and related media activities. The value of such a product to the consumer does not lie in its tangible qualities but in its information, educational, cultural or entertainment content (OECD 2008).
In accordance with the ISIC categories, the definition of media and content products above encompasses the sectors listed in Table 6.3. Moreover, to enable the impact assessment employing the IO method, the sectors in Table 6.3 are matched with the corresponding IO categories; the media and content sectors are then found in the IO sectors shown in Table 6.4.
Table 6.4 shows that there are four media and content sectors among the 59 sectors of the European IO table. Thus, the economic impact and contribution of the media and content sectors in this study correspond to these four sectors. As the aggregation level of the ISIC categories is more detailed than that of the IO categories, some media and content products are consequently aggregated into a particular IO sector.
4 The 59-sector categorization of the European Input–Output (IO) table based on ESA 95 is available from the principal author.
Table 6.3 Classification of content and media
No | ISIC | Definition
1 | 5811 | Printed and other text-based content on physical media and related services
2 | 6010, 6020 | Motion picture, video, television and radio content, and related services
3 | 5911, 5912 | Music content and related services
4 | 5820 | Games software
5 | 5812 | On-line content and related services
6 | 7310, 6391 | Other content and related services
Source: OECD (2009)
Table 6.4 Classification of the media and content sectors based on the 59-sector European IO table
Sector | Sector name
16 | Printed matter and recorded media
43 | Post and telecommunications services
49 | Computer and related services
51 | Other business services
6.9 Results
The following analysis investigates the output multiplier for the media and content sectors and compares it with the average multiplier of all economic sectors in the European economy. Table 6.5 presents the comparison between the two groups in the investigated European countries.
As explained in the methodology section, the output multiplier measures the change in output caused by one unit of change in final demand. For example, based on Table 6.5, each one euro (€1) spent on the media and content sectors' final demand in Sweden in 1995 increased economic output by as much as 1.70 euros. Table 6.5 also indicates that, in general, the output multiplier of the media and content sectors is smaller than that of the average sectors. Among the European countries, the Scandinavian countries (Sweden, Finland, Denmark, and Norway) consistently recorded media and content sectors contributing more than the average economic sectors; apart from the Scandinavian region, the Netherlands also shows the same characteristic of stronger media and content sectors. In the rest of the European region, the sectors contribute a lower multiplier to the economy.
Figures 6.5 and 6.6 present the decomposition analysis for the media and content sectors.5
5 The original data are all in EUR, except for Sweden and Denmark. The transformation to EUR uses the average exchange rates for 1995–2005: 1 EUR = 7.43 DKK = 8.96 SEK. Data retrieved 28 April 2010, from http://stats.oecd.org/Index.aspx?DataSetCode=CSP2009.
Table 6.5 Multiplier effect of the media and content sectors
No | Country | 1995 media content | 1995 average sectors | 2000 media content | 2000 average sectors | 2005 media content | 2005 average sectors
1 | Sweden | 1.70 | 1.56 | 1.54 | 1.54 | 1.56 | 1.53
2 | Denmark | 1.53 | 1.51 | 1.39 | 1.40 | 1.42 | 1.43
3 | Austria | 1.34 | 1.44 | 1.53 | 1.56 | 1.55 | 1.56
4 | France | 1.60 | 1.71 | 1.67 | 1.77 | 1.68 | 1.72
5 | Belgium | 1.48 | 1.55 | 1.54 | 1.59 | – | –
6 | Spain | 1.52 | 1.62 | 1.62 | 1.72 | 1.67 | 1.74
7 | Norway | – | – | 1.80 | 1.65 | 1.72 | 1.62
8 | Germany | 1.41 | 1.64 | 1.55 | 1.62 | 1.52 | 1.56
9 | Italy | – | – | 1.69 | 1.74 | 1.71 | 1.76
10 | Finland | 1.66 | 1.60 | 1.65 | 1.62 | 1.61 | 1.58
11 | The Netherlands | 1.44 | 1.51 | 1.50 | 1.50 | 1.58 | 1.56
Fig. 6.5 Decomposition of output change in MEUR (1995–2000)
The decomposition in Fig. 6.5 shows that the output of the media and content sectors in 1995–2000 was heavily influenced by domestic demand and the export effect. These two effects correlate with the size of the economy, being associated with population size and GDP; hence, countries like Germany, France, and Spain show a higher domestic final demand effect. The technological change effect is generally positive across the countries, indicating the strong impact of the media and content sectors on the other sectors.
Figure 6.6 presents the characteristics of the media and content sectors in the second half of the observation period.
Fig. 6.6 Decomposition of output change in MEUR (2000–2005)
During the second half of the observation period (2000–2005), the change in the media and content sectors was mainly driven by the technological change effect, especially in Germany, France, Italy, the Netherlands, and Spain. The most interesting result for this period is the evidence that the export effect decreased, with the media and content sectors in Germany recording substantial negative impacts. This means that, in general, the comparative advantage of media and content products exported to the rest of the world has decreased. Furthermore, most of the countries investigated show a positive import substitution effect, which means that these countries now play a more passive role, letting other countries and regions penetrate their media and content markets. The technological effect remains positive in some countries, but with a lower value.
This last part estimates the impact of a price reduction in the sector. The prices of ICT products tend to fall over time, as many studies have concluded (Bagchi et al. 2008; Haacker 2010; Oulton 2010), so it is important to investigate the impact on the economy of a price reduction in the ICT sectors, in particular in media and content. Figure 6.7 reports this impact through the elasticity of GDP with respect to price, as stated in Eq. (6.12).
The next question worth addressing is which sectors benefit from the reduction in media and content prices. The impact of the price reduction varies between sectors, depending on the intensity of use of media and content products as intermediate inputs: differences in production structure and input characteristics lead to different impacts. Tables 6.6, 6.7 and 6.8 show the impact on sectorial GDP of a 1 % decrease in the media and content sectors' price.6
6 The dashed sectors correspond to media and content.
Table 6.6 Impact on the rest of the economy as a result of a 1 % price reduction in the media and content sectors, 1995 (percentage)
Sector | Elasticity
Manufacture of office machinery and computers | 0.62
Computer and related activities | 0.45
Publishing, printing, and reproduction of recorded media | 0.45
Financial intermediation, except insurance and pension funding | 0.35
Other business activities | 0.34
Post and telecommunications | 0.33
Manufacture of coke, refined petroleum products, and nuclear fuels | 0.32
Air transport | 0.27
Manufacture of tobacco products | 0.25
Activities auxiliary to financial intermediation | 0.25
Renting of machinery and equipment without operator and of personal and household goods | 0.22

Table 6.7 Impact on the rest of the economy as a result of a 1 % price reduction in the media and content sectors, 2000 (percentage)
Sector | Elasticity
Publishing, printing, and reproduction of recorded media | 0.53
Insurance and pension funding, except compulsory social security | 0.51
Mining of coal and lignite; extraction of peat | 0.47
Other business activities | 0.31
Manufacture of office machinery and computers | 0.29
Computer and related activities | 0.28
Manufacture of tobacco products | 0.28
Post and telecommunications | 0.27
Manufacture of radio, television and communication equipment and apparatus | 0.26
Manufacture of chemicals and chemical products | 0.25
It can be concluded from these tables that the price impact of media and content mainly stimulates the ICT-manufacturing sectors (radio, television and communication equipment and apparatus) and the financial sector. It can also be inferred that the media and content sectors themselves enjoy the greatest benefit from the price reduction: on average, a 1 % reduction in price contributes an increase in the growth of media and content of approximately 0.4 % in 1995 and 2000 and 0.5 % in 2005. The results also point to weakening linkages from the media and content sectors: in the later period, these sectors absorbed the price reduction impact themselves, and less of the impact was channeled to the other sectors. This is another reason why, during the later investigation period (2000–2005), the region also shows a lower technological change effect from the media and content sectors.
Table 6.8 Impact on the rest of the economy as a result of a 1 % price reduction in the media and content sectors, 2005 (percentage)
Sector | Elasticity
Publishing, printing, and reproduction of recorded media | 0.55
Post and telecommunications | 0.47
Manufacture of radio, television, and communication equipment and apparatus | 0.42
Other business activities | 0.38
Computer and related activities | 0.35
Activities auxiliary to financial intermediation | 0.27
Extraction of crude petroleum and natural gas; service activities incidental to oil and gas extraction excluding surveying | 0.27
Activities of membership organization | 0.26
6.10 Conclusion
This study provides an economic assessment of the media and content sectors based on the OECD (2008) definition. Matching this definition to the 59 sectors of the ESA 95 European IO table, the media and content sectors correspond to printed matter, post and telecommunication services, computer services, and business services. The study covers the multiplier analysis, the decomposition of the sources of growth, and a scenario analysis of the effect of a price reduction in the media and content sectors on GDP.
The study found that, in general, the media and content sectors contribute lower multiplier coefficients, although the Scandinavian countries (Sweden, Finland, Denmark, and Norway), together with the Netherlands, recorded a higher multiplier coefficient for these sectors than for the average economic sectors. The average multiplier of the media and content sectors ranged from 1.3 to 1.8 during 1995–2005. The decomposition analysis showed that the change in output of the media and content sectors was mainly influenced by the domestic final demand and technological change effects during 1995–2000. While the domestic final demand effect was maintained in most countries during 2000–2005, the media and content sectors became driven more by the import substitution effect, which reflects a weakening competitive advantage in the world market. This is also consistent with the fact that other leading ICT countries, for example Japan, Korea, and Taiwan, have spent substantially more on R and D for ICT and for media and content products in recent years.
Additionally, the price assessment identifies that each 1 % decrease in the media and content price contributes 0.17 % to economic growth. The impact varies between countries, with France, Sweden, and Norway showing a higher elasticity coefficient. In terms of sectorial impacts, the study found that the price reduction mainly affects the financial sector and the manufacturing of ICT products, besides the media and content sectors themselves. It is therefore suggested that this linkage should be strengthened, especially toward the service sectors, given that these sectors generally have a higher multiplier effect in the European region (Leeuwen and Nijkamp 2009).
References
Akita T (1991) Industrial structure and the source of industrial growth in Indonesia: an I-O
analysis between 1971 and 1985. Asian Econ J 5(2):139–158
Bagchi K, Kirs P, Lopez FJ (2008) The impact of price decreases on telephone and cell phone
diffusion. Inf Manage 45(3):183–193
Bekhet HA (2009) Decomposition of Malaysian production structure input–output approach. Int
Bus Res 2(4):129–139
Blomqvist HC (1990) Growth and structural change of the Finnish economy, 1860–1980, a
development theoretical approach. J Econ Dev 15(2):7–24
Bresnahan TF, Trajtenberg M (1995) General purpose technologies: engines of growth? J
Econometrics 65:83–108
Brynjolfsson E (1996) The contribution of information technology to consumer welfare. Inf Syst
Res 7(3):281–300
Celasun M (1984) Sources of industrial growth and structural change: the case of Turkey
[Working Paper] Washington DC : The World Bank Staff Working Paper No 614
Chacko M, Mitchell W (1998) Growth incentives to invest in a network externality environment.
Ind Corp Chang 7(4):731–744
Chenery HB (1960) Patterns of industrial growth. Am Econ Rev 50(4):624–654
Clark C (1940) The condition of economics progress. Macmillan, London
Creel J, Laurent E, Le Cacheux J (2005) Delegation in inconsistency: the ‘Lisbon strategy’ record
as an institutional failure. OFCE Working Paper 2005–2007, June. Retrieved from http://
www.ofce.sciences-po.fr/
Cronin FJ, Parker EB, Colleran EK, Gold M (1991) Telecommunications infrastructure and
economic growth: an analysis of causality. Telecommun Policy 15(6):529–535
Duncan I (2009) The Lisbon Strategy old port in new bottles’?. [Working Paper] Scotland Europe
Paper 32. Retrieved from http://www.sdi.co.uk/*/media/SDI/Scotland%20Europa/Resources
%20Public/Innovation%20Research/Paper32%20The%20Lisbon%20Strategy.pdf Accessed 5
August 2010
Dutta A (2001) Telecommunications and economic activity: an analysis of granger causality.
J Manage Inf Syst 17(4):71
Edquist H (2005) Do hedonic price indexes change history? The case of electrification. SSE/EFI
working Paper Series in Economics and Finance, 586. Retrieved 10 October 2010, from http://
swopec.hhs.se/hastef/papers/hastef0586.pdf
Eichengreen B (2008) The European economy since 1945: coordinated capitalism and beyond.
Princeton University Press, New Jersey
European Commission (2010) A digital agenda for Europe [Report]. Retrieved from http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2010:0245:FIN:EN:PDF
Eurostat (2010) IO databases. Retrieved from http://epp.eurostat.ec.europa.eu/portal/page/portal/esa95_supply_use_input_tables/data/database. Accessed 15 September 2010
Fisher A (1939) Production, primary, secondary, tertiary. Econ Rec 15(1):24–38
Gould DM, Ruffin SJ (1993) What determines economic growth? Federal Reserve Bank of Dallas, Dallas
Grady P, Muller RA (1988) On the use and misuse of input–output based impact analysis in evaluation. Can J Program Eval 3(2):49–61
Haacker M (2010) ICT equipment investment and growth in low- and lower-middle-Income
countries. IMF working papers 10/66, International Monetary Fund
Heng TM, Thangavelu SM (2006) Singapore information sector: a study using input–output table,
vol 15. Singapore Center for Applied and Policy Economics
Institute for Prospective Technological Studies (IPTS) (2011) Prospective insights on ICT R&D (Predict): main results of the first phase (2008–2011). IRIS Seminar. European Commission, Brussels
Jorgenson DW, Stiroh KJ (1995) Computers and U.S. economic growth. Econ of Innovation and
New Tech 3(3–4):295–316
Kohli U (1978) A gross national product function and the derived demand for imports and supply
of exports. Can J Econ 11:167–182
Kramer WJ, Jenkins B, Katz RS (2007) The role of information and communication technology
sector in expanding economic opportunity. The Fellows of Harvard College, Cambridge
Lamborghini B (2006) Broadband and digital content: creativity, growth and employment.
Conference Paper, Rome: OECD, 2006
Leeuwen ESV, Nijkamp P (2009) Social accounting matrices: the development and application of
SAMs at the local level. In Bavaud F, Mager C (eds) Handbook of theoretical and quantitative
geography. Université, Faculté des Géosciences et de l’Environnement, Lausanne,
pp 229–259
Madden G, Savage SJ (1998) Telecommunication and economic growth. Inf Econ Policy
10:173–195
Miller RE, Blair PD (2009) Input–output analysis: foundations and extensions. Cambridge University Press, Cambridge
OECD (2008) Guide to measuring the information society. OECD, Geneva
OECD (2009) Information economy product definitions based on the central product classification. OECD, Geneva
OFCOM (2010) Economic analysis of the TV advertising market. Available at http://
www.ofcom.org.uk/research/tv/reports/tvadvmarket.pdf. Accessed 14 July 2010
Open Society Institute (2009) Television across Europe: more channels, less independence,
overview. Available at http://www.soros.org/initiatives/media/articles_publications/publications/television_20090313/2overview_20080429.pdf. Accessed 12 July 2010
Oulton N (2010) Long term implications of the ICT revolution: applying the lessons of growth theory and growth accounting. Paper presented at the ICTNET Workshop on the Diffusion of ICT and its Impact on Growth and Productivity, Parma, Italy, pp 1–49. Retrieved from https://community.oecd.org/servlet/JiveServlet/previewBody/18559-102-2-63638/Oulton%20N.%20-%20Long%20term%20implications%20of%20the%20ICT%20revolution%20for%20Europe%20applying%20the%20lessons%20of%20growth%20theory%20and%20growth%20accounting.pdf
Roy S, Das T, Chakraborty D (2002) A study on the Indian information sector: an experiment
with input–output techniques. Econ Syst Res 14(2):107–128
Skolka J (1989) Input–output structural decomposition analysis. J Policy Model 11(1):45–66
Steindel C, Stiroh KJ (2001) Productivity: what is it and why we care about it? International
Finance Discussion Papers 638, Board of Governors of the Federal Reserve System
Eurostat (2010) ESA 95 supply, use and input–output tables. Retrieved 10 April 2010, from http://epp.eurostat.ec.europa.eu/portal/page/portal/esa95_supply_use_input_tables/introduction
United Nations (1999) Handbook of input–output table compilation and analysis. United Nations,
New York
van Ark B, O'Mahony M, Timmer MP (2008) The productivity gap between Europe and the United States: trends and causes. J Econ Perspect 22(1):25–44
Yan C-S (1968) Introduction to input–output economics. Holt, Rinehart & Winston, New York
Zakariah AR, Ahmad EE (1999) Sources of industrial growth using the factor decomposition
approach: Malaysia, 1978–1987. Dev Econ 37(2):162–196
Zgajewski T, Hajjar K (2005) The Lisbon strategy: which failure? Whose failure? And why?
Policy Paper, Egmont Paper
Chapter 7
Product Differences and E-Purchasing:
An Empirical Study in Spain
Teresa Garín-Muñoz and Teodosio Pérez-Amaral
7.1 Introduction
E-commerce is gradually changing the way in which many goods and services are
sold and bought in most countries. This phenomenon, made possible by the
internet, has attracted the attention of retailers, marketers, researchers and
policymakers.
The number of internet users has grown significantly over the last few years, from 361 million users worldwide in 2000 to 1,575 million by the end of 2008 (Internet World Stats 2008). This rapid growth in the number of internet users has
promoted the belief that the web represents a huge marketing opportunity. However, there is much evidence to suggest that initial forecasts of the value of business-to-consumer (B2C) sales were over-optimistic. Despite the growing
popularity of e-commerce over the last few years, online purchases continue to
increase at a slower pace than expected. Many researchers assert that the failure in
predictions may be the consequence of a limited understanding of e-consumer
purchasing behavior.
The goal here is to contribute to a better understanding of consumer behavior by
taking into account the fact that not all products or services are equally suitable for
selling online. For different products, the internet shows diverse suitability as a
shopping medium. Therefore, mixing categories in e-shopping behavior research
may yield inconclusive or inconsistent results. For example, a consumer may be
more likely to purchase software online but less inclined to acquire clothing; his
T. Garín-Muñoz (&)
Facultad de Económicas (UNED), Paseo Senda del Rey 11 28040 Madrid, Spain
e-mail: mgarin@cee.uned.es
T. Pérez-Amaral
Facultad Económicas (UCM), Campus de Somosaguas 28223 Madrid, Spain
e-mail: teodosiso@ccee.ucm.es
overall e-shopping intention will be some unknown mixture of high intent and low
intent. This heterogeneity is accounted for by disaggregating by product categories.
This chapter identifies the factors that distinguish internet users who have
purchased specific products and services (e.g., books, music and computers) on the
internet from those who have not. This approach reduces the heterogeneity
resulting from different product types. Two broad categories of variables are explored as possible determinants of internet purchases: socio-demographic factors
(i.e., gender, age and education), and the so-called webographic factors (i.e.,
computer and internet literacy, internet exposure, internet confidence).
This chapter is of interest to advertisers for two reasons. First, some products may be more natural to promote on the internet than others. Second, knowing the profile of buyers in general, and of buyers of particular products and services on the internet in particular, allows advertisers to develop and target their advertising efforts effectively. Here, consumers' characteristics associated with the likelihood of making specific purchases are identified; knowledge of these factors may be of use for creating segmentation and promotion strategies.
Earlier literature studied consumer behavior for specific products or services.
Books and music are the most common choice of online products in these studies
(e.g., Foucault and Scheufele 2002; Gefen and Straub 2004; Liu and Wei 2003;
Zentner 2008). There are also papers on the online sales of groceries (Hansen et al.
2004; Henderson and Divett 2003). Travel-related products have also been studied
(Young Lee et al. 2007; Athiyaman 2002). Bhatnagar et al. (2000) studied the
adoption of 14 types of online products and services. Kwak et al. (2002) explored
the factors that may potentially influence consumer online purchases of nine different types of products.
However, most of these studies have limitations derived from the datasets used. In many cases, the authors used data from internet-based surveys or telephone interviews designed for the objectives of the study; the problem with such datasets is that the sample is usually drawn from a homogeneous group lacking the desired representativeness. In this sense, the present study has the advantage of being based on a survey conducted by the Spanish Statistics Institute that is representative at the national level.
The chapter is organized as follows. In the next section, the penetration of B2C
e-commerce in Spain overall and by product types is established. This is followed
by a brief review of the literature and the theoretical framework used in order to
explain the consumer behavior of Spanish online shoppers. A descriptive analysis
of the data follows in the next section. The empirical model and the results are
discussed next. Finally, the last section contains the main conclusions and possible
lines of further research.
7.2 Online Shopping in Spain
Before studying the behavior of internet shoppers in Spain, it is useful to have a
general picture of the Spanish market size and its recent evolution. Given that the
evolution of e-commerce is dependent on the evolution of internet users, data are
shown with respect to the level and evolution of both variables.
E-commerce in Spain is not yet widespread: just 13 % of individuals were involved in e-commerce activities during the last 3 months of 2007, well below the average figure for the EU27 (23 %).
However, in recent years e-commerce in Spain has been increasing rapidly. The volume of sales reached 4,761 million Euros during 2007, up 71.4 % over the 2006 figure. The evolution during the period 2000–2007 is shown in Fig. 7.1.
The key determinants of this spectacular increase in the volume of sales are the corresponding increases in the number of internet users and internet shoppers; average annual spending, by contrast, increased at a much lower rate of about 13.8 % over the 2006 level. Figure 7.2 shows the evolution of both variables: the percentage of internet users and the percentage of internet users who have purchased online.
When looking at specific categories, it appears that tourism-related products and tickets for entertainment are the most popular products for online purchases. Other important product types are books, electronics, software, clothing and hardware, followed by home apparel and films. The remaining products, with a demand level lower than ten percent of internet shoppers, are food, financial services and lotteries. Figure 7.3 shows these figures for 2007.
Fig. 7.1 Volume of B2C e-commerce in millions of euros, 2000–2007 (204, 525, 1,163, 1,530, 1,837, 2,143, 2,778 and 4,761, respectively)
Fig. 7.2 Percentage of internet users and of internet users who have purchased online, 2000–2007
Fig. 7.3 Internet shoppers (in thousands): overall and by product type
If one looks at the ranking of products in terms of online purchasing popularity, the two most popular categories (Travel and Entertainment) are services rather than goods. One possible explanation for this is that the popularity of internet
purchases depends negatively on the consumer's perceived risks, and the perceived risks differ by product type. In general, consumers tend to be more concerned about the risks involved when purchasing goods online rather than services, because when purchasing goods online the perceived risks are those associated with acquiring services plus the risks related to delays in delivery, difficulties in predicting quality, and whether they will receive what they ordered.
The third most important category in terms of popularity is Books and Newspapers. Although these are goods, they are standardized (homogeneous) goods, so there is no uncertainty about the quality of the product that the consumer will receive. That means the consumer's perceived risks are lower than for other goods, and hence the category's popularity is higher.
It is not surprising that travel-related products rank first in terms of popularity.
In fact, it is well known that the travel industry was one of the first industries to
undertake business online and is perhaps the most mature industry in the B2C
e-commerce area. On the other hand, the popularity of travel-related products is
not specific to Spain. When looking at the ranking of products in terms of popularity in European countries, one finds that travel-related products are the most
popular product in 12 of 30 countries.
For purposes of comparison, Fig. 7.4 shows the profile of internet purchases for Spain and the EU27.
Fig. 7.4 Ranking of internet purchases by products, Spain and EU27
7.3 Literature Review and Theoretical Framework
The benefits and limitations of the internet for consumers have been widely discussed and documented in both the popular press and academic journals (Krantz 1998; Mardesich 1999). Consumer e-commerce favors a professional, transparent market, offering greater choice, cheaper prices, better product information, and greater convenience for the active consumer. But this new medium is not without its limitations: among the barriers are the reduced opportunities for sensory shopping and socialization, and the postponement of consumption or enjoyment of tangible products until they can be delivered. Table 7.1 shows a catalog of benefits and barriers found in previous research.
Table 7.1 E-commerce: benefits and barriers for consumers
Benefits | Barriers
Potentially cheaper retail prices | Necessary IT skills and competencies
Convenience | Cost of platforms and access
Greater product variety and information about that variety | Existing social group values, attitudes and ways of life
Time saving | Ease of using e-commerce sites
Provision of hard to find goods | Lack of trust and concerns regarding the reliability of services
Instant delivery of certain products (e.g., software, electronic documents, etc.) | Lack of service/product information and feedback
Current internet sales figures for different products and services may reflect the channel's strengths and weaknesses. Books, music, travel, computer hardware, and software are some of the best-selling items on the internet (Rosen and Howard 2000). The success of these products can be attributed to a good fit between their characteristics and those of the electronic channel. Several studies have addressed this interaction between product type and channel characteristics (Peterson et al. 1997; Girard et al. 2002; Korgaonkar et al. 2004); this can be done by formally incorporating a product and service classification into the analysis (Peterson et al. 1997).
A number of different classifications have been proposed in the literature.
Nelson (1970) distinguished two types of products (search and experience products), depending on whether the attributes can be fully ascertained prior to use or
cannot be determined until the product is used. According to this classification, the
probability of adopting e-shopping will be higher for search products than for
experience products.
Although Nelson’s classification is useful, Peterson et al. (1997) proposed a
more detailed classification system in which products and services are categorized
along three dimensions that are even more relevant in the context of the internet.
The three dimensions are:
• Cost and frequency of purchase: from low cost, frequently purchased goods
(e.g., consumable products such as milk) to high cost, infrequently purchased
goods (e.g., durable products such as automobiles).
• Value proposition: tangible and physical products (e.g., clothing) versus intangible and service-related products (software).
• Degree of differentiation: branded products versus generic products.
With this classification, Peterson et al. (1997) concluded that products and services that have a low cost, are frequently purchased, have an intangible value proposition, and/or are relatively highly differentiated are more amenable to being purchased over the internet. Phau and Poon (2000) applied this classification system in an empirical study and found similar results.
Taking into account the differing suitability of products for online purchase, this study investigates the key factors explaining online shopping behavior on a product-by-product basis. The explanatory variables of the proposed model belong to two broad categories: the socio-demographic and the so-called webographic characteristics of the potential online shopper. The results will make it possible to determine whether the various factors affect all product categories in a similar way or differently.
7.3.1 Socio-Demographic Factors
Consumer demographics are among the most frequently studied factors in online
shopping research. The effects of gender, age, education, and culture of consumers
on online shopping behavior have been investigated in Bellman et al. (1999), Li
et al. (1999), and Swaminathan et al. (1999), among others.
The study of how gender relates to the purchase decision has always been of
interest to the academic world, as women make the purchase decision in many
product categories. However, the new shopping channel provided by the internet
seems to yield a different, if not opposite, gender pattern. Such a change in gender
pattern in the online shopping environment has been explained by using different
models or factors, including shopping orientation, information technology
acceptance, product involvement, product properties, and perceived risks. The
results of this study will provide information on how gender affects adoption of
online purchasing, depending on the product in question.
The effects of age on the adoption of e-shopping have been widely studied, but
there are discrepancies among the results. For example, some studies identified a
positive relationship between consumer age and the likelihood of purchasing
products online (Stafford et al. 2004), whereas others reported a negative relationship (Joines et al. 2003) or no relationship (Rohm and Swaminathan 2004).
Such a discrepancy might be caused by the use of different groups of products for
the study, by the use of simple versus multiple regression models, or by the use of
nonrepresentative samples. This study gives us the opportunity to help clarify
these results by considering how different products may be influenced by age.
Traditionally, internet users were reported as having high levels of education.
However, the changing demographics among internet users indicate increasing
web usage across education levels. This study assesses to what extent the level of
studies has a positive effect on actual e-shopping adoption. The
hypothesis is that, ceteris paribus, more educated people are more likely to adopt
online shopping for several reasons. First, the kind of products available online
will probably be more suited to their tastes. Second, the higher level of studies
probably makes it easier for them to use the internet. The results of the study will
reveal if education has the same effect for all the products or if there are some
categories of products that are more influenced than others.
7.3.2 Webographic Factors
Even though socio-demographic factors must be considered as predictors of online
shopping behavior, computer and internet-related webographic characteristics
seem more closely related to actual online purchase behavior. The selected
explanatory variables in this case are computer literacy, internet literacy, internet
exposure, and internet confidence.
Consumers’ computer/internet literacy refers to their knowledge of the computer and the internet and the nature of their usage. Education and information
technology training (Liao and Cheung 2001) and internet knowledge (Goldsmith
and Goldsmith 2002; Li et al. 1999) were found to positively impact consumers' e-shopping adoption. Thus, according to the findings of previous work, one would
expect positive effects of both variables (computer literacy and internet literacy)
on actual e-shopping adoption. Moreover, for these variables, it would be interesting to know whether the effects differ depending on the type of product in
question.
Previous studies have consistently found that the actual use of e-shopping is
positively associated with internet usage (e.g., Bellman et al. 1999; Forsythe and
Shi 2003). Therefore, the expectation is to find a positive relationship between the
frequency of use of the internet (Internet exposure) and e-shopping adoption.
It is likely that heterogeneous consumer perceptions of the risks and benefits of the internet influence whether or not consumers adopt this channel for shopping. The expectation is that the higher the level of internet confidence, the higher the likelihood of making purchases online. The results of this study should make it possible to determine whether the level of internet confidence affects the adoption of internet purchasing in the same way for all products or differently.
7.4 Data Analysis
In this study, the unit of analysis is the individual consumer. Data are from the
2007 Survey on Information and Communications Technologies Equipment and
Use in Households,1 conducted by the Spanish Statistics Institute. The focus of the
study is a sample of 8,837 individuals who have used the internet during the last
three months.
Among the respondents, there were 50.8 % males and 49.2 % females, and
33.3 % with a bachelor’s or higher degree. A majority of the respondents (43.2 %)
was in the 30–44 age category, and 55.4 % said they used the internet at least
5 days per week.
The dataset distinguishes between the following categories of products:
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
Travel and holiday accommodation.
Tickets for events.
Books, magazines, e-learning material.
Electronic equipment (including cameras).
Computer software (including video games).
Clothes and sporting goods.
Computer hardware.
Household goods (e.g., furniture, toys, etc.).
Films and music.
Food and groceries.
Stocks, financial services, insurance.
Lotteries and bets (gambling).
Table 7.2 gives a detailed characterization of consumers for different categories
of products. It is important to note the variability of the characteristics depending
on the product under consideration.
According to the gender of the consumers, the participation of men ranges from
a maximum of 76.9 % (hardware) to a minimum of 46.2 % (food/groceries).
Age profiles appear different for different categories. For instance, although the
majority of online buyers belong to the 30–44 age group, large differences have
been found depending on product types.
1 The ICT-H 2008 Survey on Information and Communication Technologies Equipment and Use in Households has been carried out by the National Statistics Institute (INE) in cooperation with the Statistics Institute of Cataluña (IDESCAT), the Statistics Institute of Andalucía (IEA), the Statistics Institute of Navarra (IEN) and the Statistics Institute of Cantabria (ICANE) within the scope of their respective Autonomous Communities. The Survey follows the methodological recommendations of the Statistical Office of the European Communities (Eurostat). It is the only source of its kind whose data are strictly comparable, not only among Member States of the Union, but also among other international scopes. The Survey (ICT-H 2008) is a panel-type data survey that focuses on persons aged 10 and above residing in family dwellings and collects information on household equipment related to information and communication technologies (television, telephone, radio, IT equipment) and the use of computers, the internet and e-commerce. Interviews are carried out in the second quarter of the year, by telephone or by personal interview. For each Autonomous Community, an independent sample is designed to represent it, given that one of the objectives of the survey is to facilitate data on that breakdown level. Although the population scope has not varied as compared with previous surveys, it is important to mention that, for the purpose of achieving greater comparability with the data published by Eurostat, the results published on the INE website refer to dwellings inhabited by at least one person aged 16–74 and persons of that same age group. Likewise, the data concerning minors refer to the group aged 10–15 (the group researched previously was aged 10–14).
Table 7.2 Data description: percentages of internet purchasers by characteristics
Rows: age (≤18, 19–29, 30–44, 45–64, ≥65); gender (male, female); level of studies (primary, 1st level of secondary, 2nd level of secondary, vocational education, university); computer literacy (low, medium, high, very high); internet literacy (low, medium, high, very high); internet exposure (daily, weekly, at least once a month, not every month); internet confidence (low, medium, high, very high)
Columns: Overall (2,989), Travel (1,807), Entertainment (977), Books (605), Electronics (511), Software (486), Clothing (483), Hardware (438), Home (409), Films/music (357), Food (234), Financial services (209), Lotteries (110)
In parentheses, below each category, is the total number of internet purchasers
144
T. Garín-Muñoz and T. Pérez-Amaral
The profile of online buyers in terms of level of studies also differs by product type. The share of buyers with a university degree ranges from 38.2 % in the case of lotteries to 65.1 % for financial services purchased on the web.
7.5 Model Specification and Results
When consumers make e-shopping decisions, they may face a binary choice, such as whether or not to purchase online (e.g., Bellman et al. 1999; Bhatnagar et al. 2000; Lohse et al. 2000; van den Poel and Buckinx 2005), or multiple categorical choices, such as the frequency of e-shopping (treated as a nominal variable by Koyuncu and Bhattacharya 2004) or the choice among conventional store, internet and catalog.
This work studies the key factors for purchasing online on a product-by-product basis. A separate model is estimated for each category of products. The dependent variable in each model is binary, taking the value 1 if the individual decides to purchase the product online and 0 otherwise. Logistic regression models are used to estimate the conditional probability of the dependent variable.
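The chapter reports the results of these logits as average individual elasticities of the purchase probability (Table 7.4). As a hedged illustration, not the authors' code, the following minimal Python sketch fits one logit per product category and averages the individual elasticities; the data frame "df" and all column names are hypothetical stand-ins for the survey variables.

import numpy as np
import statsmodels.api as sm

def fit_category_logit(df, category, regressors):
    """Fit P(buys category online | x) by logit; return the fitted model
    and the average individual elasticities of the purchase probability."""
    X = sm.add_constant(df[regressors])
    fit = sm.Logit(df[f"buys_{category}"], X).fit(disp=0)
    p = fit.predict(X)  # fitted purchase probabilities
    # For a logit, d ln p / d ln x = beta * x * (1 - p); average over individuals.
    elas = {v: float(np.mean(fit.params[v] * df[v] * (1.0 - p)))
            for v in regressors}
    return fit, elas

# One equation per product category, as in Table 7.4, e.g.:
# fit, elas = fit_category_logit(df, "travel",
#                                ["age", "studies", "internet_exposure"])

The formula beta * x * (1 - p) is the standard probability elasticity of a logit with respect to a continuous regressor, which corresponds to the "average of individual elasticities" reported in the chapter's tables.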
The results indicate whether consumer behavior is similar for all categories of products or whether, on the contrary, the determinants depend on the category in question. Table 7.3 summarizes the variables used in the analysis, while Table 7.4 shows the main results for each of the 12 categories, as well as the overall results.
In Table 7.4, we show elasticities of probabilities to illustrate the signs and
magnitudes of the effects of the main determinants of web purchasing for the 12
categories of products, as well as overall. The significance of each variable is
shown with the t-ratio below each elasticity value.
To facilitate interpretation of these results, the main findings are summarized in terms of the effects of the potential determinants on the likelihood of being an internet buyer.
Gender: Gender plays an important role in web purchasing in many categories, as well as overall. In general, men are more likely than women to be online buyers although, as expected, the effect of gender is mixed. The likelihood of purchasing on the internet is higher for women in categories such as food/groceries and travel-related products, whereas computer hardware and financial services are more likely to be bought by men.
One possible explanation of these results is that for product categories where men have more experience as shoppers (for example, hardware, software, and electronics), being male significantly increases the probability of purchase, while in categories such as food and groceries, tickets for events and travel-related services, the effect of being male is significantly negative.
Table 7.3 Explanation of variables

Dependent variable: 1 if buyer; 0 if non-buyer.

Explanatory variables
Demographic factors:
• Gender: 1 if male; 0 if female.
• Age: five age groups are considered: ≤18; 19–29; 30–44; 45–64; ≥65.
• Level of studies: the years of study required to obtain the degree of the respondent.
Webographic factors:
• Computer literacy: self-elaborated index obtained from eight different routines the respondent may be able to do.
• Internet literacy: self-elaborated index obtained from eight tasks on the web that the respondent may be able to perform.
• Internet exposure: number of days the respondent uses the internet during the month.
• Internet confidence: self-elaborated index obtained as an average of the respondent's confidence in the following: giving personal information through the web, giving personal information in a chat room, giving an e-mail address, downloading programs/music, and internet banking.
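Table 7.3 does not spell out the exact scoring of the self-elaborated indices. As a hedged illustration only, assuming each routine, task and confidence item is stored as a numeric survey column, the three webographic indices could be built as simple row means; every column name below is hypothetical.

import pandas as pd

computer_tasks = [f"pc_task_{i}" for i in range(1, 9)]   # eight routines
internet_tasks = [f"net_task_{i}" for i in range(1, 9)]  # eight web tasks
confidence_items = ["info_web", "info_chat", "give_email",
                    "download_programs", "internet_banking"]

def add_webographic_indices(df: pd.DataFrame) -> pd.DataFrame:
    """Append the three self-elaborated indices of Table 7.3 as row means."""
    out = df.copy()
    out["computer_literacy"] = out[computer_tasks].mean(axis=1)
    out["internet_literacy"] = out[internet_tasks].mean(axis=1)
    out["internet_confidence"] = out[confidence_items].mean(axis=1)
    return out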
Table 7.4 Average of individual elasticities of probabilities (N. obs = 8,837; t-ratios in parentheses below each elasticity). For each of the 12 product categories (Travel, Entertainment, Books, Electronics, Software, Clothing, Hardware, Home apparel, Films/music, Food/groceries, Financial services, Lotteries), as well as overall, the table reports the elasticities of the purchase probability with respect to gender, the age groups (19–29, 30–44, 45–64, ≥65), level of studies, computer literacy, internet literacy, internet exposure and internet confidence, together with the likelihood ratio statistic, the pseudo R² and, in the last row, the age at which the probability of purchase is highest.
Notes: (1) t-ratios in parentheses. (2) Insignificant variables have been deleted and the corresponding equations re-estimated. (3) LR is the likelihood ratio test for the joint significance of all the explanatory variables; under the null of joint insignificance, it is distributed as χ² with degrees of freedom equal to the number of variables tested. (4) The age of maximum purchase probability is obtained from an alternative formulation of the basic logit model in which age and age squared are used as explanatory variables.
Age: The results suggest that age is a good predictor of web purchasing in many
categories, as well as overall. The only exception in the 12 product categories
studied was films and music. For the rest of the categories, the likelihood of
purchasing on the internet increases with age up to a certain point and then
decreases.
A possible explanation is that, as people mature, they learn more through
experience about the products in the marketplace and form more confident opinions about what suits their tastes and what does not. Since they know what they
need, they do not have to feel and touch and be reassured by a salesperson that
what they are purchasing is really what they need. Through experience they gain
the confidence to choose products through their own initiative.
A second reason that older people might find internet stores more attractive is that their lives are typically more time constrained. As people climb higher in
their professional careers, the demands on their time increase, forcing them to look
for retail formats where they have to spend the least time. For this, the internet is
ideal. The reader should keep in mind that these consumers are already using the
internet. Thus, elderly individuals who may have an aversion to computers and the
internet are not included in the study.
A third reason might be the positive correlation between income and age. Since
income is not available in the dataset, the coefficient of age may be isolating the
effect of income as well as the effect of age.
Finally, the last row of Table 7.4 reports, for each category, the age at which the probability of making a web purchase is at its maximum.²
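The calculation behind this footnote can be sketched as follows, again with hypothetical names: refit the logit with age and its square, then locate the turning point of the quadratic index.

import statsmodels.api as sm

def age_of_max_probability(df, category, other_regressors):
    """Age at which the fitted purchase probability peaks, from a logit
    with a quadratic age term (a sketch, not the authors' code)."""
    data = df.assign(age2=df["age"] ** 2)
    X = sm.add_constant(data[["age", "age2"] + other_regressors])
    fit = sm.Logit(data[f"buys_{category}"], X).fit(disp=0)
    b1, b2 = fit.params["age"], fit.params["age2"]
    # The index b1*age + b2*age^2 peaks at -b1/(2*b2) when b2 < 0, and the
    # logit probability is monotone in the index, so this age maximizes it.
    return -b1 / (2.0 * b2)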
Education: Education is an important determinant of online shopping behavior overall and for most of the studied categories. With the exception of lotteries and home apparel, there is a positive and significant relationship between level of education and the probability of purchasing on the internet. The highest elasticity values were found for financial services (2.15), books, magazines and e-learning material (2.01), tickets for events (1.86), travel and holiday accommodation (1.46), and food/groceries (1.38).
Computer literacy: All the studied categories show a positive and significant
relationship between the computer skills of the consumer and the probability of
purchasing online. The highest effects of computer literacy on the likelihood of
being an online shopper occur for products like computer hardware (0.73),
financial services (0.70), and electronic equipment (0.63).
As consumers accumulate computer skills, these products cease to be a black
box and become more like any other tool that consumers use every day. Hence, we
expect consumers with greater computer experience to be more favorably inclined
to shopping online.
² To do so, a different model was used that includes the original age variable and also its square, to allow for a nonlinear effect.
Internet literacy: According to the results, the likelihood of purchasing on the internet increases with the consumer's experience of the internet. These results apply to all categories except financial products, lotteries and food/groceries. The highest elasticities with respect to internet literacy correspond to computer software (0.97) and films/music (0.70).
A plausible interpretation is that, as consumers become more knowledgeable, their perception of the risk of online purchasing decreases; in addition to risk, there may be individual characteristics and idiosyncrasies that affect the likelihood of purchasing on the internet. In either case, the likelihood of shopping online increases with internet literacy.
Internet exposure: There is a positive and significant relationship between this variable and the probability of purchasing online. This applies to all 12 categories, as well as overall. Financial services and books, magazines and e-learning material show the highest elasticity values (1.81 and 0.96, respectively).
Internet confidence: The level of trust of consumers in the internet is found to
have a large impact on the probability of purchasing online. The results show that
this variable has a positive and significant effect for all the considered product
categories, as well as overall.
Some of the estimated elasticities are higher than 1 (food and groceries, 1.30;
films and music, 1.37; books and magazines, 1.15; computer hardware, 1.05;
electronic equipment, 1.10; financial services, 1.56; travel and holiday accommodation, 1.18; tickets for events, 1.36; lotteries, 1.82). Elasticities below 1 are
found for the following categories: home apparel, clothing and sporting goods, and
computer software. However, even in these cases, the estimated values are relatively high and significant.
7.6 Conclusions
The main objective of this work was to identify the factors that distinguish internet
users who have purchased specific products and services on the internet from those
who have not. First, it was found that demographic and webographic factors are
important determinants of the probability of purchasing online. However, the
empirical analysis shows that the impact of the considered variables differs
depending on the type of product. Thus, the results confirm the central premise of the paper: product characteristics influence the consumer's decision to use the internet for purchasing purposes.
Overall, men are more likely to purchase online than women. However, the analysis also identifies products and services that are more popular among women and others that are more popular among men. Food, travel-related products and tickets for entertainment are bought more intensively by women, while men are more likely to buy hardware, software and electronic products.
For most products, the probability of purchase increases with age up to a certain
point, at which it starts decreasing. An alternative model was estimated in which a
quadratic term was included in age to allow for a nonlinear response. This permits
one to estimate the age that gives the maximum probability of purchase by the
different categories. These ages vary between 31.0 for clothing and 47.1 for
software.
Education generally has a positive and significant effect on the probability of purchase for most categories of products, although the size of the effect varies by category. The highest elasticities were found for financial services, books, travel-related products, entertainment and food/groceries.
All the categories show a positive and significant relationship between the
computer skills of the consumer and the probability of purchasing online. The
highest impact of computer literacy is on computer hardware, lotteries, financial
services and electronic equipment.
The results show that internet skills are a good predictor of the probability of
web purchasing for most of the considered product categories. With the exception
of food and groceries, financial services and lotteries, internet literacy has a
positive and significant impact on the likelihood of purchasing online.
The frequency of use and the level of trust of consumers in the internet are also
found to have a large impact on the probability of purchasing online, although in
varying degrees according to the categories. For instance, internet confidence has a
high impact in all categories, with high values of the elasticities in all the groups
and values greater than 1 in most.
According to the empirical findings presented, the future evolution of B2C e-commerce in Spain will depend on the penetration of the internet, the familiarity of consumers with the use of computers and the internet, and the trust of individuals in the security of the internet. The results thus suggest that providing consumers with secure web systems and making online trading easier to use would improve their acceptance of e-shopping.
These results should be useful for retailers devising marketing strategies and for policymakers deciding whether and how to promote e-commerce in order to close the gap between Spain and the rest of the European Union.
References
Athiyaman A (2002) Internet users’ intention to purchase air travel online: an empirical
investigation. Mark Intell Plann 20(4):234–242
Bellman S, Lohse GL, Johnson EJ (1999) Predictors of online buying behavior. Commun ACM
42(12):32–38
Bhatnagar A, Misra S, Raghav Rao H (2000) On risk, convenience, and internet shopping
behavior. Commun ACM 43(11):98–105
Foucault B, Scheufele DA (2002) Web vs. campus store? Why students buy textbooks online.
J Consum Mark 19(5):409–423
Forsythe SM, Shi B (2003) Consumer patronage and risk perceptions in internet shopping. J Bus
Res 56(11):867–875
Gefen DE, Straub DW (2004) Trust and TAM in online shopping: an integrated model. MIS Q
27:51–90
Girard T, Silverblatt R, Korgaonkar P (2002) Influence of product class on preference for
shopping on the internet. J Comput Mediated Commun 8(1). Available at http://
jcmc.indiana.edu/vol8/issue1/girard.html
Goldsmith RE, Goldsmith EB (2002) Buying apparel over the internet. J Prod Brand Manage
11(2):89–102
Hansen T, Jensen JM, Solgaard HS (2004) Predicting online grocery buying intention: a
comparison of the theory of reasoned action and the theory of planned behavior. Int J Inf
Manage 24:539–550
Henderson R, Divett MJ (2003) Perceived usefulness, ease of use and electronic supermarket use.
Int J Hum Comput Stud 3:383–395
Internet World Statistics (2008) Available at http://www.internetworldstats.com/stats.htm
Joines J, Scherer C, Scheufele D (2003) Exploring motivations for consumer web use and their
implications for e-commerce. J Consum Mark 20(2):90–109
Korgaonkar P, Silverblatt R, Becerra E, (2004) Hispanics and patronage preferences for shopping
from the internet. J Comput Mediated Commun 9 (3). Available at http://jcmc.indiana.edu/
vol9/issue3/korgaonkar.html
Koyuncu C, Bhattacharya G (2004) The impacts of quickness, price, payment risk, and delivery
issues on on-line shopping. J Socio-Econ 33(2):241–251
Krantz M (1998) Click till you drop. Time 20:34–41
Kwak H, Fox RJ, Zinkhan GM (2002) What products can be successfully promoted and sold via
the internet? J Advertising Res 42(1):23–38
Li H, Kuo C, Russell MG (1999) The impact of perceived channel utilities, shopping orientations
and demographics on the consumer’s online buying behavior. J Comput Mediated Commun
5(2). Available at http://jcmc.indiana.edu/vol5/issue2/hairong.html
Liao Z, Cheung MT (2001) Internet-based e-shopping and consumer attitudes: an empirical
study. Inf Manage 39(4):283–295
Liu X, Wei KK (2003) An empirical study of product differences in consumers’ e-commerce
adoption behavior. Electron Commer Res Appl 2:229–239
Lohse GL, Bellman S, Johnson EJ (2000) Consumer buying behaviour on the internet: findings from panel data. J Interact Mark 14(1):15–29
Mardesich J (1999) The web is no shopper’s paradise. Fortune 140(9):188–198
Nelson P (1970) Information and consumer behavior. J Political Econ 78(2):311–329
Peterson RA, Balasubramanian S, Bronnenberg BJ (1997) Exploring the implications of the
internet for consumer marketing. J Acad Mark Sci 25(4):329–346
Phau I, Poon SM (2000) Factors influencing the types of products and services purchased over the
internet. Internet Res 10(2):102–113
Rohm AJ, Swaminathan V (2004) A typology of online shoppers based on shopping motivations.
J Bus Res 57(7):748–758
Rosen KT, Howard AL (2000) E-retail: gold rush or fool’s gold? California Manage Rev
42(3):72–100
Stafford TF, Turan A, Raisinghani MS (2004) International and cross-cultural influences on
online shopping behavior. J Glob Inf Mgmt 7(2):70–87
Swaminathan V, Lepkowska-White E, Rao BP (1999) Browsers or buyers in cyberspace? An
investigation of factors influencing electronic exchange. J Comput Mediated Commun 8(1).
Available at http://jcmc.indiana.edu/vol5/issue2/swaminathan.htm
Van den Poel D, Buckinx W (2005) Predicting online-purchasing behavior. Eur J Oper Res
166(2):557–575
Young Lee H, Qu H, Shin Kim Y (2007) A study of the impact of personal innovativeness on online travel shopping behavior: a case study of Korean travelers. Tourism Manage 28:886–897
Zentner A (2008) Online sales, internet use, file sharing, and the decline of retail music specialty
stores. Inf Econ Policy 20(3):288–300
Chapter 8
Forecasting the Demand for Business
Communications Services
Mohsen Hamoudia
For telecommunications and information and communication technology (ICT)
providers, analyzing and forecasting the demand for business communication
services (BCS) is a critical but not an easy task. Accurate demand analysis and forecasting¹ enables them to anticipate and meet market demand and customers' expectations, determine the service level to be provided to companies, determine the size and timing of investments in networks, new technologies and new services, and plan for the resources that support development and growth. In addition, providers continually need to assess and estimate the financial and economic benefits of their offers, and to assess companies' ability and willingness to pay for such services.
The focus of this chapter is on estimates of BCS demand and supply in the
French BCS market.2 The chapter first provides a broad overview of the demand
for BCS: it briefly describes the scope of BCS products and solutions, their
structure, evolution and key drivers. Then, it presents the specification and estimation of the demand and supply models and some illustrative forecasts.
8.1 Introduction/Overview

ICT markets have undergone a profound metamorphosis in recent decades. Factors underlying this evolution include: the deregulation and liberalization of telecommunications markets, which opened franchised monopoly markets to competition throughout the world (including emerging countries); the development of new technologies and improvements in existing ones; a revolution in service offerings and innovations in telecommunications and information technology (IT)³; aggressive competition from many emerging countries (Brazil, India, China, …) offering competitive products (smartphones, tablets, PBX, …) in search of markets; and the globalization trend marked by the emergence of new multinational enterprises, which reinforced the standardization of solutions, equipment and services.

¹ This chapter uses demand analysis and forecasting interchangeably.
² The modeling and forecasting processes were adopted from Hamoudia and Scaglione (2007).

M. Hamoudia (&)
Orange—France Telecom Group, Paris and ESDES Business School, Lyon, France
e-mail: mohsen.hamoudia@orange.com
Finally, the convergence of IT and telecommunications is driven, in part, by the rivalry between IT providers and telecommunications operators. This rivalry complicates demand analysis and forecasting. In particular, telecommunications operators are progressively entering formerly exclusive IT company markets, for
example, helpdesk, data centers and business applications. Simultaneously, IT
providers are encroaching on activities that are usually reserved for telecommunications operators. Thus, telecommunications operators (e.g., BT Global Services, Orange Business Services and T-Systems) are offering both IT and
telecommunications services. They are making strong bids to enter IT service
delivery markets. Conversely, IT companies such as Atos, EDS, and IBM Global
Services are offering services which overlap with the core domains of telecommunications operators—mainly managed network services.
Further, after a long period of industry stability in terms of market structure and customer usage, the demand for BCS has moved to more competitive markets in which products and services have shorter lifecycles. All of these factors have led to dramatic changes in the ICT industry and have added complexity to demand estimation.
The convergence of IT and telecommunications markets has created more complex behavior among market participants. Customers expect new product offerings that match the emerging needs fostered by their growth and globalization.
Enterprises require more integrated solutions for security, mobility, hosting, new
added-value services, outsourcing and voice over internet protocol (VoIP). This
changing landscape has led to the decline of traditional product markets for
telecommunication operators.
8.2 Defining Business Communication Services
For this study, a narrower specification of ICT is applied that is more oriented
toward IT and telecommunications services.4
³ In this chapter, IT refers to a sub-set of the ICT sector. In this context, IT does not include telecommunications.
⁴ The Organization for Economic Cooperation and Development (OECD) and the European Union (EU) define ICT in a broad manner. It includes, for example, telecommunications; consumer electronics; computers and office machinery; measurement and control instruments; and equipment and electronic components.
Table 8.1 Main business communication services products

Telecoms core:
• Mobile
• Fixed voice
• Fixed data

IT core:
• IT infrastructure and equipment
• Software and applications
• Services related to "infrastructure and equipment" and to "software and applications"

Source: the author
8.2.1 Scope of the Business Communication Services
BCS encompasses all products and services dedicated to companies for their professional activities, ranging from core telecommunications to information technology (see Table 8.1).
8.2.2 The Telecommunications Core
The demand for core telecommunications products and services consists of the
telecommunications network services on fixed and mobile networks—regardless
of the technology—for voice and data communication. This part of the network is
also referred to as connectivity. The three core telecommunication products and
services are as follows: the traditional fixed voice including the public switched
telecommunications network (PSTN); the fixed data network; and mobile services.
8.2.3 The IT Core
IT core products and services are composed of the IT infrastructure including
combinations of IT and telecommunication infrastructures and IT applications.
The IT infrastructure consists of transmission networks and associated equipment
including computer mainframes/servers and related services. At the edge of the
network, a wide variety of equipment is attached. Then there are numerous layers of applications that drive usage. All of this must be managed and maintained. Adding to the mix is the proliferation of mobile devices: conventional mobile phones, smartphones, tablets, hot spots, etc. The purpose here is not to enumerate all of the products, software and applications (the "brains" of the IT infrastructure) and services, but rather to illustrate the complexity of BCS. The sectors
are complex and not easily delineated. When one discusses the demand for BCS,
what is the objective? Is it forecasting the demand by service, function or
equipment?
8.3 Demand Drivers for BCS

There are three basic drivers of BCS revenues: (a) the macroeconomic environment, (b) enterprise transformation, and (c) technology and innovation.
8.3.1 Macroeconomic Environment
Telecommunications services and IT market success rely on the immediate macroeconomic and regulatory environments. The principal market growth and risk drivers vary across Western Europe, North America, Asia Pacific and emerging countries. These drivers include, inter alia, gross domestic product (GDP) growth, emerging markets, and continued liberalization and deregulation. Such positive drivers are offset by risks to the BCS markets including, among others, the high unemployment rate in the EU (around 10 % in 2010) and the United States, coupled with sluggish job creation in 2011.
8.3.2 Enterprise Transformation

Drivers related to "enterprise transformation" are important, as the business environment has introduced new enterprise services and globalization. The most important emerging characteristics are: (a) new relationships with suppliers and customers; (b) new contracts and service agreements; (c) corporate offshore locations; and a variety of other factors.
The growth drivers of demand related to this transformation include: (a) substantial investment in proprietary areas such as homeland security, healthcare and infrastructure security; and (b) revival of the ICT sector via technological innovation, for example LTE (mobile 4G), VoIP, the expansion of ultra-broadband and the increasing usage of computer tablets and other mobile devices. Among the other drivers are: increased company mobility; implementation of new infrastructure technology (e.g., middleware, internet, integration platforms and open source); outsourcing, especially of infrastructure; the emergence and adoption of new cloud computing business models; and the emergence of unified communications and collaboration (UC&Cs), to name a few.
8.3.3 Innovation and New Technologies' Impact on BCS Demand
Innovation and new technologies drive the change in demand for IT and telecommunication services. They are reshaping the demand for BCS, as the lifecycles of many products and services are shorter and companies are in continuous business transformation. Table 8.2 shows the evolution of usage and the new technological trends in demand.
Innovation is related not only to devices, but also to applications, analytics, and processes within companies. Device innovations include the smartphone, the tablet PC, and other mobile devices; in terms of applications, the key applications fostering usage innovation are cloud computing, mobility, UC&Cs, and collaborative applications.
Table 8.2 Evolution of usage and new technological trends in the demand for business communication services

Devices
• Decreasing: desktop PC, netbook PC, (conventional) mobile phone, fixed phone, 3G dongle, webcam, fax, GPS receiver
• Remaining: laptop PC, conferencing phone
• Increasing: smartphones, tablet PC, surface technology

Applications
• Decreasing: fixed voice, email
• Remaining: SMS, fixed broadband, mobile voice
• Increasing: video communication, voice as an IT service, augmented reality, virtual desktop, collaborative tools, social networks, instant messaging, storage apps, mobile broadband, business apps, payment apps, identity apps, unified communications, image and video

Source: Orange Business Services 2011
In summary, the demand for BCS will be driven in the near future by mobile devices, including tablets, smartphones and laptops; cloud computing; social network technologies; large data requirements (e.g., "Big Data"); and emerging markets such as Brazil, Russia, India, and China (the BRICs).
8.4 Modeling and Forecasting the Demand for BCS
In this section, we specify and estimate a multi-equation model to forecast the
demand and the supply of BCS for the French market. This work is based on an
earlier study (Hamoudia and Scaglione 2007). A multiple-equation model that
integrates supply and demand sides is estimated, since a single equation model
cannot capture the interactions among services.
In Europe, France is the third largest BCS market in terms of revenue; only the United Kingdom and German markets are larger. From 2000 to 2010, the French market for IT and telecommunications BCS grew by 6.2 % per annum: total spending reached €38.1 billion in 2010, up from €20.1 billion in 2000.
Compared with Hamoudia and Scaglione (2007), this model adds new drivers and explanatory variables that affect demand, such as the increasing role of cloud computing and of infrastructure/platform/software as a service (IaaS/PaaS/SaaS); the development of UC&Cs; and the increased adoption of video and image solutions. The volatility of the euro/US dollar exchange rate is also taken into account. Figure 8.1 highlights the overall framework of the model and Table 8.3 lists the variables used in the model.
8.4.1 Supply Variables
The supply variables include:
• Price of each service;
• Network capacity;
• Network accesses provided by telecom and IT providers.
The price variables are expressed as average revenue per user (ARPU) or average price per staff member (€/staff); bundled-services catalog prices, which represent the negotiated product prices, are also included. Some prices in the product catalog are 20–30 % below publicly listed prices, for instance.
The capacity variables represent the ability of providers to meet customer needs
in terms of bandwidth (broadband, ultra-broadband and wireless broadband) and
traffic (e.g., VoIP).
Fig. 8.1 Overall framework for the business communication services market, updated in 2012. On the supply side, IT and telecom providers set the prices of data, mobile data, fixed voice, mobile voice, communication, applicative solutions, desktop management and IT infrastructure services, together with network capacity (bandwidth) and network accesses. The demand side covers the same eight service families (data services such as security software, network devices, WAN and broadband accesses, managed services and remote access; voice services such as PBX/iPBX and fixed voice; mobile data and mobile voice services; communication services such as messaging, collaborative applications and videoconferencing; applicative solutions such as business intelligence and M2M; desktop management services; and IT infrastructure services such as servers, LAN and WAN). Both sides operate in an environment shaped by technology and innovation (very high broadband; tablets and surface technology; IaaS, PaaS, SaaS and virtualization; the internet of things, e.g., M2M; 4G and LTE; the future workspace; image and video; security services), regulation (level of market openness, interconnection, investments in technology and innovation, local loop competition) and the economy (spending in ICT, company turnover, GNP/GDP, mergers and acquisitions, number of sites, emerging countries, exchange rates, and MNC, SME and LE headcounts).
8.4.2 Demand Variables
Demand services are categorized as follows:
• Data services, including security software, network devices, wide area network (WAN), broadband and ultra-broadband accesses, managed services and remote access services (RAS);
• Voice services, comprising traditional (PBX) and IP (iPBX) private branch exchange services and fixed voice;
• Mobile data services, including wireless broadband and new mobile applications;
• Mobile voice services, which include mobile voice;
• Communication services, containing collaborative applications support, video-conferencing and messaging for SoHo, SMB and large enterprises;
• Applicative solutions and services, related to new services such as business intelligence and machine-to-machine (M2M) transmission;
• Desktop management services, including infrastructure software (Citrix) and workstation (PDA and laptop) services; and
• IT infrastructure services, including servers, LAN, and WAN.

Table 8.3 Variables used in the models

1. ICT providers' supply: price of data services (P_D), price of mobile data services (P_MD), price of fixed voice services (P_FV), price of mobile voice services (P_MV), price of communication services (P_COM), price of applicative solutions services (P_APL), price of desktop management services (P_DK) and price of IT infrastructure services (P_ITINF), all in €; network capacity and broadband (NC, Mbits); bundles and catalogue of services (BUNCAT, %); network accesses (NA, thousands).
2. Technology and innovation (dummy variables): VoIP (VoIP), WLAN (WLAN), IP transformation (iPTRANS), IP PBX (iPBX), 3G/3G+ wireless (3G_WIRS), cloud computing, i.e., IaaS, SaaS and PaaS (CC).
3. Regulation: level of open market (OPEN_MKT, scale of 1 to 5), interconnection (INX, dummy variable), investments in ICTs and innovation (INV, M€), local loop competition (COM_LLOOP, dummy variable).
4. Economy and business: spending in BCS (SPEN, M€), customers' turnover (TURNOV, M€), GNP/GDP (GDP, M€), mergers and acquisitions (MERGE, M€), number of sites (SITES, units), business segment (BUS_SEG, dummy variable), €/$ exchange rate (EXCH)¹, competition in BCS services (COMP_BCS) and MNC, SME and LE headcount (EMP, thousands).
5. Demand for IT/telecom services (all in M€): data services (DAT_SER): security software and services, network devices, data WAN accesses, internet accesses, managed services, remote access (RAS); voice services (V_SER): PBX (TDM) and iPBX, fixed voice; mobile data services (MD_SER): workstation (PDA, laptop); mobile voice services (MV_SER): mobile voice; communication services (COM_SER): messaging and collaborative applications, videoconferencing; applicative solutions (APL_SER): business intelligence, M2M (machine to machine); desktop management services (DESK_SER): infrastructure software (Citrix, …), workstation (PDA, laptop); IT infrastructure services (IT_SER): servers, LAN, WAN.

(1) As some statistics were based on US$, the exchange rate is provided by the IMF.
8.4.3 Independent Variables
• Technology and innovation variables represent the availability and intensity of
new services deployment such as VoIP, iPBX, high broadband and 4G/LTE.
Also included is ICT investment by manufacturers;
• Regulation variables relate to policy levers: market openness (which has a significant impact on ICT markets), interconnection rules and fees, and obligations, that is, the obligation to invest in new technology and innovation, and universal service obligations; and
• Economy variables including GNP, exchange rates (to take into account the
impact of the volatility of Euro/USD), Small office/home office (SoHo), small to
medium enterprise (SME) and large enterprise (LE) employment, company
turnover, and company locations. Also included are the customers’ expenditure
(e.g., access fees and licenses);
Dummy variables are used to represent qualitative effects such as market openness. For instance, market openness is scaled on a 5-point Likert scale ranging from low to complete openness.
Obviously, data series are of limited length for many new and innovative
services such as VoIP and 4G/LTE for mobile.
8.5 Data Set and Sources
The forecasts were estimated based on quarterly data from Q1-2000 to Q4-2010.
Data are obtained from a variety of sources: quarterly reports from IT and telecommunication providers (IBM, Atos, Capgemini, BT, Orange, Telefónica, T-Systems, AT&T and Verizon), and consultancies that release detailed databases on many ICT services (Data Monitor, Forrester Research, Gartner, IDATE, Ovum, Markess International, IDC, and Jupiter Research).
Most sources report annual data, while this modeling is based on quarterly data; the statistical approach therefore relies on the Chow and Lin (1971) procedure for quarterly interpolation of annual data, whose performance is assessed in Pavía Miralles et al. (2003).
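As a hedged illustration of the Chow–Lin idea, rather than the exact routine used for the chapter, the following sketch distributes annual totals across quarters using a related quarterly indicator, holding the AR(1) parameter fixed; the array names are hypothetical.

import numpy as np

def chow_lin(y_annual, x_quarterly, rho=0.75):
    """Chow and Lin (1971) disaggregation of annual totals to quarters,
    with the AR(1) parameter rho held fixed for simplicity."""
    n_a, n_q = len(y_annual), len(x_quarterly)
    assert n_q == 4 * n_a, "need four quarters per year"
    C = np.kron(np.eye(n_a), np.ones((1, 4)))         # annual aggregation
    X = np.column_stack([np.ones(n_q), x_quarterly])  # quarterly regressors
    V = rho ** np.abs(np.subtract.outer(np.arange(n_q), np.arange(n_q)))
    Va_inv = np.linalg.inv(C @ V @ C.T)               # annualized covariance
    Xa = C @ X
    beta = np.linalg.solve(Xa.T @ Va_inv @ Xa, Xa.T @ Va_inv @ y_annual)
    # GLS fit plus a smooth distribution of the annual residuals
    return X @ beta + V @ C.T @ Va_inv @ (y_annual - Xa @ beta)

In practice rho is usually estimated rather than fixed; the Monte Carlo results of Pavía Miralles et al. (2003) concern precisely how well such interpolations perform.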
Economic data are from the Statistical Office of the European Union (Eurostat) and the French National Statistical Office (INSEE). Both of these sources provide
monthly, quarterly, and annual data by industry and other segments. While
information on the economics and demographics is abundant, regulatory data is
problematic. There is a lack of information concerning regulatory variables such as
market openness, local loop competition, and interconnection.5 Dummy variables
are used to represent the qualitative effects of market openness and the IP transformation of SMEs and LEs. For example, the market openness variable is a
categorical variable with discrete values from 1 (closed market) through 5 (fully open market). The limited length of the data series for new products such as
3G+, ADSL+, VoIP, iPBX, and security is also an issue. This paucity of data is a
limitation when the impact of new offers and services is integrated into the core
business models.

⁵ In part, because some of this information is confidential.
8.5.1 Model Specification

When forecasting BCS demand and supply in convergent telecommunications and IT markets that are experiencing significant innovation, cross-elastic impacts must be considered. Single-equation estimations are unable to include
interaction among sub-markets that are characterized by new technology and
services for which there is a lack of historical data (e.g., iPBX, mobile data
applications, ultra-high broadband and VoIP) (Rao and Angelov 2005). An
alternative approach is to apply a multi-equation framework to the market (Loomis and Swann 2004). For this system, single-equation ordinary least squares (OLS)
estimations are biased when arguments are endogenous (Fisher 1966; Fernández
1981; Brown 1983; Amemiya 1986). Moreover, OLS parameter estimates in a
structural simultaneous equation system are biased and inconsistent because of
nonzero correlations among the random error term and right-hand side endogenous
variables (Gorobets 2005). Consistent parameter estimates are obtainable from
indirect least squares, instrumental variables, two-stage least squares (2SLS), or
three-stage least squares (3SLS) routines. 3SLS, which is a combination of 2SLS
and seemingly unrelated regression, is employed for this estimation (Alvarez and
Glasgow 1999). However, both 2SLS and 3SLS models are estimated for the
purpose of comparison. The "threesls" routine from the Zelig package is used to obtain parameter estimates (Alimadhi et al. 2007). This approach also allows integration of a multi-output and multi-input industry framework. Two alternative model specifications are considered: Model 1 includes publicly listed revenues to determine the ARPU and price-per-staff-member arguments, while Model 2 contains negotiated prices (which are lower than the listed prices).
Many model specifications integrating several combinations of variables are
run. The two most accurate models are presented. Both models were estimated by
3SLS on quarterly data from Q1-2000 to Q4-2010.
8.5.2 Model 1
8.5.2.1 Supply Equation
The BCS product j supply function is specified as a log–log function of the form:

$$\ln y_{jt}^{s} = \alpha^{s} + \beta^{s}\ln INV_{t-1} + \sum_{j}\gamma_{j}^{s}\ln P_{jt} + \sum_{j}\delta_{j}^{s}\ln y_{jt}^{d} + \varepsilon^{s}\ln NA_{t} + \eta^{s}\ln NC_{t-1} + u_{t} \qquad (8.1)$$

where y^s_jt is product j supply, INV_{t-1} is lagged ICT and technology investment, P_jt is the product j price, y^d_jt is product j demand, NA_t is network accesses, NC_{t-1} is lagged network capacity and u_t is a random error term. γ^s_j is the own-price elasticity of supply, δ^s_j is the elasticity of supply with respect to demand, ε^s is the network access elasticity of supply and η^s is the network capacity elasticity of supply.
A priori, the sign of the own-price coefficient is negative, whereas the cross-price coefficients are expected to be positive. A lagged supply response to ICT and technology investment is also considered.
8.5.2.2 Demand Equation
Similarly, the demand equation for product j is a log–log function of the form:
$$\ln y_{jt}^{d} = \alpha^{d} + \beta^{d}\,COMPNRS_{t} + \sum_{j}\gamma_{j}^{d}\ln P_{jt} + \sum_{j}\delta_{j}^{d}\ln y_{jt}^{s} + \varepsilon^{d}\ln EMP_{t} + \eta^{d}\ln SPEN_{t-1} + v_{t} \qquad (8.2)$$

where y^d_jt is product j demand, COMPNRS_t is competition in network-related service markets, P_jt is the product j price, y^s_jt is the supply of product j, EMP_t is SoHo, SME, and LE employment, SPEN_{t-1} is lagged SoHo, SME, and LE expenditure on ICT products and v_t is a random error term. β^d is the network-related services competition elasticity of demand, γ^d_j is the own-price elasticity of demand, δ^d_j is the elasticity of demand with respect to supply, ε^d is the SoHo, SME, and LE employment elasticity of demand and η^d is the ICT expenditure elasticity of demand. A priori, the own-price coefficients are assumed negative, while the cross-product price parameters are assumed positive.
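As a hedged sketch of how a system like Eqs. (8.1) and (8.2) could be estimated by 3SLS, the fragment below uses the Python linearmodels package (the chapter itself used the "threesls" routine of the Zelig R package). The data file, all column names and the instrument assignments are hypothetical, and prices are treated as exogenous purely to keep the example short.

import pandas as pd
from linearmodels.system import IV3SLS

df = pd.read_csv("bcs_quarterly.csv")  # hypothetical file of quarterly logs

formulas = {
    # Supply (8.1): endogenous demand instrumented by demand-side drivers.
    "supply": "ln_ys ~ 1 + ln_inv_lag + ln_p + ln_na + ln_nc_lag"
              " + [ln_yd ~ ln_emp + ln_spen_lag]",
    # Demand (8.2): endogenous supply instrumented by supply-side drivers.
    "demand": "ln_yd ~ 1 + comp_nrs + ln_p + ln_emp + ln_spen_lag"
              " + [ln_ys ~ ln_inv_lag + ln_na + ln_nc_lag]",
}
results = IV3SLS.from_formula(formulas, data=df).fit()
print(results.summary)

The exclusion restrictions mirror the text: variables that enter only the demand equation identify the supply equation, and vice versa.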
8.5.3 Model 2
8.5.3.1 Supply Equation
In Model 2, the BCS product j supply and demand functions are similar to those of Model 1; the major difference is the use of the bundled price as an explanatory variable. The supply equation is specified as a log–log function of the form:

$$\ln y_{jt}^{s} = \alpha^{s} + \beta^{s}\ln INV_{t-1} + \sum_{j}\gamma_{j}^{s}\ln BUNCAT_{jt} + \sum_{j}\delta_{j}^{s}\ln y_{jt}^{d} + \varepsilon^{s}\ln NC_{t-1} + u_{t} \qquad (8.3)$$

where y^s_jt is product supply, INV_{t-1} is lagged ICT and technology investment, BUNCAT_jt is the bundled (and catalogued) product price, y^d_jt is product j demand, NC_{t-1} is lagged network capacity and u_t is a random error term. β^s is the ICT and technology investment elasticity of supply, γ^s_j is the own-price elasticity of supply, δ^s_j is the elasticity of supply with respect to own-product demand and ε^s is the network capacity elasticity of supply. A priori, the bundled (and catalogued) product price parameter is assumed negative, while all remaining price parameter values are assumed positive. The lagged responses to ICT and technology investment and to network capacity change represent the ability to supply broadband.
8.5.3.2 Demand Equation
Finally, the demand for the product is:
$$\ln y_{jt}^{d} = \alpha^{d} + \beta^{d}\,COMPNRS_{t} + \sum_{j}\gamma_{j}^{d}\ln BUNCAT_{jt} + \sum_{j}\delta_{j}^{d}\ln y_{jt}^{s} + \varepsilon^{d}\ln EMP_{t} + \zeta^{d}\ln y_{j,t-1}^{d} + v_{t} \qquad (8.4)$$

where the terms are defined as above, in Eq. (8.2). A priori, the bundled (and catalogued) product price parameter is assumed negative, and all cross-product price parameters are assumed positive. A lagged demand response is also specified.
8.5.4 Estimation Results
8.5.4.1 Supply Equation in Model 1
As shown in Table 8.4, the estimated INV parameter is inelastic, except for "Mobile Data" services. While investments in technology commonly affect both supply and demand, the low elasticity reflects only the immediate effect; that is, there may be a more complex temporal pattern ignored by this specification. The estimated product price parameters are correctly signed, except for "Communication Services." "Data," "IT Infrastructure," and "Desktop Management" services are relatively elastic in supply. The estimated network capacity parameter is inelastic for all products, a result that follows from the specified lag structure. The estimated demand parameters exceed unity for "Data," "Mobile Data" and "IT Infrastructure" services, which means the impact of demand on supply is important. However, a longer lag might provide a more elastic estimate.
8.5.4.2 Supply Equation in Model 2
Also shown in Table 8.4 are the supply estimates of Model 2. All estimated INV parameters are inelastic, although ICT investment substantially impacts supply; again, the results suggest that investigating a more complex geometric lag structure may prove useful. In this model, the negotiated price (BUNCAT) replaces ARPU. All price parameter estimates have their expected signs, and most products are more supply-elastic than in the Model 1 specification.
8.5.4.3 Demand Equation in Model 1
The demand estimates are shown in Table 8.5. The Network Related Services (NRS) competition parameters are less than unity. This positive impact suggests that competitive forces stimulate demand. However, as the products are standardized, they are distinguished only by delivery, price and the ability to manage complex IT and telecommunications projects. Accordingly, the number of suppliers appears less important than anticipated. The reported price parameters are correctly signed; demand is, however, elastic. The estimated supply parameter value is less than unity for all products except "Voice," "Mobile Data," and "Communications" services, for which the estimate is elastic. This suggests integrating delayed supply in the demand equation.
Table 8.4 Estimated supply parameters (Q1-2000 to Q4-2010); estimates significant at the 5 % level are printed in bold in the original. For each of the eight services (Data, Voice, Mobile Data, Mobile Voice, Communication, Applicative Solutions, Desktop Management, IT Infrastructure), Model 1 reports the constant, lagged investment in BCS, product price, demand, network access and lagged network capacity parameters, together with the adjusted R² and Durbin–Watson statistics; Model 2 reports the constant, lagged investment in BCS, bundle price, demand and lagged network capacity parameters, with the same fit statistics.
Table 8.5 Estimated demand parameters (Q1-2000 to Q4-2010); estimates significant at the 5 % level are printed in bold in the original. For the same eight services, Model 1 reports the constant, investment in BCS, product price, supply, enterprise employment and spending-in-BCS parameters, together with the adjusted R² and Durbin–Watson statistics; Model 2 reports the constant, investment in BCS, bundle price, supply, enterprise employment and lagged demand parameters, with the same fit statistics.
Fig. 8.2 Forecast of business communication services in France, 2011–2013 (€ million), based on Model 1 (actual versus estimated values)
8.5.4.4 Demand Equation in Model 2
Also shown in Table 8.5 are the demand estimates of Model 2. As with Model 1, the NRS competition parameters are less than unity, in line with the conclusions above. The estimated parameters for negotiated prices are appropriately signed. Compared with Model 1, demand is uniformly more price-elastic (with most reported price estimates elastic). Further, the enterprise (SoHo, SME and LE) employment parameter is positive and inelastic. This result accords with the interpretation that an increase in employment more than matches growth in customer need.
8.5.5 Illustrative Forecasts
Figure 8.2 shows the forecast for BCS in the French market over the 2011–2013 period, based on Model 1, the more optimistic specification. The average growth rate per year over this period (+2.9 %) is lower than over the 2005–2010 period (+4.4 %). This is essentially due to (a) the continuing decrease in voice and data spending (-8 to -10 %), (b) the uncertainty about spending on some new solutions (UC&Cs, cloud computing) over the 2011–2013 period, and (c) the weak macroeconomic outlook in France (GDP: +1.2 % per year) and in the euro area.⁶ However, it seems that the growth of GDP and of ICT spending recorded in 2012 and early 2013 in France is in fact lower than that assumed in this model.

⁶ Forecast based on Model 1.
The forecast based on Model 1 shows a 5.3 % variation from commercially available projections of French BCS demand: the estimated annual demand based on our model reaches €41.6 billion in 2013, versus €39.5 billion from commercial sources. Naturally, individual product forecast comparisons record greater discrepancies, with margins of 3–18 %.
8.6 Concluding Remarks
This study estimates the interaction of the telecommunications and IT sectors, that is, the expansion of IT and telecommunications services into each other's territory. It notes the important role of innovation in terms of devices, processes, and applications. Obviously, analyzing and forecasting the demand for BCS is key for IT and telecom providers: in aggressive, competitive markets it is critical to understand the key drivers of demand and their evolution.
Correct analysis and forecasting of demand will enable IT and telecom providers to reduce financial risks and optimize their investments in resources (networks, new products, human resources, skills, customer experience, …). They will also be able to anticipate and meet market demand and customers' expectations, and to determine the service level to be provided to their clients.
Although recent technology waves shape the BCS markets, enabling enterprise IP transformations and resulting in price competition (especially over negotiated prices), innovation and investment in technology remain important variables in explaining market growth. The modeling addressed the deployment of new technologies (4G+, iPBX) for which observations are available. The estimation suggests that the model specifications are robust and valid under the simultaneous-equation modeling approach. Future refinements are being considered. In particular, data are being gathered on variables related to new BCS, especially security, hosting, and professional services. Further, the analysis intends to focus on other new services such as UC&Cs and cloud computing. For instance, intuitively, investment in technology and innovation and prices should affect demand for more than one quarter.
Acknowledgments We would like to warmly thank Professor James Alleman for reviewing the
manuscript and for his helpful comments and suggestions. We also are grateful to Professor
Robert Fildes for his comments and remarks.
References
Alimadhi F, Lu Y, Villalon E (2007) Seemingly unrelated regression. In: Imai K, King G, Lau O (eds) Zelig: everyone's statistical software. Available at: gking.harvard.edu/zelig
Alvarez R, Glasgow G (1999) Two-stage estimation of non-recursive choice models. Political Analysis 8:147–165
Amemiya T (1986) Advanced econometrics. TJ Press Ltd, Oxford
Brown B (1983) The identification problem in systems nonlinear in the variables. Econometrica
51:175–196
Chow G, Lin A (1971) Best linear unbiased distribution and extrapolation of economic time
series by related series. Rev Econ Stat 53:372–375
Fernández R (1981) Methodological note on the estimation of time series. Rev Econ Stat
63:471–478
Fisher F (1966) The identification problem in econometrics. McGraw-Hill, New York
Gorobets A (2005) The error of prediction for a simultaneous equation model. Econ Bull 17:1–7
Hamoudia M, Scaglione M (2007) An econometric model for forecasting the ICT business
market using the simultaneous multi-equation modeling. Presented at the international
telecommunications society Africa-Asia-Australasia regional conference, Perth, Australia
Loomis D, Swann C (2004) Telecommunications demand forecasting with intermodal
competition: a multi-equation modeling approach. Telektronikk 100:180–184
Pavía Miralles J, Vila Lladoas L, Escuder Vallés R (2003) On the performance of the Chow-Lin
procedure for quarterly interpolation of annual data: some monte-carlo analyses. Span Econ
Rev 5:291–305
Rao B, Angelov B (2005) Bandwidth intensive applications: demand trends, usage forecasts, and
comparative costs. In: NSF-ITR technical report on fast file transfers across optical circuitswitched networks, Grant No. ANI-0
Chapter 9
Residential Demand for Wireless
Telephony
Donald J. Kridel
9.1 Introduction
Headlined by Verizon Communications' agreement to pay $130 billion to buy Vodafone Group out of its U.S. cellular business, and by the earlier attempted purchase of T-Mobile by AT&T (itself a merger of SBC, BellSouth, and the "old" AT&T Wireless), the wireless telephone industry continues to attract media attention and to grow at significant rates.
However, despite this impressive presence and growth, empirical evidence regarding the demand for wireless services is still relatively uncommon. In this chapter, we provide estimates of the price (and other) elasticities of residential demand for wireless telephony, using individual household observations. Using a large data set with over 20,000 observations, we estimate a discrete-choice model of the demand for wireless telephony. Preliminary elasticity estimates indicate that residential wireless demand is price-inelastic.
9.2 Data Analysis
The discrete-choice model was estimated using data from a proprietary survey of telecommunications services conducted by CopperKey Technologies late in the second quarter of 2001. The data set analyzed includes approximately 20,000 responders from the superset of 34,000 panel members to whom the survey instrument was mailed. Demographic data are available for all 34,000 households, while the detailed telephony questions (on which this analysis is based) are available only for the responders.
Portions of this chapter were presented at the 2002 International Forecasting Conference, San Francisco, CA, June 2002. Originally the paper was intended to be part of the Taylor–Rappoport–Kridel series (1997, 1999a, 1999b, 2001, 2002a, 2002b, 2003, 2004) on telecom demand. For a variety of legal reasons related to the demise of CopperKey, access to the data was delayed significantly.
Fig. 9.1 Wireless subscribers (millions), 1985–2002
In addition to the survey response data, pricing and demographic data were matched to the survey respondents. In particular, poverty rates and income distribution information (from US Census data updated by CACI) were matched to the households (via Zip Code or Zip+4, depending on the variable in question). In addition, price information was collected from various web-based providers and resellers. For each geographical area involved, wireless access prices, usage prices, free minutes, and wire-line access prices were collected.1

As is evident in Fig. 9.1, since its emergence in 1985 the wireless communications sector has experienced incredibly strong year-over-year growth (Fig. 9.1 shows growth through the year after the survey was collected; for more recent data, see the final section of the chapter).
Figures 9.2, 9.3, 9.4 and 9.5 (using data from the survey) relate wireless penetration rates to various demographic factors of interest. In addition, penetration rates for other "access"-type services, such as paging, local phone, internet, and cable, are provided as a basis for comparison. Figure 9.2 details penetration rates by income category. While all services follow the same basic pattern, it is interesting to note that wireless and internet penetration appear to be more income-sensitive.

Penetration by age group is displayed in Fig. 9.3. While local and cable services do not exhibit diminishing trends with age, wireless, paging, and internet services appear to have a "slimmer" right tail, indicating relatively lower penetration rates for older (over-50) respondents.

1 For each geography, carrier information for at least three (and up to five) carriers was collected. For each carrier, the lowest-price (or entry-level) plan and a mid-level plan (a plan that included 200–300 free minutes) were summarized. For wire-line rates, tariff information for the local exchange companies (LECs) was utilized.
Fig. 9.2 Telecom services by income (penetration of pager, cable TV, wireless, local, and internet service across income categories from <$20K to >$85K)
Fig. 9.3 Telecom services by age (penetration of pager, internet, wireless, cable TV, and local service across age groups from under 30 to over 60)
Figure 9.4 illustrates penetration by occupation for male members of a household. (The pattern for female members looks similar, although with lower wireless rates among the crafts and farming occupations.)
Market-size differences in penetration are highlighted in Fig. 9.5. Wireless, paging, cable, and internet services have higher penetration rates in more urbanized markets, while local service does not seem to be influenced by market size. The differences are relatively small, reflecting improved geographic availability (coverage) of those services.
Fig. 9.4 Telecom services by occupation, male (occupational categories range from managerial/professional through service, farming, crafts, and operator/laborer to retired/student/military)
Fig. 9.5 Telecom services by market size (markets from <100K to >2MM population)
Figure 9.6 displays the means and standard deviations for the communications
services summarized above. Local telephony has the lowest variance (indicating
little difference in penetration rates across geographies), while the other services
have comparable variances.
Fig. 9.6 Mean and standard deviation of penetration by service (wireless, paging, internet, cable TV, local)
9.3 A Logit Model of Wireless Demand
We now turn to an econometric analysis of the demand for wireless service. The
model employed takes its cue from the modeling framework that has been widely
used in various choice situations.2 Wireless service, as it will be used in this
chapter, will refer to the demand for wireless phone service by a household.
Beginning with the usual utility maximization assumptions, the model is given
by:
$$\text{Prob}(\text{wireless access} \mid x) = P(\varepsilon_N - \varepsilon_Y < V_Y - V_N), \tag{9.1}$$

where Prob(wireless access | x) is the probability, conditional on x, that a household subscribes to wireless service. The $V_i$ denote the observable utilities of having wireless service (Y) or not (N). These utilities depend on the vector x, which contains attributes of the choice (price and free minutes) and attributes of the decision-maker (income and socio-demographic variables). Specifying the $V_i$ as linear functions of the variables in x, and the $\varepsilon_i$ as independent and identically distributed (IID) Type-I extreme-value random variables, yields a standard logit model. With these assumptions, the model can accordingly be written as:
$$\text{Prob}(\text{wireless access} \mid x) = \frac{1}{1 + \exp(-x\beta)} \tag{9.2}$$

2 Had usage data been available, the standard (at least in telecom demand analysis) method of employing the two-step consumer surplus approach could have been utilized (Taylor 1994). Here, since no usage information is available, the standard indirect utility maximization approach is used. See, for example, Train (1986) and Amemiya (1985).
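To make the estimation concrete, the following is a minimal sketch of fitting the logit in Eq. (9.2)—not the author's code, since the survey data are proprietary—using simulated data with hypothetical regressors and assumed coefficients:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20000

# Hypothetical regressors standing in for the survey variables
price = rng.normal(25, 5, n)          # price of the basic wireless package
minutes = rng.normal(110, 30, n)      # free minutes in the basic package
log_income = rng.normal(3.7, 0.8, n)  # log of household income

X = sm.add_constant(np.column_stack([price, minutes, log_income]))
beta = np.array([-0.6, -0.033, 0.002, 0.671])  # assumed "true" coefficients

# Simulate subscription decisions from the logit probability in Eq. (9.2)
p = 1.0 / (1.0 + np.exp(-(X @ beta)))
y = rng.binomial(1, p)

fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)  # approximately recovers the assumed coefficients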
9.4 Data Sources and Definitions of Variables
9.4.1 Data Sources
As previously noted, the model was estimated using data from the proprietary
survey of telecommunications services conducted by CopperKey Technologies in
the second quarter of 2001. The data set contained over 20,000 responders from
the superset of 34,000 surveyed households. As noted above, the cross-market
pricing and call-plan information were collected and cleansed. Further, some
variables (from the US Census, e.g., poverty rates) were matched to the survey
data.
The availability of demographic data for both responders (20,000) and nonresponders (14,000) provided a basis for assessing the representativeness of the final sample. The following figures compare the demographic attributes of responding and nonresponding households. Figure 9.7 compares the income distributions. As can be seen, the poor (<$15,000 and $15,000–$25,000) respond more frequently (possibly as a result of the incentive to participate in the panel). There is a slight under-response in the middle incomes (between $35,000 and $125,000). All in all, the sample seems relatively representative with respect to income.
Larger differences in response can be seen in Fig. 9.8. The young generally under-respond, while older households are more likely to respond (especially households over 60). The importance of these differing response rates is captured formally by modeling response and adding an additional independent variable, namely the hazard rate from the familiar two-step Heckman procedure, to the final wireless model.3
Fig. 9.7 Income distribution, respondent versus nonrespondent (income bins from $1K–$15K to $250K+)
Fig. 9.8 Age distribution, respondent versus nonrespondent (age groups from under 30 to 60+, plus male/female under and over 35)
The wireless service variable is a binary variable equal to 1 if the respondent indicated having a wireless phone and 0 otherwise. The wireless price variables used in the analysis were derived from the pricing plans reported by the major wireless service providers in various US markets; wire-line prices were collected from LEC tariffs.
The following household economic and socio-demographic variables are also included in the analysis as predictors:

• Income
• Age
• Whether the household head is self-employed
• Whether the household head runs a business from home
• Whether the household owns its own home
• Whether the household head attended college
• Occupation
• Ethnic origin
• Household size

Some of the attributes of communities and wireless services used in the model include:

• Size of a community
• Price of the basic wireless plan available in the community
No model specification yielded a statistically significant estimate of the wire-line price (many specifications yielded estimates with the incorrect sign), indicating that, as of mid-2001, there was no evidence of substitution of wireless access for wire-line access.
3 See Heckman (1976). Since demographic variables were available for the entire "mailed-to" population, we first built a logit model for response/non-response. From this model, estimated on 34,000 observations, an inverse Mills ratio (hazard rate) variable was calculated and added to the wireless model.
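A stylized sketch of the two-step selection correction described in footnote 3 follows. It is an assumption-laden illustration with simulated data and hypothetical coefficients; note that the chapter built a logit response model, whereas the classic Heckman first step below uses a probit:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 34000  # the full "mailed-to" population
age = rng.normal(0, 1, n)
income = rng.normal(0, 1, n)

# Step 1: model response/non-response on demographics known for everyone
responded = (0.3 + 0.5 * age - 0.2 * income + rng.normal(0, 1, n)) > 0
Z = sm.add_constant(np.column_stack([age, income]))
xb = Z @ sm.Probit(responded.astype(int), Z).fit(disp=0).params

# Inverse Mills ratio (hazard rate), to be evaluated for the responders
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: add the hazard rate as a regressor in the wireless model,
# estimated on responders only (hypothetical wireless equation)
r = responded
wireless = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 0.6 * income[r]))))
X = sm.add_constant(np.column_stack([income[r], imr[r]]))
print(sm.Logit(wireless, X).fit(disp=0).params)
```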
9.5 Estimation Results

The results from estimating the logistic regression model in expression (9.2) are presented in Table 9.1. The columns in this table give the estimated coefficient, the t-statistic, and the sample mean for each of the independent variables. Likelihood ratio tests indicated that the ethnic-origin parameters differ by market size and as such had to be estimated separately.
Table 9.1 Estimation results: wireless access

Variable definition                                          Coefficient   t-statistic   Mean
Constant                                                     -0.607        -1.03         1.00
Price of a basic wireless package                            -0.033        -2.60         25.35
Price of a basic wireless package—large metro area           -0.019        -1.87         11.22
# of free minutes included in the basic wireless package      0.002         1.74         111.90
Log of income                                                 0.671         24.91        3.72
Log of age                                                   -0.546        -4.56         3.89
Household size                                                0.032         1.74         2.66
Home ownership indicator                                      0.257         5.55         0.78
Marital status: married                                       0.138         2.81         0.68
Home business indicator                                       0.332         5.37         0.12
Self-employed indicator                                       0.259         5.26         0.20
Telecommute indicator                                         0.497         6.90         0.08
Education: high school graduate                              -0.044        -0.97         0.20
Education: college graduate                                   0.067         1.53         0.22
Household composition: male alone                            -0.503        -6.18         0.06
Household composition: female alone                          -0.037        -0.51         0.10
Ethnic origin: African-American living in large metro area    0.557         5.82         0.04
Ethnic origin: African-American living in medium-size
  metro area                                                  0.466         3.79         0.02
Ethnic origin: African-American living in small metro area    0.427         2.71         0.01
Ethnic origin: African-American living in rural area          0.482         3.18         0.01
Ethnic origin: Hispanic                                      -0.020        -0.23         0.05
Occupation: managerial, executive                             0.300         6.71         0.29
Occupation: sales                                             0.297         4.96         0.10
Occupation: crafts                                            0.257         3.50         0.06
Large metro area indicator                                    0.745         2.88         0.44
Medium-size metro area indicator                              0.293         5.15         0.20
Small metro area indicator                                    0.087         1.47         0.15
% in poverty indicator                                       -0.652        -2.72         0.11
Selectivity adjustment parameter                             -0.500        -3.82         -1.06
Fig. 9.9 Estimated price elasticity by income bin (point values range from 0.63 in the lowest-income bin to 0.11 in the highest)
Fig. 9.10 Income elasticity by income bin (overall 0.42; declining from 0.54 for <$10K to 0.28 for >$30K)
Fig. 9.11 Monthly personal consumption expenditures for telephone service per household, 1995–2009 (landline, cellular, internet access)
Table 9.2 Recent measures of wireless industry performance

Metric                           June 1996   June 2001   June 2006   June 2011
Wireless subscribers (M)         38.2        118.4       219.6       322.8
Wireless penetration (%)         14.0        40.9        72.5        102.4
Wireless-only households (%)     N/A         N/A         10.5        31.6
Wireless total ($B)              21.5        58.7        118.3       164.6
Wireless data ($B)               N/A         0.3         11.3        55.4
Annualized minutes of use (B)    44.4        344.9       1,680.0     2,250.0
Annualized text messages (B)     N/A         N/A         113.5       2,120.0
Cell sites                       24,802      114,059     197,576     256,920
The price-response parameter with respect to the price of a basic wireless package (available in the community) was differentiated by size of market. The overall price elasticity of subscription (i.e., access) to wireless service was found to be about -0.33, which is considerably more elastic than the -0.04 estimated for local wire-line telephony.4 This suggests that even taking into account factors such as convenience, mobility, and the relative insecurity of wireless service, wireless subscribers are still more responsive to price than are wire-line subscribers. Finally, as is to be expected, wireless-subscription demand is more price-elastic for households with lower income (Fig. 9.9).

The overall income elasticity of demand for wireless subscription was found to be around 0.42. As shown in Fig. 9.10, the income elasticity falls as income rises, ranging from about 0.55 for low-income households (<$10K) to about 0.30 for relatively high-income households (>$30K).
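As a purely illustrative sketch of how such elasticities follow from the logit estimates: for a logit model, the own-price point elasticity of the subscription probability is β_price · price · (1 − P). The snippet below plugs in the basic-package price coefficient and its sample mean from Table 9.1 together with an assumed subscription probability; because the reported -0.33 averages over households and market-size interactions, a single point evaluation like this one need not reproduce it:

```python
# Own-price point elasticity of a logit probability: beta * price * (1 - P)
beta_price = -0.033  # coefficient on the basic wireless package price (Table 9.1)
mean_price = 25.35   # sample mean of that price (Table 9.1)
prob = 0.40          # assumed subscription probability (illustrative only)

elasticity = beta_price * mean_price * (1 - prob)
print(f"point elasticity at these values: {elasticity:.2f}")
```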
9.6 Conclusions

The price elasticity of demand for wireless access is inelastic, with an estimated value of about -0.33. Further, a relatively low income elasticity of demand was found. Perhaps most surprisingly, no empirical evidence of wireless-for-wire-line substitution was found. This finding is almost certainly a result of the study period.

In Fig. 9.11 (recreated from Federal Communications Commission (FCC) data), consumption expenditures per household for wireless, wire-line, and internet access are displayed. As clearly demonstrated, the relative importance of wireless and wire-line has shifted dramatically since the period when the survey utilized in this study was undertaken. In 2001 (the year of the survey), we observed the first (modest) decline in wire-line expenditures; the decline has continued over the rest of the decade. At the same time, wireless has continued to grow at a dramatic pace. Indeed, by 2007, wire-line expenditures were smaller than wireless expenditures.
4
See Taylor and Kridel (1990).
Table 9.2 (from CTIA 2009) further demonstrates the shifts in the telecommunications industry. It is now estimated that there are more wireless subscriptions than living people in the United States. (At the time of the survey, wireless penetration was approximately 40 %.) Minutes of use have grown dramatically. The shifts in usage toward texting and data and away from voice are even more dramatic: texting volume is now as large as voice volume, while data have grown to account for about one-third of total wireless revenues.
These data suggest two important next steps in a comprehensive wireless
research agenda. These studies will require additional data (and/or alternative data
sources).
• Models that account for substitution, for example, using wireless services to substitute for wire-line services. Figure 9.11 (along with Table 9.2) indicates that this phenomenon is becoming relatively commonplace. Ward and Woroch (2009) provide estimates of usage substitution.
• The demand for usage needs to be carefully considered. This will likely require collection of actual bills, which unfortunately is a nontrivial undertaking. (Bill harvesting, once performed by PNR, is generally no longer available.) In principle, the usage component should include voice, data, and text.
References
Amemiya T (1985) Advanced econometrics. Harvard University Press, Cambridge, MA
CTIA (2009) The wireless association, Washington, DC, USA www.ctia.org
Heckman JJ (1976) The common structure of statistical models of truncation, sample selection
and limited dependent variables and a simple estimator for such models. Ann Econ Soc
Measur 5:475–492
Kridel DJ, Rappoport PR, Taylor LD (1999a) An econometric analysis of internet access. In:
Loomis DG, Taylor LD (eds) The future of the telecommunications industry: forecasting and
demand analysis, Kluwer Academic Press, New York, pp 21–42
Kridel DJ, Rappoport PR, Taylor LD (1999b) IntraLATA long-distance demand: carrier choice, usage demand and price elasticities. Int J Forecast 18:545–559
Kridel DJ, Rappoport PR, Taylor LD (2001) Competition in IntraLATA long-distance: carrier
choice models estimated from residential telephone bills. Inf Econ Policy 13:267–282
Kridel DJ, Potapov V, Crowell D (2002a) Small geography projections, presented at the 20th
Annual ICFC conference, San Francisco, CA, June 25–28 2002
Kridel DJ, Rappoport PR, Taylor LD (2002b) The demand for high-speed access to the internet: the case of cable modems. In: Loomis D, Taylor L (eds) Forecasting the internet: understanding the explosive growth of data communications. Kluwer, New York, pp 11–22
Rappoport PR, Taylor LD, Kridel DJ, Serad W (1997) The demand for internet and on-line
access. In: Bohlin E, Levin SL (eds) Telecommunications transformation: technology,
strategy, and policy, IOS Press, Amsterdam
Rappoport PR, Kridel DJ, Taylor LD, Alleman JH, Duffy-Deno KT (2003) Residential demand
for access to the internet. In: Madden G (ed) Emerging telecommunications networks: the
international handbook of telecommunications economics, vol II, Edward Elgar, USA
pp 55–72
Rappoport PN, Kridel DJ, Taylor LD (2004) The demand for broadband: access, content and the
value of time. In: Crandall R, Alleman J (eds) Broadband: should we regulate high speed
internet access?, AEI-Brookings, pp 62–87
Taylor LD (1994) Telecommunications demand in theory and practice. Kluwer Academic
Publishers, New York
Taylor LD, Kridel DJ (1990) Residential demand for access to the telephone network. In: de
Fontenay A, Sibley D, Shugard M (eds) Telecommunications demand modeling: an integrated
view, North-Holland, pp 105–118
Train KE (1986) Qualitative choice analysis. MIT Press, Cambridge MA
Ward M, Woroch G (2009) The effect of prices on fixed and mobile telephone penetration: using price subsidies as natural experiments. http://businessinnovation.berkeley.edu/Mobile_Impact/Ward_Woroch_Fixed_Mobile_Penetration.pdf
Part III
Empirical Applications: Other Areas
Chapter 10
Pricing and Maximizing Profits
Within Corporations
Daniel S. Levy and Timothy J. Tardiff
This chapter identifies some of the issues encountered in estimating demand
models for corporate clients and then uses related results to suggest pricing
strategies that might be more profitable.
The first section provides an illustration of how Professor Taylor’s findings and
insights can be applied in business settings. The second section discusses—based
on pricing decisions within certain businesses—the uneven trend toward the
application of demand, cost, and optimization approaches. The next section briefly
notes the econometric and other technical challenges that confront companies that
are attempting to optimize their prices. The subsequent section explores a number
of these econometric challenges through the use of stylized scenarios. The final
section concludes the chapter.
10.1 Incorporating Professor Taylor’s Insights:
Inside the Corporation
To set the stage for the discussion of the issues encountered in estimating demand models for corporate clients, the experience one of us (Tardiff) had in collaborating with Professor Taylor shortly after his update to Telecommunications Demand was published is informative.1
We have benefited from James Alleman's editorial suggestions and Megan Westrum's superb programming of the simulations presented in this chapter.

1 The discussion in this section is based on Tardiff (1999), pp. 97–114.
During the time in which he was finishing the update, Professor Taylor participated in one of the most hotly debated telecommunications demand elasticity issues of the early 1990s: how price-sensitive were short-distance toll calls (then called intraLATA long-distance calls)? The answer to that question would determine the extent to which the California state regulator reduced long-distance prices (and increased other prices, such as basic local service prices) in a "revenue-neutral" fashion.2 One side of the debate proposed that the interstate toll price elasticity of approximately -0.7 be used to determine the revenue-neutral price change. The other side—which Professor Taylor supported—suggested smaller elasticities, reflecting the finding that calls within "communities of interest" should be less price-sensitive. The commission more or less "split the difference" by using an elasticity of -0.5 for the incumbent carriers' retail toll calls and -0.44 for the wholesale service (carrier access) they supplied to long-distance carriers that provided intrastate-interLATA retail calling.3 On the basis of these elasticities and the concomitant expected volume stimulation, the Commission reduced prices for these services on the order of 45–50 %, effective January 1, 1995.
Shortly thereafter, Pacific Bell (now AT&T California) asked Professor Taylor and Tardiff to ascertain whether calling volumes had changed as much as the Commission had believed they would (Tardiff and Taylor 1995). Since the specific timing of the price change was Commission-ordered, it provided an exogenous price change and did not suffer from the endogeneity issues that are encountered when companies themselves establish prices based on supply-side considerations. Accordingly, the changes in volumes subsequent to the price reduction were treated as a quasi-experiment, controlling for (1) the growth in volumes experienced in the years immediately before the price change—a period in which prices had been essentially flat—and (2) whether consumers had fully responded to the price change, for example, whether volumes had reached a steady state with respect to that price change. Based on this analysis, Tardiff and Taylor (1995) concluded that the volume changes were much more consistent with the lower proposed price elasticities than with the Commission's adopted values, let alone the even higher elasticities proposed by other parties.4
Important insights can be gained from addressing the following questions prompted by this experience. First, can observed price changes be considered exogenous (rather than jointly determined with supply-side considerations)? Second, can effects other than the price change be removed from the measures of volume changes attributable to the price change? Third, to the extent that consumer demand and a company's pricing changes are jointly determined,5 can working within a company provide additional information on how that company determines prices; for example, are there "rules of thumb" that can be used to pass through effects such as increased materials costs to product prices?
2 Technically speaking, the rate rebalancing was profit-neutral; that is, to the extent that increased calling also increased calling costs, such "cost onsets" would be included in determining (net) revenue neutrality.
3
During this time period, the incumbents had not met the requirements that would enable them
to provide retail intrastate-interLATA calls.
4 In ordering a later reduction in toll and carrier access prices, the Commission used elasticities quite similar to those that the incumbent carriers had proposed (but the Commission had declined to use) in the earlier proceeding. See Tardiff (1999).
10.2 Transition to Explicit Profit Maximization
A typical underlying assumption in economic analyses is that companies that
survive in the market tend to set prices in order to maximize profits. Of course this
does not mean that every company explicitly estimates the demand and marginal
cost or sets profit-maximizing prices; or even that every company acts as if they do
so. Many successful corporations that make or sell products and services typically
do not explicitly use the methods that economists and econometricians use to study
how business works.
Economists have been careful to say that businesses "act as if" they actually analyze the types of information that economists use when analyzing a business, and "act as if" they make decisions based on the types of maximization methods that economists use when predicting what decisions businesses will make. But today, increasing numbers of firms are moving toward explicit optimization of prices based on estimated demand curves and estimated, or directly calculated, cost curves.
This section (1) provides a high-level description of how cost and demand
information can be used to move toward optimal prices; (2) acknowledges that
there may be compelling reasons why particular firms may not yet (or may never)
explicitly attempt to set profit-maximizing prices; and (3) describes the trend
toward more analytical demand and pricing analysis in other types of companies.
10.2.1 Improving Profitability
The fundamental motivation here is to find prices that have the prospect of improving the profitability (short-run or long-run) of a company's product offerings. If one knew enough about demand, such prices would be produced by the familiar Lerner-like relation:

$$\text{Price} = \frac{\text{Cost}}{1 + \frac{1}{\varepsilon}} \tag{10.1}$$

(where $\varepsilon$ is the company's own-price elasticity).6
5 In the econometric sense that unspecified demand effects (the error terms or residuals in a demand model) come into play in a company's pricing decisions.

6 Lerner-like relations are frequently used by economists analyzing competition and antitrust issues, such as in models that simulate the effects of mergers, allegedly anticompetitive behavior, and the like. See, for example, Froeb et al. (1998), pp. 141–148; Tardiff (2010), pp. 957–972; Zona (2011), pp. 473–494.
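As a worked illustration of Eq. (10.1) (with made-up numbers, not values from the chapter): with a marginal cost of 50 and an own-price elasticity of -2, the implied profit-maximizing price is 50 / (1 - 1/2) = 100. A one-line sketch:

```python
def lerner_price(marginal_cost: float, elasticity: float) -> float:
    """Profit-maximizing price implied by Eq. (10.1); requires elasticity < -1."""
    return marginal_cost / (1 + 1 / elasticity)

print(lerner_price(50.0, -2.0))  # -> 100.0
```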
In many cases, the company may not know enough about the demand for its products to go directly to the pricing exercise. At this point, the question shifts to the following: if the company does not know enough about its demand, how does one collect and analyze the data that will fill in the gaps? Before discussing the possibilities one has to consider, it is useful to review a fundamental econometric challenge involved in using "real-world" price and quantity data to estimate demand models: potential endogeneity between consumer demand and company supply/pricing decisions. A stylized supply/demand system illustrates these issues.7
$$\text{Demand:}\qquad q = a_1 + b_1\,\text{Price} + c_1\,\text{Cross-Price} + d_1 W + e_1 \tag{10.2}$$

$$\text{``Inverse supply'':}\qquad \text{Price} = a_2 + b_2\,q + d_2 Z + e_2 \tag{10.3}$$
In these equations, W and Z denote exogenous variables (which may overlap to
some extent) that affect demand and supply, respectively.
The endogeneity problem arises when the quantity term in the "inverse supply" equation is in equilibrium with demand. In particular, because the error term in the demand equation (e1) is a component of price,8 price and the demand error term are correlated, which in turn would lead to a biased and inconsistent estimate of the price coefficient (b1). There are several possible approaches, depending on the nature of the data and the specifics of how the company establishes prices.
First, in some circumstances, price changes can be established through processes resembling experimental conditions.9 For example, especially in the case of new products, consumers can be presented with prices as part of a structured survey; to the extent that survey responses reasonably approximate marketplace behavior, the resulting price/quantity data would not pose endogeneity issues. A similar approach would be to change the price as part of a real-world experiment, most likely administered to a representative group of consumers.
Second, as discussed in greater detail below, examination of how the company in question has changed prices in the past could support the conclusion that such price-setting was essentially random. In particular, historical price changes could be viewed as exogenous if they were driven not by changes in contemporaneous demand volumes but rather by administrative rules unrelated to the demand curve.10 Such data could be analyzed with standard demand modeling techniques, such as ordinary least squares.
7
In the second equation, the quotation marks around ‘‘inverse supply’’ denote the possibility that
a company’s pricing decisions are not strictly profit maximizing with respect to contemporaneous
demand.
8
Specifically, since q appears in the ‘‘inverse supply’’ equation, b2 e1 is a component of price.
9
This would be analogous to how regulators formerly set the prices that were the subject of the
bulk of the demand findings reviewed in Taylor’s (1980, 1994) seminal books.
Third, examination of previous pricing rules may uncover a systematic price-setting mechanism. For example, if a company sets prices with reference to some measure of cost (not necessarily marginal cost) plus a percentage mark-up, and also changes prices in response to increases in critical cost drivers, such as the price of oil,11 the resulting price/quantity data could be free of the standard endogeneity problem.
Finally, perhaps as a result of learning about demand (e.g., from historical
pricing changes that were effectively random) and using such information to set
more rational (profit enhancing) prices, the resulting data begin to take on some of
the endogeneity problems that require econometric attention. In this situation, the
exercise of discovering better prices, for example, through explicit optimization
along the lines of Eq. (10.1), above, may well produce independent information
that could be used to (1) econometrically identify the structure of the (inverse)
supply equation (Greene 1993, p. 595) and/or (2) specify instrumental variables
that do not suffer from the properties of weak instruments.12
10.2.2 Why Some Companies Do Not Explicitly Optimize
There are many reasons why the powerful tools used to study profit maximization
decisions have not been used by many corporate economists.13
10
In this case, there may still be endogeneity issues with respect to estimating certain cross-price
coefficients, e.g., the prices of other firms with competing products.
11
Such a pricing strategy could reflect the belief that competitors similarly pass through such
price increases.
12 With regard to the possibility of company-specific information providing more effective instruments, one possible avenue of further exploration is cases in which (1) a company's marginal cost is relatively flat with respect to output (possibly locally, within the range of likely observations) and (2) the company is setting prices with reference to marginal cost. In such situations, marginal cost measurements (to the extent they vary due to factors such as changing input prices) may serve as effective instruments, and/or the typical endogeneity problem with price as an explanatory variable in the demand equation may be mitigated.
13 Despite the empirical reality that corporate pricing decisions can depart from textbook profit maximization for many reasons, prominent economists nonetheless make legal and policy recommendations based on seemingly literal adherence to the optimizing model. For example, a recent article by Kaplow (2011) on the detection of price fixing observed the following: "[O]ne would expect firms to have knowledge of their own prices and marginal cost and thus an estimate of price-cost margins. Firms think about which costs are fixed and variable and how joint costs are properly allocated. They know when production is at or near capacity and if marginal cost is rising sharply. When they price discriminate or grant a price concession to a large buyer, they presumably are aware of their costs and their reasons for charging different prices to different customers. If their prices vary across geographic markets, they again have reasons and information on which their reasoning is based. In deciding how much of a cost shift to pass on to consumers or how to respond to demand fluctuations, they are thinking about whether their marginal costs are constant over the relevant output range, what is the elasticity of the demand they face, and possible interactions with competitors. If they have excess capacity, they have thought about using more of it, which probably involves reducing price, and presumably have decided against it, again, for a reason" (pp. 404–405).
10.2.2.1 Structure of the Market Does Not Call for Detailed Economic
Modeling to Approximate Maximum Profits
In some cases, it could be that the companies or products managed do not require
any detailed analysis. This could be because their products are commodities, which
make the analysis of the profit optimization decision of relatively little value:
optimal profits can be achieved by setting prices to match the market, driving costs
down as far as possible, and running the operation with every possible efficiency.
Of course this process of running the business efficiently might benefit from
methods to help streamline operations and attenuate the effects of turbulence, such
as supply shocks and related cost variations. But there may be settings or industries
where even these variations and improvements for operational improvement are
minimal. These are all hard tasks, requiring skilled management, but in this setting, economic and econometric models are not of great value to corporate managers for maximizing profits, even if economists are developing economic theories
and performing econometric analyses that describe the behavior of such markets.
10.2.2.2 Detailed Data Needed to Explicitly Maximize Profits Are Not Available

In other cases, the detailed cost and demand data needed to perform profit-maximization analyses are not available, or at least not easily and/or reliably available on a frequent enough basis.

Even today, there are many large corporations whose data are not in an electronically accessible form, captured with adequate frequency on a consistent basis. For example, weekly data for specific sales, along with the cost drivers associated with marginal costs, may be needed to analyze demand and cost and to improve profits.
The time and cost of collecting the data in the format required for this type of analysis may deter companies from performing these standard economic and statistical analyses. But in fact the first step need not be inhibited by the initial lack of data: the data required for an initial look at profit-maximization topics can generally be captured efficiently and used quickly to produce powerful insights.
Many companies have installed extensive data warehouses or enterprise resource planning (ERP) systems, but even these systems often are not structured in a way that provides easy access to cost or price data categorized appropriately for developing strategies to increase profits.
Improving profitability must be based not on accounting costs, with which businesses are more likely to be familiar, but on marginal costs: the additional costs associated with changing production by one unit (or a relatively small demand increment). In contrast, accounting costs often include allocations of costs that are fixed over the range of product volumes one wants to optimize. For example, if a company wanted to maximize short-term profits, say by not considering the wear and tear that additional units of production would cause, it would exclude these costs from the analysis of what price and quantity result in the largest (short-term) profits.14 If, on the other hand, the company wanted to maximize long-run profits, it would take into account the wear that an additional unit of production causes and factor that into the marginal cost of the additional unit, even though the cash for that cost may not actually be paid out until sometime in the future.15
10.2.2.3 Managers May Have the Knowledge and Ability to Maximize Profits Without Explicit Modeling
In some cases, managers have extensive experience with the products, customer base, geography, and the company's cost structure. If these broad supply and demand conditions have been stable enough for long enough, the individuals making pricing decisions may have an accurate idea of how a change in price will change the quantity sold and how a given quantity will alter marginal costs. With this knowledge—whether the pricing manager obtains it from an analytical study of the data or from a long history of observing the process—optimal prices and maximum profits can be approximated. Managers are more likely to have this constellation of cost and consumer-demand knowledge when the products they sell and the competitors they face are few in number, and where consumers and costs
of production do not change often or greatly.16 In more dynamic markets, it is harder for managers to maintain accurate perceptions of what can be myriad changing product features, competitors' offerings, customers' demands by geography and sales channel, customer demographics, and input costs.
14 Determining whether certain types of cost are included in a particular marginal cost estimate can be illustrated by the costs associated with driving a car an additional mile. There is gas, which is a short-run marginal cost. Oil might be considered a medium-term marginal cost. Wear and tear on the car engine, transmission, etc., also happens with each mile, so it too has a marginal cost, but one that is only paid for far into the future, when a new car has to be purchased.
15 Indeed, the cash expenditure may occur months or even years into the future, when repairs resulting from operating production facilities at higher levels of output in the earlier period are made.
16
That is, whether corporate decisions comport with the textbook description in footnote 13,
above likely varies by industry and by company within particular industries.
10.2.2.4 Advances in Computing Power Needed for Large-Scale Elasticity Estimation and Multiproduct Optimization

Another reason these scientific analytical methods have not been used is that the computing power needed to perform such analysis on a regular basis, for a large array of a manufacturer's products, was expensive and difficult to obtain. Twenty-five years ago, scientific measurements of the sensitivity of product sales to a change in prices for an entire portfolio of products could take many hours, even days, to run on a major university research mainframe computer. Today, the same analysis would require only a computer or server that could fit under a desk and could be completed in a matter of minutes.
10.2.3 Movement Toward Explicit Modeling of Optimal
Prices Based on Estimated Price Elasticities
As data collection and computing power advance, and the benefit of rapidly
adjusting prices increases, a growing number of companies are explicitly developing demand and marginal cost models for the products they offer. Obviously, the
ability to accurately model the company’s supply and demand curves is a significant advantage in maximizing profits. Those firms that do not have this ability
have a greater chance of being weeded out of the competitive field.17
These improvements in data availability and computing power have been
accompanied by advances in the analytical methods used to measure consumer
sensitivity to changes in prices and to optimize prices. These technological
changes along with the growing familiarity among corporate managers with these
analytical techniques have produced an expanding use of these powerful scientific
methods—in some cases, on a daily basis—in manufacturing, wholesaling,
retailing, and service companies.
In addition, the ranks of CEOs and corporate leaders now come from a generation that has been exposed to these more powerful analytical models and computing technology through university coursework and practical experience. These resources can accommodate the estimation of own-price elasticities, cross-price elasticities, and marginal cost functions for hundreds or even thousands of products within a corporation. Further, advances in computing power and optimization software allow these elasticities and costs to be combined with other corporate strategic and logistical restrictions to optimize profits within the broader context of corporate strategic goals.18
17 These abilities are analogous to cost advantages or disadvantages.

18 Corporate strategic goals can be thought of as a component of marginal cost, but here they are simply noted as additional constraints on the optimization process.
Even with this evolution in data warehousing, computing power, and modeling techniques, most manufacturing, retail, and service companies have not performed the detailed economic and econometric analysis needed to know how to maximize their profits, or even whether they are close to maximizing profits.
In fact, even in large companies, there may be relatively few formally trained, PhD-level economists who perform significant economic and econometric analyses to help companies make such fundamental economic decisions as what price to charge and what quantity to sell to maximize profit.19 Certainly PhD-level economists, many of whom are capable of performing such analyses, do work in, or in some cases even run, large segments or entire corporations. But decisions about fundamental components of profit maximization are still rarely, if ever, explicitly analyzed in many industries using the standard tools that economists use to analyze the behavior of those same businesses.
10.3 Path to Profit Maximization in a Corporate Setting
Maximizing profits in a corporate setting, particularly one where explicit profit
maximization has not occurred before, presents a unique set of analytical concerns.
While the expanded use of advanced economic concepts and econometric models
to observe and scientifically measure the behavior of consumers, competitors, and
suppliers often relies heavily on standard economic and econometric concepts and
the latest academic developments, the application of these techniques in the
business setting presents a different set of challenges and opportunities than is the
case in academic settings. Furthermore, applying optimization techniques presents
an additional set of technical economic and econometric issues that academic
economists rarely have to deal with when studying the same set of businesses.
These differences go far beyond the obvious, albeit important, differences that typically come to mind: the analysis for corporate purposes typically has to have practical implications and must lead to implementable results.

More profoundly, the results produced by corporate economists about fundamental business decisions, such as product pricing and quantity determination, are often actually used in the market. (That was the whole purpose of performing the analysis in the first place.) This means that the observed empirical behavior and resulting data in the market are altered by corporate decisions that are, in turn, based on the empirical econometric analysis of the market data. In this way, corporate economic research of fundamental decisions interacts with the data
19 For example, the trend toward reducing the number of economists and demand analysts within large telecommunications companies that Professor Taylor noted in the 1990s has resulted in many fewer such specialists than there were when the industry was regulated. Similarly, we have analyzed demand and profitability for companies with billions of dollars of annual sales; in many cases, minimal resources had been assigned to price setting and profit improvement before they asked us to analyze their business.
being analyzed in ways that are rarely observed in academic research of corporate behavior. Academic analysis of consumer demand and costs has rarely made it into the actual pricing and production decisions of corporations on an ongoing basis, but now it does.20 This close interaction between the analysis of market behavior and the influence that the economic research and resulting corporate decisions have on the data being analyzed creates some important econometric challenges that must be recognized and accounted for in order to understand and scientifically estimate the impact that corporate decisions about pricing and production will have on consumer demand, competitor behavior, and corporate profits.
At the same time, economic analysis performed within a corporate setting provides some enormous advantages, not only in data quality but also in the ability to access data almost continuously over time as it is produced by the market. Economic analysis within a corporate setting also provides access to certain types of data that are rarely, if ever, available in the market, including detailed company-specific cost data by product, region, customer, etc. Furthermore, and perhaps more importantly, in the corporate setting economists may have access to the specific implicit or explicit rules companies use to set prices (even to the point of having participated in their development). Use of these data (and, even more so, of the optimization function that corporate decision-makers use to set prices and quantities) changes the estimation strategy required to get the best estimate of customers', competitors', and suppliers' reactions in the market.
The remainder of the chapter shows the effect of these differing estimation strategies on the observed demand and supply curves and, ultimately, on the prices, quantities, and profits achieved by the firm. Further, the chapter shows how using certain standard approaches in an applied corporate setting, without recognizing how explicit efforts to optimize profits can affect the resulting price and quantity data, can lead to a path of pricing decisions that is far from optimal and could in fact be even worse than using alternative naïve pricing rules.
10.4 Empirical Evidence of Methods Based on Business
Requirements
In some cases, managers responsible for setting prices report that prices are set with little or no regard for marginal costs, or even costs in general; instead, the focus is on revenues or sales. This claim is not inconsistent with the possibility that the price variation created by a corporate pricing department is within a small enough range to approximate the optimal price during some period of time.
20 Notable exceptions are the airline industry, where some forms of fare optimization have been in use for years, and more recently the hotel and hospitality industry, where pricing systems have been used to price "excess" capacity.
Perhaps when some larger price changes occur, managers do tend to move prices in the expected direction. But if corporate managers are right, there may be a span of time over which the price changes trace out the demand curve. If this is the case, the variation in prices over this range could be sufficient to estimate the demand curve. Further, if it is appropriate to estimate the demand curve directly, the precision of the estimates will be greater than if estimated through a two-stage process. However, as one will see, once the company starts to explicitly maximize profits, the classic problem of endogeneity may kick in, requiring some other identification strategy.

To illustrate the analytical issues and challenges, several scenarios are developed and described in the following seven subsections.
10.4.1 Marginal Costs Are Not Volume-Sensitive

A simple, but not necessarily unrealistic, example illustrates the potential power of understanding costs. Suppose marginal costs do not vary significantly with volume over the range of variation but can differ from period to period, for example, as input prices change. The firm, which faces a linear demand curve—one that may shift in and out from period to period—sets prices to maximize profits in each period.

In particular, suppose one observes 100 periods of volume, price, and marginal cost outcomes generated as follows:

• Demand curve slope: -1.5.
• Marginal cost: mean = 25, standard deviation = 20.
• Intercept of demand curve: mean = 200, standard deviation = 50.
Figure 10.1 displays the prices and quantities observed from the firm's profit-maximizing pricing decisions. The square points reflect the actual demand curve. Figure 10.1 also illustrates the fundamental endogeneity issue: the diamond points—representing the market equilibrium prices—suggest almost no relationship between the volume demanded and price. If anything, the figure suggests a weak positive relation between price and volume.
It turns out that with a linear demand curve and volume-insensitive marginal costs, knowing costs in every period—along with the price and volume data typically used in demand analysis—allows exact recovery of the slope of the demand curve by means of basic algebra.21 Table 10.1 compares this algebraic result with ordinary least squares and instrumental variables estimation.

21 The slope is calculated from the following equation: $b = -\bar{V}/(\bar{p} - \bar{c})$, where $\bar{V}, \bar{p}, \bar{c}$ are the sample averages for volume, price, and marginal cost, respectively. The estimate of the intercept is $\hat{A} = \bar{V}(2\bar{p} - \bar{c})/(\bar{p} - \bar{c})$.
Fig. 10.1 Price and volume data: volume-insensitive marginal cost scenario
As anticipated, the algebraic solution exactly reproduces the slope, while the
intercept is close to the mean of the assumed distribution (202.8 vs. 200). The
instrumental variables (IV) results are also quite close to actual. On the other hand,
as depicted in Fig. 10.1, ordinary least squares does a poor job of uncovering the
demand curve.
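A minimal simulation sketch of this scenario, under the distributional assumptions listed above (our own illustration, not the authors' code), reproduces the pattern in Table 10.1: the algebraic formula of footnote 21 recovers the slope exactly, while naive OLS does not:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
slope = -1.5                  # true demand slope
A = rng.normal(200, 50, n)    # demand intercept, period by period
c = rng.normal(25, 20, n)     # marginal cost, period by period

# Profit-maximizing price for linear demand V = A + slope * p with constant MC
p = (A / -slope + c) / 2
V = A + slope * p

# Algebraic recovery using sample averages (footnote 21)
b_hat = -V.mean() / (p.mean() - c.mean())
A_hat = V.mean() * (2 * p.mean() - c.mean()) / (p.mean() - c.mean())

# Naive OLS of volume on price, for comparison
b_ols, a_ols = np.polyfit(p, V, 1)
print(f"algebraic slope {b_hat:.2f}, intercept {A_hat:.1f}; OLS slope {b_ols:.2f}")
```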
10.4.2 Pre-optimization Demand Model Estimation

The scenarios in this and the subsequent two subsections are based on the following simplified example:22

Demand equation:
$$Q = 550 - 0.5\,\text{price} - \text{oil} + e \tag{10.4}$$

Marginal cost equation:
$$MC = \frac{Q + \text{oil} + \text{steel}}{0.5} \tag{10.5}$$
In Eq. (10.4), the quantity of the firm's output demanded by consumers is a linear function of the product's price and the price of oil (which can be viewed as a proxy for economic conditions).

22 For each of the scenarios described below, observations are generated for price, quantity, oil, and steel using the following distributional assumptions: oil (mean = 100, standard deviation = 5); steel (mean = 50, standard deviation = 15); e (mean = 0, standard deviation = 75).
Table 10.1 Demand model estimation: volume-insensitive marginal costs

            Actual   Algebraic      Instrumental variables (IV)   Ordinary least squares (OLS)
Intercept   200      202.8          203.7 (43.96)                 19.6 (10.56)
Slope       -1.5     -1.5 (exact)   -1.51 (0.55)                  0.80 (0.13)

Standard errors are in parentheses. Source: authors' simulation
Equation (10.5), the parameters of which the company in this example knows with certainty, indicates that the marginal cost of the company's product increases by two dollars for each dollar increase in the price of the critical production inputs (oil and steel) and by two dollars for each additional unit of output.

If the company is attempting to maximize profits, it will select prices and quantities that equate marginal revenue (derived from Eq. 10.4) with marginal cost (Eq. 10.5). Because the resulting prices may be a function of demand—possibly including its error term, even if the business knows the parameters of its marginal cost function with certainty, so that the only source of error (e) comes from the demand function—the use of observed prices to estimate the demand equation could result in biased and inconsistent estimates.
Consider the possibility that the company in question has not been optimizing
its prices, but will do so in the future, based on what it can learn about the demand
for its product. If prices were previously set for some period of time without regard
to the marginal costs, the demand curve can be estimated directly from the historical price, quantity, and exogenous variables, for example, with ordinary least
squares.
Table 10.2 lists the coefficients of the demand equation for this scenario. The
results represent two years of historical weekly observations (100 weekly data
points). The estimated coefficients are quite close to the parameters of the true
demand equation.
Table 10.2 Demand model estimation: pre-optimization (initial) results

            Actual   Initial estimation
Intercept   550      519.27 (137.12)
Price       -0.5     -0.43 (0.03)
Oil         -1       -0.94 (1.36)

Standard errors are in parentheses. Source: authors' simulation
10.4.3 Potential Endogeneity Problems Stemming from Price-Optimizing Efforts: Ordinary Least Squares

Because the company estimated the demand curve for the purpose of selecting profit-maximizing prices, the company's new price-setting process may cause the standard endogeneity problem to creep into subsequent estimation of the demand curve. For example, if prices are reset weekly based on a demand curve that is re-estimated weekly, it may take less than a year for the estimated demand curve to become severely biased. This can be seen in Figs. 10.2, 10.3, 10.4 and Table 10.3.
These results represent the following process: (1) start with the original 100
historical data points; (2) estimate an initial demand model; (3) based on the
known parameters of the marginal cost function, observed values of the exogenous
variables, and estimated parameters of the demand model determine the quantity
that maximizes expected profits; (4) based on this production decision, the company then adjusts it price to clear the market—a price response that will be based
in part of the unobserved component of the demand function; (5) record the
quantity produced and the resulting price for that period; and (6) re-estimate the
demand model, using the 100 most recent observations. Steps 3 through 6 are
repeated for each production decision period (e.g., weekly). As the following three
Figures show, the estimated demand curve rotates with successive periods of
optimization, in this case becoming more inelastic.
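The six steps can be sketched in a few lines (a minimal simulation of ours in Python, not the authors' code; the uniform range for the pre-optimization prices is an assumption):

```python
# A sketch (ours) of steps (1)-(6); demand and marginal cost follow
# Eqs. (10.4) and (10.5).
import numpy as np

rng = np.random.default_rng(0)
window = 100

# Step (1): 100 historical points with essentially random (non-optimized) prices.
oil = list(rng.normal(100, 5, window))
steel = list(rng.normal(50, 15, window))
price = list(rng.uniform(200, 800, window))
qty = [550 - 0.5 * p - o + rng.normal(0, 75) for p, o in zip(price, oil)]

def ols(p, o, q):
    X = np.column_stack([np.ones(len(p)), p, o])
    return np.linalg.lstsq(X, np.asarray(q), rcond=None)[0]  # [intercept, price, oil]

for week in range(104):  # two years of weekly decisions
    b0, b1, b2 = ols(price[-window:], oil[-window:], qty[-window:])  # steps (2)/(6)
    o, s, e = rng.normal(100, 5), rng.normal(50, 15), rng.normal(0, 75)
    # Step (3): quantity equating expected MR (from the estimated demand curve)
    # with the known marginal cost MC = 2*(Q + oil + steel).
    q_star = (b0 + b2 * o + 2 * b1 * (o + s)) / (2 - 2 * b1)
    # Step (4): price adjusts along the true demand curve, error term included.
    p_star = 2 * (550 - o + e - q_star)
    # Step (5): record the new observation; the estimation window then drops the oldest point.
    oil.append(o); steel.append(s); price.append(p_star); qty.append(q_star)

print(ols(price[-window:], oil[-window:], qty[-window:]))  # price coefficient drifts toward 0
```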
The hollow circles in the three graphs are the price and quantity points that
were generated before the firm started optimizing. The crosshair symbols are the
price and quantity points that were generated after the firm started optimizing.
Even though OLS was the most appropriate method to use prior to the optimization process, over time the estimated demand curve diverges from the true demand curve. Table 10.3 shows that the estimated price coefficient changes from its initial value of -0.43 to an almost completely inelastic -0.03 after two years.

Fig. 10.2 Initial estimate (OLS)
Fig. 10.3 After one year (OLS)
Fig. 10.4 After two years (OLS)
(Each figure plots the actual demand curve, the estimated demand curve, marginal cost, estimated marginal revenue, and the initial and post-optimization data points.)
Table 10.3 Demand model estimation: initial and two years of post-optimization ordinary least squares (OLS)

            Actual   Initial estimation   After 1 year (OLS)   After 2 years (OLS)
Intercept   550      519.27 (137.12)      329.91 (127.24)      125.16 (24.16)
Price       -0.5     -0.43 (0.03)         -0.31 (0.03)         -0.03 (0.01)
Oil         -1       -0.94 (1.36)         -0.02 (1.25)         -0.20 (0.23)

Standard errors are in parentheses
Source: authors' simulation
10.4.4 Postoptimization Estimation: Instrumental Variables
Typically, economists will look for an identifying variable to include in the two-equation model to address endogeneity. Here, one can use the variable steel, which appears in the marginal cost equation but not in the demand equation. As in the prior example, re-estimating the demand curve weekly for two years is simulated, using the evolving 100 most recent observations. The results are shown in Figs. 10.5, 10.6, and 10.7 and Table 10.4.
The last two columns of Table 10.4 show the OLS estimates and the instrumental variables (IV) estimates after 100 periods. The OLS estimate has become more inelastic than the actual demand curve, at -0.03, while the IV estimate provides a relatively good estimate of the slope of the demand curve, -0.47, which is close to the actual value of -0.50 listed in the first column. It is interesting to note that in this specific example, the IV estimates have a much larger standard error than the OLS estimates after one year of re-estimation. At this point in the estimation process, the demand curve is being estimated on 50 data points generated pre-optimization (where there was no endogeneity) and 50 data points generated post-optimization (where endogeneity exists in the data). After two years of re-estimation, the standard error of the IV estimates is greatly reduced.
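A bare-bones two-stage least squares sketch (ours; the chapter prints no code) using steel as the identifying variable could look like this:

```python
# Manual 2SLS (a sketch): steel enters marginal cost but not demand, so
# fitted prices from the first stage are purged of the demand error term.
import numpy as np

def two_sls(price, oil, steel, qty):
    n = len(qty)
    Z = np.column_stack([np.ones(n), steel, oil])    # instrument + exogenous regressor
    gamma = np.linalg.lstsq(Z, price, rcond=None)[0]
    price_hat = Z @ gamma                            # first stage: fitted prices
    X = np.column_stack([np.ones(n), price_hat, oil])
    return np.linalg.lstsq(X, qty, rcond=None)[0]    # [intercept, price, oil]
```

Swapping this estimator for the plain OLS step in the earlier weekly loop should reproduce the qualitative pattern of Table 10.4: a roughly unbiased slope, at the cost of a larger standard error while pre- and post-optimization observations are mixed.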
Fig. 10.5 Initial estimates
Fig. 10.6 After one year (IV)
Fig. 10.7 After two years (IV)
(Each figure plots the actual demand curve, the estimated demand curve, marginal cost, estimated marginal revenue, and the initial and post-optimization data points.)
Comparing Figs. 10.2, 10.3, 10.4 with Figs. 10.5, 10.6, 10.7 demonstrates that
using steel as the identifying variable results in a much more accurate estimate of
the demand curve. The estimated demand curves in Figs. 10.5, 10.6, 10.7 diverge
much less from the true demand curve than the estimated demand curves in
Figs. 10.2, 10.3, 10.4. Additionally, as shown in Table 10.4, using the identifying
variable produces less biased estimates of the price coefficient than the first set of
estimates in which one did not use the identifying variable. In both cases, OLS was
the best method to estimate the initial regression, for data generated prior to price
optimization.
Table 10.4 Demand model estimation: initial and two years post-optimization, ordinary least squares (OLS) v. instrumental variables (IV)

            Actual   Initial estimation   After 1 year (OLS)   After 1 year (IV)   After 2 years (OLS)   After 2 years (IV)
Intercept   550      519.27 (137.12)      329.91 (127.24)      329.22 (219.01)     125.16 (24.16)        545.17 (236.75)
Price       -0.5     -0.43 (0.03)         -0.31 (0.03)         -0.31 (0.23)        -0.03 (0.01)          -0.47 (0.12)
Oil         -1       -0.94 (1.36)         -0.02 (1.25)         0.01 (1.28)         -0.2 (0.23)           -1.17 (1.64)

Standard errors are in parentheses
Source: authors' simulation
10.4.5 Postoptimization Estimation: Inside Supply Curve
Information
The economist in the corporate setting can take this one step further and actually
capture the marginal cost curve from the corporate processes. In this case, one can
directly observe the marginal cost equation, for example, Eq. (10.5) above, which
may provide significant advantages.
In many cases, locating an effective instrumental variable to identify the demand curve is a problem. Without knowing how the company sets prices and what inputs or factors the company considers when setting price, the researcher does not actually know what variables will make suitable instruments. With complete knowledge of which variables explain costs, and more importantly which variables influence the company's supply curve, the corporate economist knows whether or not there is a viable instrumental variable approach.
In situations where there is no informative instrumental variable, explicit knowledge of the variables entering the cost curve and the price-setting process can still identify the demand curve, based on the knowledge that the error term in the corporate supply curve actually used is uncorrelated with the error in the demand curve. The corporate economist can know that the error term in the supply curve is not correlated with any other factors because she/he knows all of the factors in that supply curve, leaving the error term to be pure measurement error rather than the result of misspecification or omitted variables.
The significant advantage here is that where an economist outside the company may not be able to identify an instrument or an identification strategy, the economist inside the firm often can. The lack of inside knowledge can lead the economist outside the company to instrument with a variable that is not actually used by the company in the supply curve, or with a weak instrument. Here, a scenario with a weak instrument is compared with the situation where the same supply and demand system is identified based on the knowledge that the error terms in the supply and demand equations are not correlated. (Technical details, which are based on Kmenta (1986), pp. 668–678, are available from the principal author.)
In this example, the supply curve and the demand curve underlying the pre-optimization data are as follows:
S: $Q_S = 100 + 3\,\mathrm{price} - 0.01\,\mathrm{Steel} + e_S$
D: $Q_D = 550 - 2\,\mathrm{price} + e_D$
with $e_S \sim N(0, 1)$ and $e_D \sim N(0, 10)$.
Note that the steel instrument is weak. In addition, there is no correlation
between the error terms in the demand and supply equations.
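One quick way to see the weakness (a diagnostic sketch of ours, not part of the chapter) is to generate equilibrium prices from these two equations and inspect the first-stage regression of price on steel:

```python
# First-stage diagnostic: with a Steel coefficient of -.01 and sd(Steel) = 15,
# steel barely moves the equilibrium price.
import numpy as np

rng = np.random.default_rng(1)
n = 100
steel = rng.normal(50, 15, n)
e_s = rng.normal(0, 1, n)
e_d = rng.normal(0, 10, n)
# Equating S and D: 100 + 3p - .01*Steel + e_s = 550 - 2p + e_d.
price = (450 + 0.01 * steel + e_d - e_s) / 5

Z = np.column_stack([np.ones(n), steel])
g = np.linalg.lstsq(Z, price, rcond=None)[0]
rss = np.sum((price - Z @ g) ** 2)
tss = np.sum((price - price.mean()) ** 2)
print((tss - rss) / (rss / (n - 2)))  # first-stage F; far below the usual ~10 rule of thumb
```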
Data for this scenario were generated as follows. First, similar to the previous scenarios, assume that 100 price/quantity data points were the result of an essentially random price-setting process. Ordinary least squares was then used to estimate an initial demand curve. Then, for the next (101st) period (e.g., the following week), the firm produces a quantity of output based on (1) the parameters of the estimated demand curve, (2) draws from the distributions for the exogenous variable (steel) and the supply and demand curves' error terms, and (3) the intersection of the supply and demand curves based on those values. Price then adjusts, based on the true demand curve. The resulting price and quantity are recorded and the demand model is re-estimated (using either an instrumental variable without an error term restriction or a restricted instrumental variable estimation) with the most recent 100 observations; that is, the new observation replaces the first observation of the original data. These steps are repeated for periods 102 through 204. Table 10.5 shows these results for demand models estimated from this process.
The first column reports the actual slope and intercept for the demand equation. The second column shows the ordinary least squares estimates for the initial 100 observations. The next six columns contrast (a) the OLS estimate (OLS2); (b) the simple IV estimate (IV2); and (c) the constrained estimate (cov = 0), estimated with (1) observations 53–152 (the most recent 100 observations after the first year of price optimization) and (2) observations 105–204 (the most recent 100 observations after the second year of price optimization). In particular, the last column of Table 10.5 shows the results for the identification by the error structure after two years of optimization. The second-to-last column shows the estimates based on a weak instrument after two years of optimization. The third-to-last column shows the OLS estimates. The estimated slope of the IV estimate has a large standard error relative to both the OLS estimate and the (cov = 0) estimate.
Table 10.5 Demand model estimation: ordinary least squares (OLS2), weak instrument (IV2), and uncorrelated errors (cov = 0)

            Actual   Initial estimate   After 1 year (OLS2)   After 1 year (IV2)   After 1 year (cov=0)   After 2 years (OLS2)   After 2 years (IV2)   After 2 years (cov=0)
Intercept   550      547.920 (8.406)    539.273 (8.406)       458.400 (158.427)    553.955 (11.576)       276.580 (24.960)       558.373 (74.220)      536.703 (18.174)
Price       -2       -1.981 (0.083)     -1.863 (0.090)        -1.008 (1.673)       -2.011 (0.040)         1.048 (0.277)          -2.077 (0.825)        -1.940 (0.182)

Standard errors are in parentheses
Source: authors' simulation
Fig. 10.8 Price coefficients—IV2 (point estimate and 95% CI by iteration, with the actual price coefficient and the range of Fig. 10.9 marked)
Figure 10.8 illustrates the practical effect of the large variance in the IV estimate. With the weak instrument, the variance of the estimated price coefficient is
large and the estimated price coefficient tends to drift significantly over time.
Such instability in the estimated price coefficient could cause a significant
practical problem in establishing profit-improving prices, as suggested by the two
dashed horizontal lines representing the much tighter bounds of the estimated price
coefficient when the identification is based on the more detailed understanding of
the supply curve used in the (cov = 0) model.
Figure 10.9 displays the progression of the estimated price coefficient over time that results from the constrained (cov = 0) estimation.23 These results make use of the fact that the error in the supply curve is known to be uncorrelated with the error in the demand curve, which follows from understanding the company's production process. Here, the precision of the estimates is much greater, with point estimates ranging only from -2.1 to -1.95 and much tighter 95% confidence intervals.
Figure 10.9 shows some variation in the estimated price over time, but the
range of the variation is much smaller. In fact, while both the IV and the (cov = 0)
estimates reveal that the random draw of the exogenous and error term data used in
Fig. 10.9 Price coefficient estimates—(cov = 0) (point estimate and 95% CI by iteration, with the actual price coefficient marked)
these simulations had a rather extreme combination around month 88, the estimated price coefficient of the (cov = 0) model was considerably more stable than the estimated price coefficient based on the IV method. This outcome illustrates the potential advantage that knowing the structure of the supply curve can afford.

23. These results are based on the same historical pattern of exogenous variables and error terms used to generate the results shown in Fig. 10.8.
Simply knowing how the company sets its prices can allow the economist to determine whether the error in the supply curve is correlated with the error in the demand curve. With this knowledge, which would be hard for economists outside the company to obtain, one can obtain better estimates of the demand curve than would otherwise be available—in great part because this knowledge opens up the use of a broader set of estimation techniques.

Fig. 10.10 Price coefficient estimates—OLS2 (point estimate and 95% CI by iteration, with the actual price coefficient and the range of Fig. 10.9 marked)
Finally, Fig. 10.10 shows the pattern of OLS estimates of the price coefficient over time. Not surprisingly, the OLS estimates show a pattern of bias and move outside the bounds of the estimates provided by the (cov = 0) estimates.
10.4.6 Both the Wrong and the Right Marginal Costs
Need to be Used
Even when inside a company, some may wonder whether the precise cost curves can be known. When estimating the demand curve, it is important to recognize that the supply curve the company used to set prices is the one that should be used. That supply curve may not be the actual supply curve derived from the marginal cost curve; in fact, the supply curve used by the company often departs significantly from the one derived from marginal costs. There is a wide range of reasons why this can be the case. For example, corporate managers may have strategic goals to increase volume in certain market segments. Or it could simply be that corporate managers have not measured costs with sufficient accuracy. Regardless of the reason, in extracting an estimate of the demand curve from the market price and quantity pairs, the supply curve that should be used is the one the company actually used to set prices. This type of information is unlikely to be available to any researcher outside the company, and it can greatly improve the precision of the estimates and eliminate bias.
By not using the accurate marginal cost curve in setting prices, corporate managers are forgoing maximum profits. This problem must be addressed in the optimization process, not during recovery of the demand curve. This leads to the interesting result that as a company moves toward a more explicit process of optimization, the supply curve used for estimating the demand curve will be the one that managers used historically. However, once the demand curve has been estimated, the marginal cost curve used to optimize prices should reflect actual marginal costs as closely as possible. This means that as a company makes the transition to explicit price optimization, corporate managers will have to use two different sets of supply curves. The first "functional" supply curve will be whatever supply curve was used during the historical period over which the demand curve is estimated; the second supply curve is based on the actual marginal cost curve, measured as precisely as possible, for use in optimizing future prices.
10.4.7 Approximate Optimization Approaches
and the Endogeneity Problem
Under certain circumstances, information from initial demand equation estimation could be used to change prices in a way that does not introduce the usual endogeneity problem; that is, the price and quantity data could be used with standard econometric methods such as ordinary least squares. This scenario proceeds as follows.24 First, based on an earlier period in which prices were set in an essentially random fashion, a company produced estimates of the structural demand parameters close to the actual values (550 for the intercept, -0.5 for price, and -1.0 for oil). The estimated parameters are shown in Table 10.2 above. Second, since the company does not know the precise value of the error term for a particular period, suppose the company set prices going forward based on the non-random components of an estimated price equation, which represents price optimization given (1) the results of the demand study and (2) lack of knowledge of the error term.25 Since the price-setting process does not include the demand equation error term (but does incorporate exogenous supply-side shifts and expected demand reaction), the price/quantity data can be used with standard ordinary least squares.
To illustrate this process, one hundred such data points are generated, representing approximately two years of weekly price changes that pass through the price of steel, based on the previous demand model and optimization to expected demand levels. Table 10.6 reports the results of this illustrative estimation.26
Because the prices set by the company are (by construction) not endogenous with demand, the resulting coefficients are reasonably close to their true values. Of course, if the company were reasonably satisfied with the model previously developed from pre-optimization observations, the exercise depicted here is at best a validation of the previous demand results.
24. This scenario differs from the earlier one in which the company first determined a quantity that was expected to maximize profits and then was able to adjust price in "real time" to sell just that volume. In this alternative example, assume that while prices can be adjusted for exogenous factors, the business is not able to respond to the random fluctuations in demand introduced by factors not explicitly included in the estimated model.
25. In particular, the estimated coefficients from Table 10.2 and the known marginal cost curve are used to determine prices that equate expected marginal revenue with marginal cost, given observations for the exogenous variables.
26. The data are the second 100 observations from a random draw of 1,000 sets of values for the prices of oil, steel, and the error term of the demand equation. The distributions are assumed to be independently normal with means and standard deviations of (100, 5), (50, 15), and (0, 10) for oil, steel, and the error term, respectively. For each of these sets, prices were generated by applying the demand equation in Table 10.2, without the error term. Quantities were generated using the structural parameters of the demand equation with the values for price, oil, and the error term. Results for the other nine sets (e.g., observations 1 through 100, etc.) are similar.
Table 10.6 Demand model estimation after 100 periods of expected price optimization

            Actual   Estimated coefficients
Intercept   550      592.14 (80.97)
Price       -0.5     -0.540 (0.094)
Oil         -1       -1.118 (0.245)

Standard errors are in parentheses
Source: authors' simulation
Table 10.7 Demand model estimation after 100 periods of expected price optimization: price sensitivity trend

               Actual   Estimated coefficients
Intercept      550      594.22 (81.52)
Initial price  -0.5     -0.538 (0.095)
Oil            -1       -1.138 (0.252)
Price*Period   0.001    0.000983 (0.0000465)

Standard errors are in parentheses
Source: authors' simulation
However, because (1) there is variation in the data, due to the effect of exogenous shifts in supply, and (2) these prices are not correlated with the errors in the demand equation, the data can be used to explore possible shifts in demand parameters.
For example, suppose consumers are gradually becoming less price sensitive.
To represent this possibility, the quantities used to produce Table 10.6 are adjusted
for consistency with the price parameter decreasing (in absolute value) by 0.001
per week, so that at the end of the 100 periods represented in the data, it is reduced
in magnitude from its original value of -0.5 to -0.4. Table 10.7 presents the
results.27
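The estimation behind Table 10.7 can be sketched in a few lines (ours, not the authors' code, using the distributions from footnote 26; the price rule is proxied by the analytic optimum under the true parameters):

```python
# Prices come from expected-profit optimization without the demand error,
# so OLS with a price*period interaction can recover the drifting coefficient.
import numpy as np

rng = np.random.default_rng(2)
T = 100
t = np.arange(1, T + 1)
oil = rng.normal(100, 5, T)
steel = rng.normal(50, 15, T)
e = rng.normal(0, 10, T)

# Expected-profit-maximizing price given MC = 2*(Q + oil + steel),
# from equating expected marginal revenue with marginal cost.
price = (2200 - 2 * oil + 2 * steel) / 3

beta_t = -0.5 + 0.001 * t               # price sensitivity drifts toward -0.4
qty = 550 + beta_t * price - oil + e    # demand with the drifting coefficient

X = np.column_stack([np.ones(T), price, oil, price * t])
print(np.linalg.lstsq(X, qty, rcond=None)[0])  # ~ [550, -0.5, -1, 0.001]
```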
Again, because (1) the prices for this example are uncorrelated with the demand equation error term and (2) the supply-side shifts produce variability in prices and quantities, the trend in price sensitivity is properly detected. In particular, the coefficient of the price/time-period interaction term is close to the true price sensitivity trend of a 0.001 per period reduction in magnitude. At the same time, the coefficients of the other two variables, the initial price sensitivity and oil, are close to the values estimated in Table 10.6.

27. The results are not sensitive to whether the trend in the price coefficient is assumed to be the same constant reduction each period, or whether there is variability in the period-to-period reduction in price sensitivity.
10.5 Conclusion
Motivated by Professor Taylor’s advice and the realization that working with
companies to apply these principles may not only improve short-term performance
(if successful), but also affect how the company subsequently improves its
understanding of how customers respond to its product and pricing decisions, some
of the econometric issues that may arise in the process are explored. As more
experience is gained along the path toward explicit price optimization, additional
issues are likely to emerge. For example, to the extent that ever-present proprietary
concerns permit, further identification of the ways in which issues such as data
availability, choice of estimation approaches, for example, IV, and identification
issues differ between academic and business settings would be valuable information for businesses and their demand analysts. Along these lines, more analysis
of how detailed knowledge of cost processes—for example, to the extent of
obtaining cost relations with minimal error—can be used in improving the demand
estimation process with respect to identifying structural equations, selecting
powerful instruments, and the like has the prospect of adding to the analytical tool
kit of practical demand analysts.
References
Froeb LM, Tardiff TJ, Werden GJ (1998) The Demsetz postulate and the welfare effects of
mergers in differentiated products industries. In: McChesney FS (ed) Economic inputs, legal
outputs: the role of economists in modern antitrust. Wiley, Chichester, pp 141–148
Greene WH (1993) Econometric analysis. Macmillan, New York
Kaplow L (2011) An economic approach to price fixing. Antitrust Law J 77(2):343–449
Kmenta J (1986) Elements of econometrics. 2nd edn Macmillan, New York
Tardiff TJ (1999) Effects of large price reductions on toll and carrier access demand in California.
In: Loomis DG, Taylor LD (eds) The future of the telecommunications industry: forecasting
and demand analysis. Kluwer, Boston, pp 97–114
Tardiff TJ (2010) Efficiency metrics for competition policy in network industries. J Competition
Law Econ 6(4):957–972
Tardiff TJ, Taylor LD (1995) Declaration attached as exhibit B of joint petition of Pacific Bell
and GTE California for modification of D.94-09-065, August 28, 1995
Taylor LD (1980) Telecommunications demand: a survey and critique. Ballinger, Cambridge
Taylor LD (1994) Telecommunications demand in theory and practice. Kluwer, Boston
Zona JD (2011) Structural approaches to estimating overcharges in price-fixing cases. Antitrust
Law J 77(2):473–494
Chapter 11
Avalanche Forecasting: Using Bayesian
Additive Regression Trees (BART)
Gail Blattenberger and Richard Fowles
11.1 Introduction
During the ski season, professional avalanche forecasters working for the Utah
Department of Transportation (UDOT) monitor one of the most dangerous highways in the world. These forecasters continually evaluate the risk of avalanche
activity and make road closure decisions. Keeping the road open when an avalanche occurs or closing the road when one does not are two errors resulting in
potentially large economic losses. Road closure decisions are partly based on the
forecasters’ assessments of the probability that an avalanche will cross the road.
This paper models that probability using Bayesian additive regression trees
(BART) as introduced in Chipman et al. (2010a, b) and demonstrates that closure
decisions based on BART forecasts obtain the lowest realized cost of misclassification (RCM) compared with standard forecasting techniques. The BART forecasters are trained on daily data running from winter 1995 to spring 2008 and
evaluated on daily test data running from winter 2008 to spring 2010. The results
generalize to decision problems that relate to complex probability models when
relative misclassification costs can be accounted for.
The following sections explain the problem, the data and provide an overview
of the BART methodology. Then, results highlighting model selection and
performance in the context of losses arising from misclassification are presented.
Previous research on this topic was funded, in part, by a grant from the National Science
Foundation (SES-9212017).
G. Blattenberger (&)
University of Utah, 11, 981 Windsor St, Salt Lake, UT 84105, USA
e-mail: gail.blattenberger@economics.utah.edu
R. Fowles
University of Utah, 11, 260 S. Central Campus Drive; Orson Spencer Hall, RM 343,
Salt Lake, UT 84112-9150, USA
e-mail: fowles@economics.utah.edu
The conclusion discusses why BART methods are a natural way to model the
probability of an avalanche crossing the road based on the available data and the
complexity of the problem.
11.2 The Little Cottonwood Canyon Hazard

The Little Cottonwood Canyon road is a dead-end, two-lane road that is the only link from Salt Lake City to two major Utah ski resorts, Alta and Snowbird. It is heavily travelled and highly exposed to avalanche danger; 57 % of the road falls within known avalanche paths. The road ranks among the most dangerous highways in the world relative to avalanche hazard. It has a calculated avalanche hazard index of 766, which compares with an index value of 126 for US Highway 550 crossing the Rockies in Colorado and an index value of 174 for Rogers Pass on the Trans-Canada Highway.1 A level of over 100 on this index indicates that full avalanche control is necessary.
There are over 20 major avalanche slide paths that cross the road. During the ski season, the road is heavily utilized. Figure 11.1b shows daily traffic volume in the canyon for February 2005. February is typically a month with a large number of skiers in Utah. On peak ski days, over 12,000 automobiles travel to the two resorts on the Little Cottonwood Canyon road or return to the city. Figure 11.1c illustrates the hourly east–west traffic flow for February 26, 2005. The eastbound traffic flow is from Salt Lake City to the Alta and Snowbird ski resorts and is high in the morning hours. In the afternoon, skiers return to the city and westbound traffic flow on the road is high.
Recognition of avalanche danger along this road and attempts to predict avalanche activity began early. In 1938, the US Forest Service issued a special use
permit to the Alta ski resort. One year later, the Forest Service initiated full-time
avalanche forecasting and control.2 By 1944, avalanche forecasters maintained
daily records on weather and the snowpack. During the 1950s, forecasters began to
utilize advanced snowpack instruments and meteorological information for avalanche prediction.3 Except where noted, the measurements apply to the guard
station.
Despite the fact that detailed physical measurements of climate and snowpack
conditions are available, the complexity of the avalanche phenomena makes
prediction difficult. Professional forecasters take into consideration multiple
interactions of climate and snowpack conditions. Variables that forecasters considered in previous studies and interactions among the variables differ among
forecasters, change through the season, alter across seasons, exhibit redundancy,
1. See Bowles and Sandahl (1988).
2. See Abromeit (2004).
3. See Perla (1991).
Fig. 11.1 a Natural and controlled avalanches by path, 1995–2005, Little Cottonwood Canyon. b Daily traffic volumes, Little Cottonwood Canyon road, February 2005. c Hourly traffic volume by direction, Saturday, February 26, 2005
and vary according to particular avalanche paths. For these reasons, a Bayesian
sum-of-trees model as presented by Chipman et al. (2010a, b) is employed.
Bayesian sum-of-trees models provide flexible ways to deal with high-dimensional
and high-complexity problems. These problems are characteristics of avalanche
forecasting and the ensemble of Bayesian trees becomes the ‘‘forecaster.’’ Sets of
Bayesian forecasters contribute information that leads to a synthesized road closure decision. A closure decision is observable (the probability of an avalanche is
not) and we gauge the performance of our forecasters on their subsequent RCM.
Compared with other methods, the ensemble of Bayesian forecasters does a better
job.
11.3 Data
An earlier study was performed on the road closure decision in Little Cottonwood Canyon (see Blattenberger and Fowles 1994, 1995).
however, went from the 1975–1976 ski season through 1992–1993. The present
study uses training data running from 1995 to spring 2008 and test data from
winter 2008 to spring 2010. Various sources were used for the data in the earlier
study including US Department of Agriculture data tapes. The current study makes
use entirely of data from the UDOT guard station. Partly as a result of recommendations made in the earlier study, additional variables were recorded and are
now available from the guard station. These new variables are used here.
As in the earlier study, two key variables describe closure of the road, CLOSE,
and the event of an avalanche crossing the road, AVAL. Both are indicator
variables and are operationally measurable constructs, a key requirement to our
approach. Unfortunately, these two variables are less precise than desired. For
instance, the observation unit of the study is generally one day unless multiple
events occur in a day, in which case CLOSE and AVAL appear in the data as
multiple observations. The occurrence of an avalanche or, for that matter, a road
closure is a time-specific event. It may happen, for example, that the road is closed
at night for control work when no avalanches have occurred. The road is then
opened in the morning, and there is an avalanche closing the road. Then, the road
is reopened, and there is another avalanche. This sequence then represents three
observations in the data with corresponding data values CLOSE = (1, 0, 0) and
AVAL = (0, 1, 1). An uneventful day is one observation. If the road is closed at
11:30 at night and opened at 7:00 the following morning, it is coded as closed only
within the second of the 2 days. The variable AVAL is the dependent variable to
be forecasted in this analysis. The variable CLOSE is a control variable used to
evaluate model performance.
The data from the UDOT guard station are quite extensive. All of the
explanatory variables are computed from the UDOT data source to reflect the
factors concerning the avalanche phenomenon. The variables are local, primarily
taken at the Alta guard station. Measures can vary considerably even within a
small location. They can vary substantially among avalanche paths and even
within avalanche paths.
A listing of the variables used in this study and their definitions is given in
Table 11.1. All the variables, excepting NART, HAZARD, SZAVLAG, WSPD,
and NAVALLAG, were measured at the guard station. WSPD, NART, HAZARD,
and SZAVLAG are new to this study. The variable HAZARD was created in
response to the request in the previous paper (Blattenberger and Fowles 1995).
HAZARD is a hazard rating recorded by the forecasters. NART is the number of
artificial artillery shots used. NAVALLAG is the number of avalanches affecting
the road on the previous day. SZAVLAG weights these avalanches by their size
rating. High values of number of artillery shells fired, NART, would indicate that
real-world forecasters believe that there is instability in the snowpack requiring
them to take active control measures. WSPD, wind speed, is taken at a peak
location. It was not consistently available for the earlier study. The redundancy
among the variables is obvious. For example, WATER = DENSITY * INTSTK, where DENSITY is the water content of new snow per unit depth and INTSTK, the interval stake, is the depth of the new snow. There are no snow stratigraphy
measures. Only monthly snow pit data were available. Snow pits are undoubtedly
useful to the forecaster to learn about the snowpack, but snow pits at the Alta study
plot do not reflect conditions in the starting zones of avalanche paths high up on
the mountain, and monthly information was not sufficiently available. As noted
above, some attempt was made to construct proxies for stratigraphy from the data
available. The variable called RELDEN is the ratio of the density of the snowfall
on the most recent snow day to the density of the snowfall on the second-most recent snow day.
Table 11.1 Variables used in the analysis

VARIABLE NAME      VARIABLE DEFINITION
YEAR, MONTH, DAY   Forecast date (year, month, day)
AVAL               Avalanche crosses road: 0 = no, 1 = yes
CLOSE              Road closed: 0 = open, 1 = closed
TOTSTK             Total stake - total snow depth in inches
TOTSTK60           If TOTSTK greater than 60 cm, TOTSTK60 = TOTSTK - 60, in centimeters
INTSTK             Interval stake - depth of snowfall in last 24 hours
SUMINT             Weighted sum of snowfall in last 4 days, weights = (1.0, 0.75, 0.50, 0.25)
DENSITY            Density of new snow, ratio of water content of new snow to new snow depth
RELDEN             Relative density of new snow, ratio of density of new snow to density of previous storm
SWARM              Sum of maximum temperature on last three ski days, an indicator of a warm spell
SETTLE             Change in TOTSTK60 relative to depth of snowfall in the last 24 hours
WATER              Water content of new snow, measured in mm
CHTEMP             Difference in minimum temperature from previous day
TMIN               Minimum temperature in last 24 hours
TMAX               Maximum temperature in last 24 hours
WSPD               Wind speed (MPH) at peak location
STMSTK             Storm stake: depth of new snow in previous storm
NAVALLAG           Number of avalanches crossing the road on the previous day
SZAVLAG            Size of avalanches: the sum of the size ratings for all avalanches in NAVALLAG
HAZARD             Hazard rating of avalanche forecasters
NART               Number of artificial explosives used
This is an attempt to reconstruct the layers in a snowpack. The days compared may represent differing lags depending on the weather. A value greater than 1 suggests layers of increasing density, although a weak layer could remain present for a period of time.
The data employed by forecasters are fortunately redundant,4 fortunate because this can compensate for imprecision. The redundancy is well illustrated by the following story. Four professional forecasters at Red Mountain Pass in Colorado all had similar performances in the accuracy of their forecasts. When questioned subsequently, the forecasters listed a combined total of 31 variables that they found important in their projections; individually, each of the forecasters contributed fewer than 10 variables to the 31 total. Each focused on his own collection of variables. Of the 31 variables, however, only one was common to all four of the forecasters (Perla 1970).
Eighteen explanatory variables extracted from the guard station data were
included. The large number of variables is consistent with the Red Mountain Pass
4. The word redundant is used in this paper more generally than correlation: it indicates that several variables are designed to measure the same thing or may be functions of each other.
story described. The four forecasters in the story all had similar forecasting performance, each using a few, but differing, variables.
All of the explanatory variables except NART, NAVALLAG, HAZARD, and SZAVLAG can be treated as continuous variables. NART, NAVALLAG, HAZARD, and SZAVLAG are integer variables; AVAL and CLOSE are factors. Descriptive statistics for these variables in the training data are given in Table 11.2. The training data consist of 2,822 observations.
Many of the variables were taken directly from the guard station data. Others were constructed. TOTSTK (total stake), INTSTK (interval stake), DENSITY, HAZARD (hazard rating), TMIN (minimum temperature), TMAX (maximum temperature), WSPD (wind speed), and STMSTK (storm stake) came directly from the guard station weather data, which are daily. TOTSTK60, SUMINT, WATER, SWARM, SETTLE, and CHTEMP were computed from the guard station weather data. NART, NAVALLAG, and SZAVLAG were constructed from the guard station avalanche data. These last three variables are not daily but event-specific, and needed conversion into daily data. SZAVLAG employs an interaction term taking the sum of the avalanches weighted by size.5
The test data consist of 471 observations. Descriptive statistics for the test data
are given in Table 11.3.
The data are surely not optimal. A relevant question is whether they are
informative for real-world decision making. The imprecision and redundancy of
the data channel our focus to the decision process itself.
11.4 The BART Model
BayesTree is a BART procedure written by Hugh Chipman, Ed George, and Rob
McCulloch. Their package, available in R, was employed here.6 This is well
documented elsewhere and only basic concepts and the relevance to the current
application are introduced here.7
BART is an ensemble method aggregating over a number of semi-independent
forecasts. Each forecast is a binary tree model partitioning the data into relatively
homogeneous subsets and making forecasts on the basis of the subset in which the
observation is contained. The concept of a binary tree is illustrated in Fig. 11.2a
and b. Figure 11.2a presents a simple tree which explains some vocabulary. All
trees start with a root node which contains all the observations in the data set. The
5. In computing SZAVLAG, the measure which we use is the American size measure, which is perhaps less appropriate than the Canadian size measure. However, a similar adjustment might be relevant.
6. Chipman et al. (2009).
7. See Chipman et al. (1998, 2010a, b).
Table 11.2 Descriptive statistics for the TRAINING data

Variable    Min.     1st Qu.  Median   Mean     3rd Qu.  Max.
AVAL        0.0      0.0      0.0      0.0361   0.0      1.0
CLOSE       0.0      0.0      0.0      0.1247   0.0      1.0
TOTSTK      0.0      33.46    63.78    61.44    90.16    159.1
TOTSTK60    0.0      25.0     102.0    104.5    169.0    344.0
INTSTK      0.0      0.0      0.0      6.076    8.00     84.0
SUMINT      0.0      0.0      7.75     15.10    24.0     122.75
DENSITY     0.0      0.0      0.0      4.694    8.333    250.0
RELDEN      0.0025   1.0      1.0      4.574    1.0      1,150.0
SWARM       0.0      52.0     68.5     67.40    86.0     152.0
SETTLE      -110.0   0.0      0.0      -0.6542  0.0769   43.0
WATER       0.0      0.0      0.0      5.836    7.0      90.0
CHTEMP      -42.0    -3.0     0.0      0.0138   3.0      40.0
TMIN        -12.0    10.0     19.0     18.14    26.0     54.0
TMAX        0.0      26.0     35.0     34.58    44.0     76.0
WSPD        0.0      12.0     18.0     18.05    24.0     53.0
STMSTK      0.0      0.0      0.0      7.577    1.0      174
NAVALLAG    0.0      0.0      0.0      0.0698   0.0      14.0
SZAVLAG     0.0      0.0      0.0      0.203    0.0      42.0
HAZARD      0.0      0.0      1.0      0.921    2.0      4.0
NART        0.0      0.0      0.0      0.2392   0.0      23.0
Table 11.3 Descriptive statistics for the TEST data

Variable    Min.      1st Qu.  Median   Mean     3rd Qu.  Max.
AVAL        0.0       0.0      0.0      0.04176  0.0      1.0
CLOSE       0.0       0.0      0.0      0.1810   0.0      1.0
TOTSTK      0.0       24.21    74.80    61.68    90.55    141.70
TOTSTK60    0.0       1.5      130.0    106.9    170.0    300.0
INTSTK      0.0       0.0      0.0      6.385    8.000    62.0
SUMINT      0.0       0.0      8.725    15.92    23.38    87.75
DENSITY     0.0       0.0      0.0      4.476    8.225    47.5
RELDEN      0.02105   1.0      1.0      4.073    1.0      266.7
SWARM       0.0       55.0     69.0     70.07    86.0     144.0
SETTLE      -90.0     0.0      0.0      -1.034   0.0      2.667
WATER       0.0       0.0      0.0      6.081    7.5      72.0
CHTEMP      -21.0     -4.0     0.0      0.00232  4.0      23.0
TMIN        -9.0      11.0     19.0     18.5     26.0     41.0
TMAX        0.0       27.0     35.0     35.69    44.0     72.0
WSPD        0.0       12.5     18.0     17.95    24.0     57.0
STMSTK      0.0       0.0      0.0      13.47    14.75    189.00
NAVALLAG    0.0       0.0      0.0      0.0951   0.0      8.0
SZAVLAG     0.0       0.0      0.0      0.2877   0.0      24.0
HAZARD      0.0       1.0      2.0      1.65     2.0      4.0
NART        0.0       0.0      0.0      0.4246   0.0      22.0
Fig. 11.2 a A simple tree, illustrating some vocabulary (splits on INTSTK and SWARM). b The corresponding partition of the data, shown as a scatter plot of SWARM against INTSTK
data set is bifurcated into two child nodes by means of a splitting rule, here INTSTK > 20. Observations with INTSTK > 20 are put into one child node; observations with INTSTK < 20 are put into the other child node. Subsequently, in this diagram, one of the child nodes is split further into two child nodes. This is based on the splitting rule SWARM > 50. This tree has 3 terminal nodes, illustrated with boxes, and two internal nodes, illustrated with ellipses. The number of terminal nodes is always one more than the number of internal nodes. The splitting rules are given beneath the internal nodes. This tree has depth 2; the splitting employs two variables, INTSTK and SWARM. This partitioning of the data according to the splitting rules given here is shown in a scatter plot in Fig. 11.2b. This scatter plot highlights the actual observations when an avalanche
crosses the road. Each observation is contained in one and only one terminal node.
A forecasting rule for this partition is given and the misclassification rate for each
node in Fig. 11.2b is illustrated.8
The basic BART model is

$y_i = \sum_{j=1}^{m} g(X_i \mid T_j, M_j) + u_i, \qquad u_i \sim N(0, \sigma^2),$

where i is the observation number (i = 1, …, n) and j indexes the trees (j = 1, …, m). Here, the variable y_i is the indicator variable AVAL, indicating whether an avalanche crosses the road. Each forecaster, j, in the ensemble makes forecasts according to his own tree, T_j, and model, M_j, where M_j defines the parameter values associated with the terminal nodes of T_j. It is a sum-of-trees model, aggregating the forecasts of the m forecasters in the ensemble, each forecaster being a weak learner.
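As a toy illustration (ours; the trees, thresholds, and leaf values below are invented for exposition, not taken from the fitted model), the sum-of-trees prediction for a single day is just the sum of each small tree's leaf value:

```python
# Each "forecaster" is a tiny tree g(x; T_j, M_j); the BART prediction is
# the sum of the leaf values the day's data fall into.
def g1(x):
    # Tree 1 splits on INTSTK (new snow in the last 24 hours) at 20.
    return 0.02 if x["INTSTK"] <= 20 else 0.10

def g2(x):
    # Tree 2 splits on SWARM, then on INTSTK.
    if x["SWARM"] <= 50:
        return 0.01
    return 0.03 if x["INTSTK"] <= 10 else 0.08

trees = [g1, g2]
day = {"INTSTK": 25, "SWARM": 80}
print(sum(g(day) for g in trees))  # 0.10 + 0.08 = 0.18 for this day
```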
This model seems particularly applicable to this situation. Recall the story of the four forecasters at Red Mountain Pass in Colorado. The forecasters had comparable performance. They each chose fewer than 10 variables out of the 31 available on which to base their forecasts. Only one of the chosen variables was common among the forecasters. Here, aggregation is over an exogenous number of forecasters, each with his own tree and his own selection of variables.
The trees for the m forecasters are generated independently. Each tree is generated, however, with a boosting algorithm conditional on the other m - 1 trees in a Gibbs sampling process; hence the term semi-independent. Given the m trees generated in any iteration, the residuals are known and a new σ2 distribution is based on these residuals. An inverse gamma distribution is used for σ2, and the parameter distributions in the next iteration employ the σ2 drawn from this distribution.
A Markov chain of trees is generated for each forecaster by means of a stochastic process. Given the existing tree, T^{j,k-1}, for forecaster j at iteration k-1, a proposal tree, T*, is generated. The generation of the proposal tree is a stochastic process done according to the following steps:
Determine the dependent variable, R_{jk}, or the "residuals" for Y conditional on the m-1 other trees,

$R_{jk} = Y - \sum_{l \neq j} g(X \mid T_l^{y}, M_l^{y}),$

where the superscript y = k-1 if l > j, and y = k if l < j.
A decision is made on whether the tree will be split, as defined by the probability $\alpha (1 + d)^{-b}$.9
8. This partition scores poorly but is only used to illustrate the concepts.
9. The default value for α, 0.95, is selected. This implies a high likelihood of a split at the root node, with a decreasing probability as the depth of the tree, d, increases. The default value of b is 2. However, b = 0.5 is used to obtain bushier trees (trees with more terminal nodes). The story used in the text had forecasters using fewer than 10 variables, but at least 3.
Given a decision to split, a decision is made on the type of split. The types of
splits and their associated probabilities are: GROW (0.25), PRUNE (0.25),
CHANGE (0.4), SWAP (0.1). These are described in Chipman et al. (1998). At the
root node, there is only one option, GROW. The option CHANGE is feasible only
if the tree has depth greater than or equal to two. For each type of split, there are a
finite number of choices. GROW will occur at terminal nodes. CHANGE occurs at
a pair of internal nodes, one the child of the other.
The next decision concerns the variable on which the split is made and the
splitting rule, again among a finite number of choices. The variables are equally
likely. The number of potential splits depends on the variable selected, but for each
variable, the potential splits are equally likely.
Given this proposal tree, a posterior distribution is determined for each terminal node based on a "regularization" prior designed to keep individual tree contributions small. Parameters are drawn from the posterior distribution for each terminal node.
The proposal tree is accepted or rejected by a Metropolis–Hastings algorithm, with the probability of accepting T* equal to

$a = \min\left\{ \frac{q(T^{j,k-1}, T^{*})\; p(Y \mid X, T^{*})\, p(T^{*})}{q(T^{*}, T^{j,k-1})\; p(Y \mid X, T^{j,k-1})\, p(T^{j,k-1})},\; 1 \right\},$

where q(T^{j,k-1}, T*) is the transition probability of going from T^{j,k-1} to T* and q(T*, T^{j,k-1}) is the transition probability of going from T* to T^{j,k-1}. The function q() and the probabilities p(T*) and p(T^{j,k-1}) are functions of the stochastic process generating the tree. The ratio $p(Y \mid X, T^{*}) / p(Y \mid X, T^{j,k-1})$ is a likelihood ratio reflecting the data X and Y, ensuring that the accept/reject decision is a function of the data.
Acceptance of a tree is dependent on there being a sufficient number of observations in each terminal node of T*.
If the tree is accepted, T^{jk} = T*; otherwise T^{jk} = T^{j,k-1}.
The Markov chain Monte Carlo (MCMC) is run for a large number of iterations to achieve convergence. The individual forecasters' trees are not identified. It is possible that trees may be replicated among forecasters in different iterations. The objective here is not parameter estimation but forecasting.
11.5 Results of the BART Application
11.5.1 Break-in Period
The MCMC is known to converge to a limiting distribution under appropriate conditions, and a number of iterations are discarded to ensure the process has settled down to this distribution. It is not established, however, when this convergence is reached. The MCMC history of forecasts for a number of dates in the training data is therefore examined in Fig. 11.3a–d. In these figures, a break-in period of 5,000 iterations was used, with 50 trees or forecasters. Each point in the history is the aggregation of the 50 forecasters for that iteration. A number of days were selected to see how the process does in differing conditions. Although there is
Fig. 11.3 The MCMC history of forecasts for a number of dates in the training data
variation among the iterations, the forecasts ‘‘mix’’ well in that there is no functional trend among the iterative forecasts. The MCMC standard error varies among
the dates selected but is relatively uniform within each date.
11.5.2 Splitting Rules

Before discussing the performance of the forecasting model, some of the choices concerning the BART process were examined. First, the number of trees (or, as they are called above, the number of forecasters) is specified. For comparison purposes, 50, 100, and 200 trees were used. Also, the parameters of the splitting rule for the tree-generating process, $P(\mathrm{split}) = \alpha (1 + d)^{-b}$, had to be specified. The default value α = 0.95 was selected. This implies a high likelihood of a split at the root node, with a decreasing probability as the depth of the tree, d, increases.
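A quick check of the node-splitting prior in isolation (a sketch of ours; the posterior tree sizes in Table 11.4 also reflect the data, so the prior mean need not match the table exactly) shows how smaller b yields bushier trees:

```python
# Grow trees under the prior alone: a node at depth d splits with
# probability alpha * (1 + d) ** (-b).
import numpy as np

rng = np.random.default_rng(3)

def grow(depth=0, alpha=0.95, b=0.5):
    """Recursively grow one tree; return its number of terminal nodes."""
    if rng.random() < alpha * (1.0 + depth) ** (-b):
        return grow(depth + 1, alpha, b) + grow(depth + 1, alpha, b)
    return 1

for b in (0.5, 2.0):
    sizes = [grow(b=b) for _ in range(5000)]
    print(b, np.mean(sizes))  # average terminal nodes under the prior alone
```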
Table 11.4 Average tree size per tree and iteration, given n tree = number of trees and split parameter b (Power = b)

n tree   b=0.5  b=0.6  b=0.7  b=0.8  b=0.9  b=1    b=1.2  b=1.5  b=2
50       3.421  3.162  3.118  2.974  2.872  2.848  2.697  2.500  2.338
100      3.283  3.146  3.061  2.961  2.904  2.822  2.673  2.499  2.315
200      3.252  3.196  3.056  2.967  2.881  2.797  2.657  2.500  2.317
Fig. 11.4 Distribution of tree sizes in the final iteration: fraction of forecasters by tree size (number of terminal nodes, 1–11), for 50, 100, and 200 trees
The parameter b relates to the bushiness of the tree. First, the average number of terminal nodes per iteration and per tree for the first 3,000 iterations after the break-in period was examined. These averages are given in Table 11.4.
The choice of 50 trees and b = 0.5 yields an average of 3.4 terminal nodes, consistent with the Perla story on Red Mountain Pass. While Table 11.4 describes the average tree size, there is substantial variation among the forecasters in any single iteration. The frequency distribution of tree sizes among forecasters within the last iteration is pictured in Fig. 11.4. While tree size may vary substantially for any specific forecaster across iterations, the last iteration should be representative of post-break-in iterations. This distribution is consistent with each forecaster in the story making his decision based on fewer than 10 variables.
11.5.3 Variable Choice

In the Red Mountain Pass example, the four forecasters had only one variable in common, in spite of the fact that their forecasts were comparably accurate. An interesting comparison is therefore the variable choice among the forecasters. This is illustrated in Fig. 11.5, a box-and-whisker plot of variable use among 50 forecasters in 3,000 post-burn-in iterations. The vertical axis gives the number of forecasters using each variable. A value of 50 would indicate a variable used by every forecaster. No such variable exists. All variables on average
Fig. 11.5 Variable choice for 50 forecasters
were used by at least five forecasters. This conforms again with our comments on the redundancy of the variables and the Red Mountain Pass story.
The most commonly used variable was NART, the number of artificial explosives used, and the least commonly used variable was HAZARD, the hazard rating of the forecasters. It may be noted that the decision to use artificial explosives more accurately reflects the forecasters' evaluation of avalanche hazard than the hazard rating itself.
SWARM, the presence of a warm period, and CHTEMP, the change in temperature, are also prominent variables, as is SZAVLAG, the recent occurrence of many large avalanches. There are numerous indicators of snow depth and storm size for forecasters to choose among. There is redundancy between TOTSTK and TOTSTK60, relating to the depth of the snowpack. Similarly, redundancy exists among INTSTK, SUMINT, and STMSTK, measures of storm activity; among DENSITY, WATER, and RELDEN; among WSPD and SETTLE; and among the temperature variables TMIN, TMAX, CHTEMP, and SWARM. All are selected by some forecasters with similar frequencies, but none dominates. Although Fig. 11.5 illustrates variable choice for 50 forecasters, similar results were obtained for 100 and 200 forecasters.
11.6 Realized Cost of Misclassification

Before turning to forecast performance in the test period, Fig. 11.3a–d illustrate some relevant concepts. These figures illustrate the history of post-break-in iterations on particular dates, shown as a jagged black line. The actual event that occurred is shown by a dotted line at zero or one; the road closure is given by a dashed line, again at zero or one. The forecast for each date is the average of the iterative values, shown by a dot-dash line.

Table 11.5 Root mean square error for test period

Linear   Logit   BART 50   BART 100   BART 200   Guard station
0.165    0.162   0.165     0.161      0.163      0.397
On 13 February 1995, the model predicted a low probability of an avalanche
crossing the road; this was correct, but the road was closed. On 1 January 1998, the
model predicted a moderate probability of an avalanche crossing the road; the road
was closed, but again there was no avalanche. On 27 December 2003, the model
predicted a low probability of an avalanche crossing the road; the road was not
closed, and there was no avalanche. On 28 January 2008, the model predicted a
high probability of an avalanche crossing the road; the road was not closed, but
there was an avalanche.
We now turn to the forecast performance of the BART model in the test period. A common measure of forecast performance is root mean squared error (RMSE). The RMSE values for the avalanche forecasting models are shown in Table 11.5.
The BART model with 100 forecasters wins on this criterion. However, as
noted earlier, all forecasting errors are not equivalent. This issue needs to be
addressed in evaluating the forecasts.
If one assumes that the forecasters act to minimize the expected losses associated with their road closure decision, the asymmetric loss function is:

$\mathrm{Loss} = k\,p + q.$

In this loss function, p represents the fraction of the time that an avalanche crosses the road while it is open, and q represents the fraction of the time that an avalanche does not cross the road while it is closed. The term k is a scale factor that represents the cost of failing to close the road when an avalanche occurs relative to the cost of closing the road when an avalanche does not occur. Both p and q are observable, while k is not. The decision rule to minimize expected loss implies an implicit cutoff probability, k* = 1/(1 + k), such that the road should be closed for probabilities greater than k* and kept open for lower probabilities. Blattenberger and Fowles (1994, 1995) found a value of k = 8 to be consistent with the historical performance of the avalanche forecasters and in line with revenue losses to the resorts relative to loss-of-life estimates.10
10. Details are in Blattenberger and Fowles (1994, 1995). UDOT data indicate that, on average, there are 2.6 persons per vehicle, 2.5 of whom are skiers. Of these skiers, 40 % are residents who spend an average of $19 per day at the ski resorts (1991 dollars); 60 % tended to be nonresidents, who spent an average of $152 per day (1991). A road closure results in a revenue loss (in 2005) of over $2.25 million per day, based on an average traffic volume of 5,710 cars during the ski season.
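The decision rule and loss are easy to compute once forecast probabilities are in hand; a small sketch (ours, with invented toy probabilities for illustration):

```python
# Realized cost of misclassification: close the road when the forecast
# probability exceeds the cutoff k* = 1/(1 + k); the loss is k*p + q.
import numpy as np

def rcm(prob, aval, k=8.0, cutoff=None):
    cutoff = 1.0 / (1.0 + k) if cutoff is None else cutoff
    close = prob > cutoff
    p = np.mean((aval == 1) & ~close)  # avalanche crosses while the road is open
    q = np.mean((aval == 0) & close)   # road closed although no avalanche occurs
    return k * p + q

prob = np.array([0.02, 0.30, 0.08, 0.55, 0.01])
aval = np.array([0, 1, 0, 1, 0])
for c in (0.1, 1 / 9, 0.3, 0.6):       # sweep cutoffs as in Fig. 11.6
    print(round(c, 3), rcm(prob, aval, cutoff=c))
```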
Fig. 11.6 RCMs for linear, logit, and BART predictions (from a 50-tree model)
To evaluate BART model performance, the RCM or loss was examined. It is
calculated as a function of the cutoff probability. Figure 11.6a compares RCMs for
linear, logit, and BART predictions (from a 50 tree model). The experts’ performance over the testing period as a horizontal line at 0.22 was also plotted. BART
performance is nearly uniformly lower than other models for cutoff probabilities
from 0.1 to 0.6. Figure 11.6b adds BART models with 100 and 200 forecasters for
comparison purposes.
All of the BART models outperform the logit and the linear models. They also
outperform the guard station decisions, although the guard station decisions are
immediate and are subject to certain legal constraints.11
11.7 Conclusion
This paper illustrates the advantage of using BART in a real-world decision-making context. By summing over many models, each contributing a small amount
of information to the prediction problem, BART achieves high out-of-sample
performance as measured by a realistic cost of misclassification. The philosophy
behind BART is to deal with a complicated issue—analogous to sculpting a
complex figure—by ‘‘adding and subtracting small dabs of clay’’ (Chipman et al.
2010a, b). This method seems well suited to the problem of avalanche prediction
where individual professional forecasters develop an intuitive approach and cannot
rely on a single analytic model.
11 The road must be closed while artificial explosives are used.
References
Abromeit D (2004) United States military for avalanche control program: a short history in time. In: Proceedings of the international symposium on snow monitoring and avalanches
Blattenberger G, Fowles R (1994) Road closure: combining data and expert opinion. In: Gatsonis et al. (eds.) Case studies in Bayesian statistics. Springer, New York
Blattenberger G, Fowles R (1995) Road closure to mitigate avalanche danger: a case study for Little Cottonwood Canyon. Int J Forecast 11:159–174
Bowles D, Sandahl B (1988) Avalanche hazard index for highway 210—Little Cottonwood Canyon mile 5.4–13.1. Mimeographed
Chipman H, George EI, McCulloch RE (1998) Bayesian CART model search. J Am Stat Assoc 93(443):935–948
Chipman H, George EI, McCulloch RE (2009) BART: Bayesian additive regression trees. http://CRAN.R-project.org/package=BayesTree
Chipman H, George EI, McCulloch RE (2010a) BART: Bayesian additive regression trees. Ann Appl Stat 4:266–298
Chipman H, George EI, Lemp J, McCulloch RE (2010b) Bayesian flexible modeling of trip durations. Transp Res Part B 44:686–698
LaChapelle ER (1980) Fundamental processes in conventional avalanche forecasting. J Glaciol 26:75–84
Perla R (1970) On contributory factors in avalanche hazard evaluation. Can Geotech J 7:414–419
Perla R (1991) Five problems in avalanche research. In: CSSA symposium
Part IV
Evidence-Based Policy Applications
Chapter 12
Universal Rural Broadband: Economics
and Policy
Bruce Egan
12.1 Introduction
The primary reason that current government policy is failing to achieve universal
rural broadband is rooted in its own past statutory and regulatory policy. The two
biggest roadblocks are the rural waiver provided for in The Telecommunications
Act of 1996 (hereafter cited as the 96 Act or Act) and the disastrous policy for
(mis)allocating radio spectrum. Unless and until substantial reform or repeal of these policies occurs, the achievement of universal (affordable) rural broadband will be elusive. The most recent government initiatives, including massive ‘‘stimulus’’ spending, will not result in a marked difference over what could be achieved without them; what begins as a perfectly reasonable proposition for stimulating rural broadband ends up as a struggle among entrenched special interests over pots of taxpayer money. The result is that a small amount of network
infrastructure investment and construction jobs will be added, but the industry
segments with the most growth will be lobbyists and bureaucrats. Fixing the
problem is straightforward: eliminate entry barriers from rural waivers, reduce
tariffs for network interconnection, target direct subsidies in a technologically
neutral fashion, and reform spectrum regulations.
12.2 The Political Economy of Rural Broadband: A Case
of Regulatory Schizophrenia
Nearly everyone agrees that universal rural broadband service is a laudable objective; for years it has been the stated objective of government. Substantial taxpayer funds are allocated to rural broadband projects, but progress is woefully inadequate; in large part
due to policies that ignore economic costs and benefits, a variety of institutional factors, and a lack of leadership. Government press releases are full of rhetorical grand pronouncements about policies promoting investment in rural broadband networks.1 In fact, real-world actions belie the vision. Politics trumps policy: this is the essence of regulatory schizophrenia. If the government were really serious about promoting investment in rural broadband, its first priority would be to get rid of long-standing institutional barriers to entry, including inefficient rural telco subsidy mechanisms, rural competition waivers, high interconnection costs, and radio spectrum restrictions. Leadership and statesmanship are required to overcome regulatory roadblocks.
This chapter discusses the efficacy of public investments and compares that with what might be achieved if such investments were coupled with rational policies promoting economic welfare. Wyoming, the highest-cost state in the lower 48, serves as a case study to illustrate the vast differences between what is achievable and actual (likely) results. Numerous treatises have been written about the economic benefits of broadband and the problems and opportunities posed by extending broadband into rural areas.2 Unfortunately, a lot of the ‘‘research’’ is the product of a public or private entity that is biased in one way or another by institutional constraints placed on it by those funding or otherwise responsible for the work. This chapter highlights deficiencies in the research record, condenses myriad issues to promote a fundamental understanding of institutional problems and the perverse incentives created, and makes recommendations to fix them.
There are two critical ingredients of a successful program to achieve universal rural broadband: a high level of investment in network infrastructure and low prices for consumer equipment and usage. Various government initiatives have been passed to promote investment, provide public subsidies, and reduce usage prices for both consumers and service providers. In fact, if one could simply rely on the glowing rhetoric of government initiatives, there would be no doubt that the United States was rapidly moving in the right direction. As usual, the devil is in the details, and this is where the breakdown occurs; invariably a host of regulatory roadblocks pop up to delay or undermine the objective.3 The bottom-line result is that little actually gets done.
If skeptical of this assertion, consider the evidence. Examples of regulatory schizophrenia abound in everything from competition policy to radio spectrum policy to pricing network usage; the result is higher costs, repressed demand, and an attendant reduction in consumer welfare. The details of failed policies in the
1 For an update on broadband regulatory initiatives, see Federal Communications Commission (FCC) (2011a).
2 For example, see the article and references in Dickes et al., 4th Qtr. (2010); for a comprehensive global perspective including research references, see: International Telecommunication Union (2011).
3 Dickes et al. put it succinctly: ‘‘Unfortunately, the status quo system of broadband providers is unlikely to offer service to the most rural communities or enhance existing service in already underserved rural areas. Current suppliers operate under a complex array of government regulations, subsidies, and market protection, which provide little incentive for these firms to alter the status quo structure.’’
following sections lay bare the hollow promises of policymakers’ claims of large
increases in jobs or subscribers.
12.3 The Infamous Rural Waiver
The 96 Act called for sweeping reforms to promote competition to traditional
telephone companies, even in rural areas. Section 251 of the Act imposed a host of
obligations on incumbent telcos to accommodate competitive entry and nondiscriminatory interconnection. But the government was already subsidizing smaller
rural telephone monopolies for their existence, and it was deemed necessary to
grant them a rural waiver lest they be subjected to ‘‘harmful’’ competition.
Section 252 of the Act provided for a grievance resolution process whereby entrants could challenge a rural telco's decision not to comply with the requirements of Section 251. But the process was rather nebulous, causing confusion and drawn-out litigation, thereby perpetuating the barrier to entry. Rural telco subsidies and the rural waiver provisions endure to this day.
On May 25, 2011, the Federal Communications Commission (FCC 2011b)
issued a declaratory ruling with a goal of limiting the use of the rural waiver. This
ruling is an important first step to clean up the hodge-podge of state-by-state
rulings regarding rural waivers: ‘‘Thus, we believe that a uniform, national policy
concerning the scope of the rural exemption is necessary to promote local competition, prevent conflicting interpretations of carriers’ statutory obligations under
the Act, and eliminate a potential barrier to broadband investment.’’
It is about time; this issue has been on the table and in need of a solution for
many years. Had this ruling been made years before, it would surely have made the
massive subsidies in the 2009–2010 stimulus spending more effective.
Rural telcos have every incentive to preserve the rural waiver and they have the
political clout to do it. In order to get rid of rural waivers and, in turn, costly and
time-consuming litigation, lawmakers must practice some statesmanship and take
on the task of crafting a new efficient subsidy mechanism.
Rural telcos are not only concerned about maintaining their federal subsidy; they are also acutely aware that a broadband supplier entering their market will
potentially cause customers to cancel phone service altogether. Once a subscriber
purchases broadband access, it is easy to make phone calls over the broadband
connection using voice over internet protocol (VoIP). The latest VoIP technology
is a near perfect substitute for traditional phone service and is much cheaper. Most
VoIP plans offer unlimited local and long-distance calling for a small flat-rate
monthly charge, much lower than rural phone bills with high call charges. It is
natural that incumbent rural telcos would try hard to keep broadband providers out
of their market because they risk losing major revenue streams and losing the
‘‘stranded’’ assets associated with subscriber phone lines.
Still, rural waivers and federal and state tariff rules allow small rural telcos to
charge outrageously high usage charges for long-distance calls and for
interconnecting carriers’ originating and terminating traffic. It is straightforward to
fix this: generate the entire required subsidy via a competitively neutral and technologically neutral revenue surcharge. Basic economics dictates that if one must have a subsidy, it should be implemented in a way that does the least amount of damage
to public welfare. That means one needs a sustainable and nondiscriminatory subsidy
mechanism that generates a minimum but sufficient level of funds for rural telcos to
provide basic service. Once this is achieved, then regulators can mandate low (or
zero) tariffs for network interconnection allowing call prices to fall dramatically.
Many years ago, the FCC's policy analysis recommended switching to an
economically efficient subsidy system that did not collect money from usage
charges or carrier interconnection charges; this transition started way back in
1984.4 But progress has been painfully slow; subsidies continue to be collected via
usage-based charges and continue to hinder rural broadband deployment.
There is an economically efficient option called a revenue surcharge. This is a much simpler and more efficient subsidy regime. Rather than charging different tariff rates for access lines or usage for different types of network providers, this alternative would place a competitively neutral flat-rate surcharge on the revenue of all service providers, regardless of whether they are regulated and no matter what technology they use. It is easier to force reporting of revenues than it is to measure network lines and usage, both of which are subject to misreporting, cheating, and various arbitrage schemes carriers use to game the current system.
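As a rough illustration of the mechanics, the sketch below computes a single flat surcharge rate and applies it uniformly across providers; the fund target and provider revenues are invented for the example and are not figures from this chapter:

# Hypothetical illustration of a competitively neutral revenue surcharge;
# all dollar figures are invented for the example.

fund_target = 2.0e9  # assumed annual rural subsidy requirement ($)
provider_revenue = {
    "wireline telco": 90e9,
    "wireless carrier": 160e9,
    "cable VoIP": 30e9,
    "satellite": 5e9,
}

total_revenue = sum(provider_revenue.values())
rate = fund_target / total_revenue  # one flat rate, regardless of technology or regulatory status

for name, revenue in provider_revenue.items():
    print(f"{name}: pays ${revenue * rate:,.0f} at a {rate:.2%} surcharge")

Because the rate applies to reported revenue rather than to lines or minutes, no carrier gains by relabeling traffic or shifting usage across categories.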
This problem is intractable, especially for small rural telcos with limited resources; it is better to scrap the flawed system than to spend the time and effort trying to mitigate it. It is expensive and exceedingly difficult for a small telco
to examine and verify all traffic that originates or terminates on its network in
order to accurately bill for it. But something has to be done because it involves a
huge amount of money and traffic.5 For years, the FCC has struggled with these
problems and has yet to find a way to stop it. A solution to this problem is not in
the cards in an environment where carriers self-report their usage. Even though it
is obviously the right thing to do, it will take backbone and statesmanship for
lawmakers to challenge the status quo and propose a new revenue surcharge
regime to replace the current convoluted system of usage charges.
12.4 Radio Spectrum Policy
It is easy to see how reforming radio spectrum policy could go a long way toward
achieving universal broadband. First, due to topography in many rural areas,
broadband access via radio spectrum is the best technology choice. Second, since
4 For some history of the transition of subsidies from usage-based charges to fixed monthly
charges, see Congressional Research Service (2002).
5 For example, one small telco reports that about 20 % of traffic is unidentifiable. Private
communication from Ron McCue, President/C.O.O., Silver Star Communications.
there is little to no demand for radio spectrum in remote, sparsely populated areas, there is no threat of the harmful radio interference proscribed by the FCC.
So, what is the holdup? Government rules. Ever since Congress passed laws
requiring the FCC to auction licensed spectrum, it has continued to manage radio
spectrum with an eye toward maximizing auction revenues rather than consumer
welfare. This is another clear case of regulatory schizophrenia; righteous policy
rhetoric about doing the right thing is undermined by real-world institutional
roadblocks. Like most government policies, spectrum policy is greatly influenced
by entrenched special interests and their lobbies. There are industry fights over the
use of public airwaves for serving lucrative urban markets and little industry
support for rural initiatives. Again, the government must practice some statesmanship, take the lead, and make necessary reforms.
It is an understatement to say that progress has been slow. Almost a decade ago, the FCC (2002) produced a Spectrum Policy Report proposing a number of reasonable recommendations for spectrum reform in rural areas; yet, to this
day, no significant progress has been made. If anything, the situation has actually
gotten worse.
For example, in order to provision a broadband access facility to serve a rural enclave, it is usually cost-effective to construct a single antenna with 360° coverage using licensed radio spectrum. This type of FCC license, called a P35, used to exist for ‘‘wireless’’ cable TV service and permitted a radio coverage area of up to a 35-mile radius. However, the FCC eliminated this type of license and instead folded these small-area licenses into the much larger geographic area licenses typical of those used by large players (e.g., AT&T, Verizon) serving dense urban markets.
Large players value licenses that cover large geographic areas and the FCC
accommodates them by expanding the license coverage areas. While it is fine for
the FCC to design auction licenses to maximize auction revenue, the unintended
consequences for rural areas are a disaster. The auction winners end up with
exclusive rights to serve vast geographic areas, but naturally they only want to
build networks where it is profitable to do so. People who reside in those vast rural
areas are the losers and, once the government auctions are over, it is difficult to
undo the damage.
Large players view it as a hassle to share their spectrum with others that may
want to serve rural enclaves within the license area. Any fees the auction winners
might receive from their ‘‘partner’’ would not justify the risk and potential legal
liability. Besides, larger players believe that, some day, a new technology might
come along that would allow for expanding their own coverage into rural areas. In
any event, the historical record is clear: when it comes to giving up exclusive rights to spectrum use, incumbents will fight tooth and nail to keep what they have
forever. Just the thought of setting a precedent for voluntarily sharing spectrum is
scary for incumbents. Lawmakers and the FCC know full well the powerful
industry resistance to spectrum sharing and should have set auction rules that
carved out licenses to promote rural network investment. It will not be easy to
remedy this situation, and true statesmanship will be required to overcome the
wrath of license winners or others who were previously endowed with exclusive
spectrum rights.
The FCC has noted the problem that large players do not have sufficient interest
or incentive to extend wireless access into small rural enclaves and has tried in
vain to mitigate the adverse effect of its spectrum policy. The FCC has studied the
problem and initiated policies for advocating the leasing of rural spectrum rights
and proposed some flexibility in rules governing maximum power for transmitters
in rural areas and relaxed interference parameters. None of these has had any
significant impact, so regulators have to step up and deal directly with the lack of
cheap licensed spectrum by allocating more to rural use or forcing sharing of
licensed spectrum.
Two policy reforms are called for. First, the FCC needs to revive the small area
antenna license. Second, idle spectrum must be made available for use in rural
areas. There is a physical abundance of unused radio spectrum in rural areas and it
makes sense to put it to good use. For example, there are no TV broadcasts in
many rural areas, so the FCC should allow rural broadband providers to use that
spectrum.
12.5 Public Subsidies
For some time, regulators and other government agencies have tried to increase
investment in rural telecom infrastructure via direct and indirect subsidies. Most
recently, as part of the American Recovery and Reinvestment Act of 2009 (ARRA
2009)—known as the stimulus package—Congress required the FCC to deliver a
National Broadband Plan. The National Broadband Plan (NBP) was released on
March 17, 2010 (FCC 2010). The NBP set a broadband availability goal that:
‘‘every American should have affordable access to robust broadband service, and
the means and skills to subscribe if they so choose,’’ and cited a ‘‘broadband
availability gap’’ of seven million housing units that do not have access to terrestrial broadband infrastructure capable of download speeds of at least 4 Mbps.
The FCC estimated that $24 billion in additional funding would be necessary to fill
the gap. This implies a subsidy of $3,429 per household.
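The per-household figure follows directly from the two numbers the FCC reported; a quick check using only the figures quoted above:

# Checking the implied per-household subsidy from the NBP figures.
gap_funding = 24e9          # FCC estimate of additional funding needed ($)
unserved_households = 7e6   # housing units lacking 4 Mbps terrestrial broadband
print(f"${gap_funding / unserved_households:,.0f} per household")  # prints $3,429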
Historically, the primary source of rural subsidies was the universal service
fund (USF) and its various targeted programs. While not explicitly designed to
subsidize rural broadband, funds provided by the USF high-cost program, low-income program, schools and libraries program, and rural health care program
included network investments that support broadband.6
6 The Universal Service Administrative Company (USAC) administers the FCC's USF Program.
For a basic tutorial on USF programs, see USAC and the USF: An Overview, March 2011, http://
www.usac.org/_res/documents/hc/pdf/training-2011/USAC-and-High-Cost-Overview-March2011.pdf.
The FCC recently announced major reforms to its USF to redirect subsidies from promoting plain old telephone service (POTS) to broadband access.
The new Connect America Fund (CAF) was announced on November 18, 2011:
‘‘The CAF—with an annual budget set at no more than $4.5 billion, the same as
the current universal service funding level—is expected to help connect 7 million
Americans to high-speed internet and voice in rural America over the next 6 years,
generating approximately 500,000 jobs and $50 billion in economic growth over
this period. Main Street businesses across the country will benefit from the
opportunity to sell to new customers throughout the US.’’7
The FCC's CAF Order does not provide any detailed technical analysis to
validate its forecast of new broadband subscriptions in rural areas or additional
jobs it creates. Whether or not the FCC’s overall forecasts are correct, its plan to
transition from USF to CAF represents a courageous attempt to implement major
reforms in numerous areas of competition policy and probably qualifies as the
most ambitious undertaking, the Commission has ever attempted. There are lots of
refreshing statesmanlike pronouncements in the Order and it will take real
statesmanship to see it through the political and judicial challenges it faces. One
can only hope that the FCC’s wide-ranging proposals in the CAF Order and Notice
actually come to pass, the sooner the better.
Do not hold your breath; the various transition mechanisms that have to be
developed and implemented are complex and will only occur slowly over time, if
at all. As usual, the FCC tempered its CAF plan reforms in the case of politically
powerful rural telcos and there remains some wiggle room for small telcos that
currently qualify for rural waivers to continue to delay implementation.8 But
apparently that is not enough; in an attempt to block proposed reforms, the small
telco industry lobbies have already filed a petition to reconsider the CAF Order.9
Considering everything the FCC is attempting to do in the transition to CAF, it
is worth taking a step back and imagining a simpler, less onerous, and much less
costly regulatory framework. This is akin to simplifying the tax code, if policymakers can take the political heat. The FCC rules are so complex that it is a full-
7 FCC 11–161, Report and Order and Further Notice of Proposed Rulemaking, Adopted:
October 27, 2011 Released: November 18, 2011. Full text is available at: http://www.fcc.gov/
document/fcc-releases-connect-america-fund-order-reforms-usficc-broadband.
8 ibid. In particular, the FCC declined to review or make any changes in state-by-state carrier of
last resort (COLR) rules, and the amount of subsidy from the old USF fund to small rate-of-return
telcos will remain the same at $2B per year, through 2017. Also, see p. 42 ‘‘Waiver. As a
safeguard to protect consumers, we provide for an explicit waiver mechanism under which a
carrier can seek relief from some or all of our reforms if the carrier can demonstrate that the
reduction in existing high-cost support would put consumers at risk of losing voice service, with
no alternative terrestrial providers available to provide voice telephony.’’
9 The petition urges the FCC to reconsider key aspects of the CAF Order: sufficiency of budget
for high-cost universal service, capping mechanisms, and waiver standards. See Petition For
Reconsideration and Clarification of The National Exchange Carrier Association, Inc.;
Organization for The Promotion and Advancement of Small Telecommunications Companies;
and Western Telecommunications Alliance. December 29, 2011.
time job just to understand a fraction of them. Fundamentally, the CAF plan is to
get rid of barriers to entry and subsidize broadband where it is otherwise
unprofitable to provide—sounds simple enough. But, if the transition requires that
you have to do that within the institutional and political constraints of all the past
layers of incremental regulatory programs, you end up torpedoing the objective. It
is better to start with a zero-based budgeting approach, or perhaps to terminate and grandfather all the old programs, phasing them out while new, simpler rules guide the future.10
Other government subsidy programs are administered by the United States
Department of Agriculture (USDA) Rural Utilities Service (RUS). For many years,
it offered limited grants and preferential loan terms to rural telephone companies to
expand and improve network facilities for POTS; since 1994, it required funded
projects to be ‘‘broadband capable.’’ Since 2002, the RUS Rural Broadband
Access Loan and Loan Guarantee Program directed funds to its client companies
to expand broadband networks. All of this pales in comparison, however, to the
stimulus package funding.
12.6 The Stimulus Package
ARRA allocated over $7 billion in subsidies to stimulate rural broadband coverage. The purpose of the program was to ‘‘provide access to broadband service to
consumers residing in unserved areas of the United States’’ and to ‘‘provide
improved access to broadband service to consumers residing in underserved areas
of the United States.’’ The funds are awarded for project proposals in two separate
but related programs: the National Telecommunications and Information Administration
10 To drive the point home for those who do not work on this full time, the following passage
from the Executive Summary of the CAF plan document (p. 12 of 751!) should suffice.
27. Alongside these broadband service rules, we adopt reforms to: (1) establish a framework to
limit reimbursements for excessive capital and operating expenses, which will be implemented no
later than July 1, 2012, after an additional opportunity for public comment; (2) encourage
efficiencies by extending existing corporate operations expense limits to the existing high-cost
loop support and interstate common line support mechanisms, effective January 1, 2012; (3)
ensure fairness by reducing high-cost loop support for carriers that maintain artificially low end-user voice rates, with a three-step phase-in beginning July 1, 2012; (4) phase out the safety net
additive component of high-cost loop support over time; (5) address Local Switching Support as
part of comprehensive ICC reform; (6) phase out over three years support in study areas that
overlap completely with an unsubsidized facilities-based terrestrial competitor that provides
voice and fixed broadband service, beginning July 1, 2012; and (7) cap per-line support at $250
per month, with a gradual phasedown to that cap over a three-year period commencing July 1,
2012. In the FNPRM, we seek comment on establishing a long-term broadband-focused CAF
mechanism for rate-of-return carriers, and relatedly seek comment on reducing the interstate rate-of-return from its current level of 11.25 %. We expect rate-of-return carriers will receive
approximately $2 billion per year in total high-cost universal service support under our budget
through 2017.
(NTIA) Broadband Technology Opportunities Program (BTOP)11 and RUS
Broadband Initiatives Program (BIP).12 Stimulus spending was supposed to be for
so-called ‘‘shovel ready’’ projects and funded projects must be substantially (i.e.,
67 %) complete within 2 years and fully complete in 3 years. As of April 2011,
only 5 % of funds were spent, so it is not likely that all project deadlines will be
met.13
Congressional oversight and monitoring were included in ARRA. Specifically,
NTIA/BTOP must maintain a Web site and make detailed project descriptions
available in a transparent manner. NTIA/BTOP is required to provide quarterly
reports on the status of its funding program.14 In contrast, ARRA did not provide
any mandates for transparency for the RUS/BIP and it shows; there is virtually no
project status detail provided on its Web site. However, RUS/BIP also submits
quarterly reports to Congress with summary application data.15
The Congressional Research Service (CRS) provides some oversight support to
Congress and on April 19, 2011, published a status report (Kruger 2011).
According to the report, ‘‘as of October 1, 2010, all BTOP and BIP awards were
announced. In total, NTIA and RUS announced awards for 553 projects, constituting $7.465 billion in federal funding. This included 233 BTOP projects (totaling
$3.936 billion) and 320 BIP projects (totaling $3.529 billion).’’
12.7 Efficacy of ARRA
It remains to be seen if ARRA broadband subsidies result in a substantial boost in
short-term economic activity, especially considering that the money is being spent
rather slowly and before any fundamental institutional reforms have been made to
mitigate the aforementioned regulatory roadblocks. The CRS report states that the
primary issue for Congress is ‘‘to ensure that the money is being spent wisely and
will most effectively provide broadband service to areas of the nation that need it
most, while at the same time, minimizing any unwarranted disruption to private
11 For details, see http://www2.ntia.doc.gov/.
12 For details, see http://www.rurdev.usda.gov/utp_bip.html.
13 Kruger (2011), p. 9.
14 See the latest quarterly report: ‘‘Broadband Technology Opportunities Program (BTOP)
Quarterly Program Status Report,’’ submitted to the Committee on Appropriations United States
Senate, the Committee on Appropriations United States House of Representatives, the Committee
on Commerce, Science and Transportation United States Senate, and the Committee on Energy
and Commerce United States House of Representatives, December 2011, National Telecommunications and Information Administration, US Department of Commerce.
15 See the last report available on the RUS/BIP Web site: ‘‘Broadband Initiatives Program
Quarterly Program Status Report,’’ submitted to The Committee on Appropriations United States
Senate and The Committee on Appropriations US House of Representatives December 27, 2010,
US Department of Agriculture Rural Development Rural Utilities Service. See http://
www.rurdev.usda.gov/supportdocuments/BIPQuarterlyReport_12-10.pdf.
sector broadband deployment.’’ The concern is that public subsidies not be used to
replace or compete against market investments made by existing broadband network companies. There is some research supporting such a claim.16
Recognizing the concern, NTIA and RUS implemented a public notice response
(PNR) procedure whereby existing companies have 30 days to inform the agency
that a certain funding application includes an area already covered and therefore
does not meet the goals of the program. Needless to say, the majority of applications
triggered a PNR from an incumbent operator, putting the agency in the position of
making a judgment one way or another. This is a sticky wicket indeed, and there is
no easy solution; the incentives on both sides of the issue are to maximize profit
and/or subsidies, neither of which is consistent with the overriding public policy
objective to provide consumers with broadband at the least cost of supply and the
lowest price for usage. This has resulted in a struggle among major players to
protect or promote their own turf; many jobs were created for bureaucrats to
administer the subsidies and prevent fraud, and for lobbyists fighting over pots of
taxpayer money.
Once the $7-billion-plus stimulus package and National Broadband Plan were announced, Web sites began popping up advocating the policy positions of special interest groups. With names like ‘‘Save Rural Broadband’’ and ‘‘Broadband for America,’’ it all sounds like ‘‘mom and apple pie.’’ Sometimes, it is not obvious
exactly who founded any particular organization or who is funding it. Certain other
Web sites represent organizations (e.g., LinkAmerica Alliance) that receive subsidies on behalf of public and public/private entities but proclaim their independence from any particular group. There is usually no identification of funding by
any corporate sponsor(s) provided on the Web site. Such is the nature of lobbying,
but one has to be wary of accepting policy analysis and recommendations from
anonymous sources.
12.8 Measuring Success
The primary measures of success are jobs created and increased rural broadband
subscriptions. But nowhere in ARRA progress reports submitted to Congress is
there any serious attempt to quantify increased broadband subscriptions and
associated jobs created. There is scant evidence of increased jobs or subscriptions
that can be attributed to the stimulus program over what would have occurred
without the program. Analytically, this is not an easy task, but it is nevertheless essential, as noted in the CRS report: ‘‘Evaluating the overall performance and impact of broadband programs is complex. Not only must the validity of the agency estimates be assessed; it is also necessary to take into account broadband deployment that might have occurred without federal funding.’’

16 For example, one recent study claims that ‘‘The evidence indicates that RUS’ history of funding duplicative service has continued under BIP, and that the current program is not a cost-effective means of achieving universal broadband availability.’’ See ‘‘Evaluating the Cost-Effectiveness of RUS Broadband Subsidies: Three Case Studies,’’ Jeffrey A. Eisenach, George Mason University School of Law, Kevin W. Caves, Navigant Economics, April 13, 2011. It should be noted that the cable industry lobby (NCTA) supported this study.

Table 12.1 BTOP awards by grantee entity type

Entity type         Number of awards    Total awards (%)
Government                 89                  38
Nonprofit                  58                  25
For-profit                 55                  24
Higher education           25                  11
Tribe                       6                   2
Total                     233                 100

Source Department of Commerce, NTIA
NTIA and RUS released early estimates of increased broadband subscriptions after the awards were announced.17 Without presenting any detailed analysis, RUS/BIP forecast increased subscriptions of 2.8 million households and 364,000 businesses and 25,000 immediate jobs created. NTIA/BTOP estimated potential increased subscriptions at 40 million residential and 4.2 million for small business. In the future, policymakers need to seek out independent, objective sources for rigorous analysis.18
Suffice it to say that unless and until the aforementioned institutional obstacles to success are mitigated, there is relatively little more that government deficit
spending can accomplish. While the money may eventually get spent, it is doubtful
that it will significantly increase rural broadband subscriptions over what they
would have been anyway.
Increased broadband subscriptions cannot occur until physical connections linking a subscriber premises to the core network are constructed. Therefore, if the
government’s objective were focused on private sector job creation, the biggest
immediate positive impact on the economy would be projects that directly expand
connections to small businesses.
Most of the BTOP awards went to government entities and nonprofits, and most
of the funds went to ‘‘middle-mile’’ projects, not ‘‘last-mile’’ subscriber connections (Table 12.1).19
17 The Broadband Technology Opportunities Program: Expanding Broadband Access and
Adoption in Communities Across America, Overview of Grant Awards, p. 19, NTIA/BTOP,
December 14, 2010. Advancing Broadband: A Foundation for Strong Rural Communities, p. 3–4,
USDA/BIP, January 2011.
18 For example, Katz provides references to academic studies including a summary of studies
relating broadband investment to employment and economic growth, p. 2, Fig. 1, Studies of the
Employment Impact of Broadband, in: ‘‘Estimating Broadband Demand And Its Economic
Impact In Latin America,’’ Prof. Raul L. Katz, Columbia Business School, Proceedings of the 3rd
ACORN-REDECOM Conference Mexico City, May 22–23rd 2009.
19 Tables reproduced from CRS Report (2011).
On the other hand, two-thirds of BIP grant and loan awards went to for-profit entities
and another 22 % went to rural cooperatives (Table 12.2).
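Those shares refer to the number of awards; a quick check using the award counts from Table 12.2 (below):

# Award counts from Table 12.2; shares are by number of awards, not dollars.
awards = {"for-profit": 202, "cooperative or mutual": 65,
          "public entity": 13, "nonprofit": 8, "tribe": 9}
total = sum(awards.values())  # 297
print(f"for-profit share: {awards['for-profit'] / total:.0%}")              # about 68 %, i.e., two-thirds
print(f"cooperative share: {awards['cooperative or mutual'] / total:.0%}")  # about 22 %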
This is no surprise since RUS grants and loans have traditionally been targeted to incumbent rural telephone companies or their contractors. This itself is
another institutional arrangement that is troubling. Historically, RUS refused to
make loans or grants to any applicant that planned to invest in facilities within
an incumbent’s local exchange territory, especially if the incumbent was a past
recipient of RUS funds. This policy is blatantly anticompetitive, but was viewed
as necessary since small rural telcos already were being subsidized for their
existence.
Almost 75 % of the BTOP projects were for deployment of wireline (fiber-optic) technology and only about 8 % for wireless. For BIP projects, about 72 %
were for wireline investments and 17 % for wireless. It is impossible to know how
spectrum policy reform might have boosted the wireless portion of the investment,
but it surely would be substantial if cheap or even free small market licenses and/
or spectrum sharing were available.
Most digital network upgrades to traditional rural telco networks also serve to
make them broadband capable; indeed since 1994 RUS required that funds granted
to telcos be used to construct ‘‘broadband capable’’ networks, and in 2002, the
RUS initiated the Rural Broadband Access Loan and Loan Guarantee Program.
While the RUS/BIP no longer has a policy of denying applications for entities that
wish to enter an incumbent’s service territory, it does require that the territory the
new entrant wishes to enter be unserved or ‘‘underserved.’’ Whether these anticompetitive policies produce a net benefit to rural consumers remains an unanswered question. After all, if the government were not convinced that monopoly was the right business model for rural telecom, it would not have included the rural waiver in the 1996 Act, which mandated competition for most other telcos.
There can be no doubt that the RUS bias toward its incumbent telco clients, coupled with the federal rural competition waiver, not to mention the FCC universal service subsidies, constitutes a formidable barrier to entry for competitive broadband companies wishing to enter rural markets.20

Table 12.2 BIP infrastructure awards by entity type

Entity type              Number of awards   Total grant ($millions)   Total loan ($millions)   Total award ($millions)
For-profit corporation         202                  1183                      544                     1727
Cooperative or mutual           65                   740                      486                     1226
Public entity                   13                   209                      123                      332
Nonprofit corporation            8                    67                       20                       87
Indian tribe                     9                    34                       17                       51
Total                          297                  2233                     1191                     3425

Source US Department of Agriculture, Rural Utilities Service, December 27, 2010, RUS Quarterly ARRA Report, p. 5, available at http://www.rurdev.usda.gov/supportdocuments/BIPQuarterlyReport_12-10.pdf
12.9 Wyoming: A Policy Case Study
Since supply and demand conditions vary greatly from state to state, a meaningful
economic analysis of broadband policies requires a localized view. Having
researched universal service issues for over 30 years and having lived in Wyoming
for 20 years, this author is particularly well qualified to address issues regarding
universal rural broadband in Wyoming.21
Of all the states in the lower 48, Wyoming is arguably the best case for evaluating the efficacy of universal broadband policies. Wyoming has the highest cost
and highest prices for basic phone service and it is the least populous state with the
lowest population density per square mile. Wyoming’s land mass features a
challenging mountainous topography, and it has the least rural broadband
availability.22
12.10 Institutional Considerations
Wyoming, like all states, must contend with all of the aforementioned federal
roadblocks as it tries to achieve universal broadband coverage. In addition,
Wyoming has its own unique institutional roadblocks that are a combination of
legacy statutes, regulations, and industry structure. Fortunately, the goals and
20 For example, the CRS Report states ‘‘Until 2011, the USDA Office of Inspector General (OIG)
had not reviewed the BIP program, instead leaving that review to the Government Accountability
Office (GAO). OIG has previously reviewed (in 2005 and 2009) the existing RUS Rural
Broadband Access Loan and Loan Guarantee Program and made a number of criticisms,
primarily that too many loans were made in areas with preexisting broadband service and in areas
that were not sufficiently rural.’’
21 Author was a member of the governor's Wyoming Telecommunications Council
(2003–2007); the Council objective was to implement a statewide universal broadband access
policy. Past research includes ‘‘Toward a Sound Public Policy for Public High-Speed Information
Networks,’’ Columbia Institute for Tele-Information, Research Working Paper #282, Columbia
Business School, September 1988; ‘‘Bringing Advanced Telecommunications to Rural America:
The Cost of Technology Adoption,’’ Columbia Institute for Tele-Information, Research Working
Paper #393, Columbia Business School, October, 1990 and Telecommunications Policy,
February, 1992; ‘‘The Case for Residential Broadband Communication Networks,’’ draft,
Columbia Institute for Tele-Information, Research Working Paper #456, Columbia Business
School, January 1991; ‘‘Improving Rural Telecommunications Infrastructure,’’ The Center For
Rural Studies, Nashville, TN (1995) and TVA Rural Studies, University of Kentucky.
22 Wyoming has 67.3 % of rural population without broadband access versus 28.2 % for the US,
See FCC report (2011a), Appendix B, p. 25.
solutions at the state level are the same as at the federal level: namely, to increase investment and to reduce prices for broadband access.
Just as at the federal level, Wyoming lacks the leadership and statesmanship
required in order to reform outdated institutions and rules. There is no significant
activity within state government to direct broadband investment policy.23 Realizing the important contribution that broadband makes to economic growth and
productivity, many other states have designated a responsible agency (and budget)
to design policies to promote it.
Wyoming state government has a particularly troubling institutional arrangement. By statute, the Wyoming Public Service Commission (PSC) is charged with
responsibility for championing universal telephone service and is proscribed from
regulating broadband. The result is that PSC policies favor small rural telcos,
sometimes at the expense of good broadband policy. In short, the supply and
demand for traditional telephone regulation are alive and well in Wyoming. If
and when telephone subscribers in rural Wyoming bypass the phone network and
begin making phone calls over broadband connections, the PSC will have nothing
to regulate.24 Indeed, the PSC has already filed comments with the FCC
requesting that it reconsider those portions of its plan to reduce or eliminate
usage-based interconnection charges and redirect USF toward broadband and
away from POTS.
The PSC views the FCC's plan to promote universal broadband as a threat to
small rural telcos: ‘‘We have no quarrel with the FCC’s general vision for a
broadband future, but our practical experience is that this vision falls short of the
universal service required by federal statute.’’25 In particular, the PSC is opposed
to the FCC’s plan to dramatically reduce per-minute charges paid by long-distance
companies to small rural telcos, and, in turn, prices for long-distance calls. Given
the current institutional arrangements, the PSC has a point. The PSC
approves and administers the (high) state tariff for interconnection and is simply
doing its job to protect it, especially since it has no statutory mandate to promote
universal broadband. But the bigger issue is what is right for the future. Besides
slowing down broadband investment, it is absolutely clear that the old way of
generating rural subsidies from usage-based charges is inefficient, anticompetitive,
23 There is some state government activity to direct federal subsidy funds for rural health care,
libraries, and education, but these are administered on a case-by-case basis by responsible
agencies. The chief information officer (CIO) has responsibility for operations and procurement
of state government telecom systems, but has no statutory authority for broadband policy
development. The CIO also has responsibility for meeting data production requirements of the
National Broadband Plan and has hired an outside consultant to do so.
24 Except for electricity, gas, and water.
25 Further Inquiry into Certain Issues In the Universal Service-Intercarrier Compensation
Transformation Proceeding, Reply Comments of The Wyoming Public Service Commission,
September 6, 2011, p. 4. The full document is available at: https://prodnet.www.neca.org/
publicationsdocs/wwpdf/9711wypsc.pdf.
and a huge welfare loss for consumers.26 Consumer welfare is highest when call
charges are lowest, and any policy that promotes low prices for calling is a good
policy.
Wyoming state government needs to reform its institutions so that incentives
for all agencies are aligned to promote broadband. It can be done without necessarily harming small telcos by implementing an efficient system for recovering
so-called stranded investment, created when subscribers switch from expensive
phone calls over POTS to cheap or even free calls over the internet.27
The writing is on the wall; Wyoming already has many more cell phones than
telephone lines and it will just as surely be the same in the future for broadband
lines. Government needs to embrace the transition to telephone alternatives, not
delay it. Wyoming law needs to be changed. Either the PSC needs to be allowed to
enact policies to actively promote broadband, as the FCC is allowed to do, or a
new agency needs to take on the task.
The office of the Governor does have a policy advisor for Energy and
Telecommunications, but the lucrative energy segment garners the attention
while telecommunication gets short shrift. Wyoming, with no state income tax,
enjoys annual state budget surpluses due to its huge energy sector revenues.
Significant progress toward universal broadband is achievable if only a small
fraction of the annual energy sector surplus were redirected to investment in
broadband. Yet, for lack of leadership, the proposition is not even being
considered. Broadband needs and deserves its own champion in state
government.
Another institutional problem that needs to be overcome is the ownership,
operation, and administration of state telecom network infrastructure. Historically,
and for good reason, economic development in large but sparsely populated
western states relied on transportation infrastructure and electrification. These two
are related since public rights-of-way for utilities usually follow road and rail
transport routes. As a result, the Wyoming Department of Transportation (WYDOT) was granted ownership and control of state rights-of-way and public telecom network infrastructure. As a practical matter, WYDOT road projects
dominate its budget and resources; telecom is merely an afterthought. Transportation infrastructure projects must comply with a regimented bureaucratic process
and timeline that is not consistent with relatively rapid and flexible deployment
that is typical for broadband technology. WYDOT planning for upgrades to roads
and bridges is based on a 5–20+-year time horizon. Telecom network technology
planning and deployment, especially wireless, occurs on a much shorter time
26 For example, see the numerous references to past studies in Ellig, Jerry, ‘‘Intercarrier
Compensation and Consumer Welfare,’’ Journal of Law Technology and Policy, Vol. 2005, No.
1. Welfare losses due to Universal Service subsidies derived from usage-based charges are
discussed in pp. 118–123.
27 This is nothing new; such policy practices have been in place for years in many jurisdictions.
horizon and at much less cost.28 Thus, to the extent that WYDOT has to ‘‘sign off’’
on a given broadband infrastructure project, the result is an unacceptably slow
process or, more likely, no deployment at all.
To this day, almost all of Wyoming’s vast territory has no cell service available.
Outside of city limits (and there are not many cities), there is limited cell phone
coverage, even along the interstate highways. Yet, tourism is a major industry, and
this lack of wireless service can be frustrating for travelers who possess laptops or
other mobile devices. Years ago, the governor’s Wyoming Telecom Council
suggested equipping highway rest stops with Wi-Fi so travelers could use their
laptop computers or other mobile devices. At the time, even most Wyoming
airports did not have Wi-Fi.
WYDOT approval was required. WYDOT officials said that they might
consider such a project if they could work it into their 7-year plan. Given this
institutional dynamic, no broadband investment proposal—regardless of its merits—would get past the discussion stage.
In summary, Wyoming needs to place control over state telecom assets and
infrastructure in a separate agency.
12.11 Telecom Law, Prices, and Public Welfare
In 2007, the state legislature revised the 1995 Telecommunications Act.29 This
was a substantial revision of rules governing competition in the industry, but it did
not achieve any significant progress on the broadband front. The result of the 1995
Act and the 2007 Act (revising the 1995 law) was deregulation of services and
substantial increases in monthly tariffs for basic phone service coupled with
decreases in tariffs for toll calls and interconnection.30 The bill was basically a
negotiated industry compromise between small and large telcos and did little to
promote significant infrastructure investment.
The best way to gauge the public benefit of any given universal service policy is
economic welfare. Technically, economic welfare is the sum of consumer surplus
and producer surplus. Consumer surplus is high when prices are low, and producer
surplus is high when supply prices (costs) are low. Thus, policies that promote
public welfare should result in low prices for both suppliers and consumers.
Usually, the forces of competition serve to drive down costs and prices, but not in
the case of universal service; government policy must be relied on to minimize the
level of subsidies required to achieve low (so-called affordable) prices. The system
28 Indeed, during my tenure as a government advisor during the legislative budget session, it
became immediately obvious that a reallocation of a small amount of road improvement funds in
a single year could go a long way toward funding a long-lived rural broadband infrastructure.
29 For a summary of legislation, see the 2010 Annual Telecommunications Report, Wyoming PSC at: http://psc.state.wy.us/htdocs/telco/telco10/2010%20Annual%20Telecom%20Report.pdf.
30 ibid. Table 7, p. 14, provides a summary of tariff increases from 1995 to 2009.
of subsidies and prices in Wyoming, and throughout rural America, is a good (bad)
example of how not to achieve a high level of economic welfare.
To illustrate the point, Wyoming's prices for basic phone service are far above the national average, even for the largest phone company in the state and even after accounting for USF subsidies.31 Rural consumer prices for basic telephone service in Wyoming range from about $18/mo to $80/mo.32 Table 12.3 provides average monthly telephone prices and subsidies for Wyoming rural consumers for the largest phone company, CenturyLink (formerly Qwest).33

Table 12.3 Wyoming rural residential price for basic service

Basic residential access line rate                                $69.35
Federal universal service fund credit                            ($28.70)
Wyoming universal service fund credit                             ($5.55)
Net residential rate subject to mandatory surcharges and taxes    $35.10
Federal subscriber line charge                                     $6.50
Federal universal service fund surcharge                           $3.51
Wyoming universal service fund surcharge                           $0.69
Telecommunications relay system surcharge                          $0.06
Wyoming lifeline program surcharge                                 $0.15
E911 emergency calling system tax                                  $0.75
Federal excise tax                                                 $1.05
Wyoming state sales tax                                            $1.68
Total basic residential service rate to customer                  $49.50
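A quick check of the arithmetic in Table 12.3 (the one-cent difference from the published $49.50 total is rounding in the source):

# Line items copied from Table 12.3.
base_rate = 69.35
credits = [28.70, 5.55]  # federal and Wyoming USF credits
surcharges_and_taxes = [6.50, 3.51, 0.69, 0.06, 0.15, 0.75, 1.05, 1.68]

net_rate = base_rate - sum(credits)            # $35.10
total = net_rate + sum(surcharges_and_taxes)   # $49.49, published as $49.50
print(round(net_rate, 2), round(total, 2))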
CenturyLink serves more rural consumers than any other telco in Wyoming, and even after USF credits of $34.25/mo are applied, the consumer pays $49.50/mo.
This far exceeds what the average consumer in the US pays, prompting the
Wyoming PSC to engage in lengthy litigation to remedy what it views as a
violation of Section 254 of the 1996 Act.34
Besides the obvious problem that there are numerous confusing line items that
appear on customer bills, examination of the data in Table 12.3 reveals why current
policy is bad for economic welfare. The source of funds for the federal subsidy of
$28.70/mo includes usage-based charges for interconnection called ‘‘carrier access
charges’’ and ‘‘intercarrier compensation charges’’ (ICC). As mentioned previously, this is an inefficient and unsustainable way to generate subsidies.
A preferred way to generate a subsidy is a revenue surcharge. Indeed, the
Wyoming PSC itself instituted a state universal service subsidy ($5.55/mo) generated via a revenue surcharge on all retail telecom revenue in Wyoming.35 Yet,
the Wyoming PSC saw fit to petition the FCC to reconsider its plan to phase out
usage-based charges to generate USF funds. This is a case of regulatory schizophrenia and a spectacular example of regulatory hypocrisy.
The most effective way for consumers to avoid paying Wyoming’s high prices
for basic phone service is to obtain a broadband connection. By far, the biggest
immediate benefit to households and businesses with a broadband connection is
VoIP. For a nominal flat fee, VoIP lets consumers make unlimited phone calls,
even long-distance calls, to any other phone at no additional charge, and even international calls for a penny or two a minute. Consumers who have a computer with a
microphone can make VoIP calls to any other similarly equipped computer for free
31 ibid.
32 ibid. Table 7, p. 14.
33 Source: Notice of Inquiry regarding issues raised by the February 23, 2005 United States Court of Appeals for the Tenth Circuit Qwest II decision, Comments of the Wyoming Public Service Commission, May 8, 2009, Table 12.1, p. 11.
34 47 U.S.C. §254 calls for a federal USF that is sufficient to provide rate comparability between
rural and urban areas. For details on the federal court proceedings, ref. ftn. 25, p. 8, Federal
Universal Service Issues.
35 The Wyoming PSC requires all telecom providers to pay 1.2 % of annual retail revenue into
the state USF. See Order Establishing The Wyoming Universal Service Fund Assessment Level,
May 13, 2011, http://psc.state.wy.us/htdocs/wyusf/wusf-assessment2011.pdf.
and, if equipped with a camera, video calls are also cheap or free.36 If you have a
broadband connection but do not want to involve your computer to make calls, you
can use cheap alternatives like ‘‘Magic Jack Plus,’’ which provides unlimited free
local and long-distance phone calls using your existing phone set and your old
phone number.
While surfing the Web at broadband speeds is a great benefit to any household,
getting monthly phone service for almost no charge is an obvious bonanza for the
average consumer. In large swaths of Wyoming where affordable broadband is not
available, consumers are denied the benefits of VoIP that their more urban
counterparts routinely enjoy. Essentially, this is akin to a tax on where you happen
to live. In many rural areas of Wyoming, telco customers pay high per-minute
prices for long-distance calls. Similarly, a telco customer who tries to switch to a discount long-distance provider will have to pay high interconnection fees or "access charges." The high charges are permitted because of rural waivers that
exempt rural telcos from the low tariffs imposed on larger telcos.
Since VoIP customers can reduce and even eliminate their existing monthly
phone charges, telcos are financially harmed. Incumbent telcos fight back in order
to retain their subsidies and stay profitable.
For example, if a rural telco basic service subscriber asks to have their phone number reassigned to a competitive long-distance company (some of which use VoIP), the telco can refuse to do so.37 Larger carriers, like CenturyLink, cannot refuse to
do so because they do not enjoy a rural waiver from federal requirements forcing
36
For example, Skype phone service allows for free domestic phone calls for a small ($50)
annual fee and nearly free international calls. Also, Facebook is ubiquitous and free and allows
calling anyone that is online, including video calls using Skype.
37
Small rural telcos do offer subscribers who wish to keep their phone number a choice of PSC-approved companies to designate as their primary long-distance provider, but they must pay for the privilege via relatively high interconnection charges.
low interconnection fees and local number ‘‘portability.’’38 Many consumers who
want to switch to a discount long-distance service do not want to change their old
phone number and this is often a deal breaker. Consumers are not familiar with the
law and do not know who to complain to when a request to transfer their phone
number is denied.
In their traditional service territories, small rural telcos do not face any competitive threat from cable TV companies offering broadband via a cable modem. Even if they did, rural waivers and regulatory loopholes mean there may be no enforceable statute requiring small telcos to accommodate competitors with reasonable interconnection terms or prices.
Silver Star Communications, a small telco based in Star Valley, Wyoming, provides a good anecdote of how a small rural telco can exercise leverage from exchanges where it enjoys a basic service monopoly. Silver Star is known as a rather progressive local telco that has aggressively pursued expansion into nearby telcos' service areas, especially CenturyLink's. Larger telcos, like CenturyLink, cannot take advantage of federal grants and loans, rural waivers, or other regulatory loopholes.
In Alpine Wyoming, with no cable competitor, Silver Star charges its basic
service subscribers $46/mo for local calling and $0.105/min for long distance.39
Contrast this with the situation only 30 miles up the road in the lucrative and
wealthy enclave of Jackson Hole, served by CenturyLink. CenturyLink's customers can easily avail themselves of competitive discount long-distance carriers using
VoIP or they can cancel phone service and switch to cable modem service and still
keep their old phone number. For example, cable provider Optimum offers relatively low-priced broadband service, including the option of phone service featuring unlimited local and long-distance calls for only $15/mo. Of course, once a
broadband connection is purchased, there are other ‘‘unbundled’’ VoIP alternatives
costing even less.
Primarily funded by government subsidies, Silver Star is constructing a fiber-optic broadband network in Jackson Hole, allowing CenturyLink's customers to
obtain their phone service using Silver Star’s cheaper VoIP service. This story is
but one example of business strategies that small rural carriers use to exploit
subsidies. It highlights the widespread price discrimination that exists in Wyoming
and the need for reform to level the playing field.
Silver Star's strategy is a good one, and it would be wrong to suggest that it is engaging in anticompetitive behavior; it has operated within certain regulatory parameters for decades, and it continues to provide high-quality (albeit subsidized)
38
Technically, small rural telcos are supposed to comply with federal regulations mandating
number portability, but, as a practical matter, some do not.
39
Like Qwest's basic service tariff in Table 12.3, Silver Star's nominal tariff rate for local service in Alpine, Wyoming, is only $26.45/mo., but is nearly double that after all the add-ons from regulatory fees and taxes. The $0.105/min. price is the stand-alone tariff rate, and the rate drops down to $0.055/min. with a "bundled" service package. Silver Star also offers its own broadband service for between $40 and $100/mo. (depending on speed).
service. No small rural telco can be faulted for acting in its own best interest in
pursuit of maximum profit within the institutional environment imposed on it.
The solution is to promote competitive broadband solutions like VoIP everywhere in the state without sacrificing universal service for POTS. The federal
government's broadband initiative and CAF plan are designed to shift current POTS subsidies toward broadband. This should eventually allow every rural
household in America to save money with VoIP.
Wyoming state government needs to get on the "broad-bandwagon" and start reforming its antiquated and highly discriminatory policies. Indeed, Wyoming may be the only state in America that still maintains a state subsidy for local phone companies. It will not be easy for lawmakers to reform the system, because incumbent local telcos will be suspicious that the result will be financially devastating despite promises to the contrary, but the process must begin because it is the right thing to do for the citizens of Wyoming.
12.12 Supply-Side Analysis
There are very few rigorous cost studies for rural broadband. The primary reason is
that it is difficult to gather reliable data at a sufficiently granular level to yield
meaningful results. Fortunately, Wyoming state government commissioned such a
study, completed in 2006 by Costquest.40
A rigorous analysis relies on geographic and topographic data for households
and businesses. This type of "bottom-up" study has previously been employed by the FCC to administer its USF high-cost fund. Studies of this type invariably demonstrate that there is no such thing as a representative "average" household or
average cost per broadband connection. Every household, or enclave of similarly
situated households, especially in remote areas, features a unique combination of
geography, topography, and the nearest existing network infrastructure. Based on
the specific location and circumstances, the cost to construct a physical broadband
connection is calculated and summed across all locations to arrive at a total cost.
On top of that, on-going operating expenses must be factored in to arrive at a
reliable estimate of total annual costs to provide service. This type of cost modeling is extremely data intensive, but is nevertheless the proper approach.
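To make the bottom-up logic concrete, the following Python sketch sums annualized construction costs and operating expenses across locations. The location records, dollar figures, and annual capital charge rate are hypothetical placeholders, not figures from the Costquest study.

# Minimal sketch of bottom-up cost modeling: each location gets its own
# construction cost, and totals are built up by summation.
LOCATIONS = [
    # (location id, upfront capital cost ($), annual operating expense ($))
    ("remote-ranch-01", 18500.0, 600.0),
    ("remote-ranch-02", 2300.0, 450.0),
    ("town-cluster-a", 1100.0, 300.0),
]

ANNUAL_CAPITAL_CHARGE = 0.12  # assumed factor converting capex to an annual cost

def total_annual_cost(locations):
    # Annualized capex plus opex, summed across all locations.
    capex = sum(cost for _, cost, _ in locations)
    opex = sum(expense for _, _, expense in locations)
    return capex * ANNUAL_CAPITAL_CHARGE + opex

print("Total annual cost: $%.0f" % total_annual_cost(LOCATIONS))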
The Wyoming broadband cost study results reflect the "augmentation" investment required to build or otherwise upgrade an existing network connection
to provide broadband access. Both wired and wireless technologies were
40
"Costs and Benefits of Universal Broadband Access in Wyoming," October 24, 2006, Costquest Associates. Study documentation is available at: http://www.costquest.com/uploads/pdf/CostsAndBenefitsofUniversalBroadbandAccessInWyoming.pdf. A follow-on case study on the cost of wireless broadband, "Targeting Wireless Internet Services at Areas with Lower Investment Thresholds," is available at: http://www.wyoming.gov/loc/04222011_1/Documents/Statewide_IT/Broadband%20Information/Wireless_BB.pdf, Costquest, 2006.
Table 12.4 Estimated augmentation investment by technology
Technology type   Per-customer mean upfront capital cost ($)   Per-customer median upfront capital cost ($)
Cable             18,932                                        8,533
Telco             4,570                                         1,115
Wireless          1,324                                         1,243
considered for each household located in rural areas without broadband access.
The cost model was designed to select the technology solution based on least cost.
The study showed that about 20 % of Wyoming households were located in areas
with no broadband access; 90 % of those could be served by wireless access
technology, and it was often the least cost solution. The results are summarized in
Table 12.4.41
Both the mean and median per-household costs are provided to show how high-cost locations skew the average result. The relatively few high-cost locations pull the average cost up.
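The skew can be verified with a two-line check; the per-customer cost figures below are invented for illustration only.

import statistics

# Hypothetical per-customer upfront costs: most locations are cheap,
# a few remote ones are very expensive.
costs = [900, 1000, 1100, 1200, 1300, 24000, 38000]

print("mean:   $%.0f" % statistics.mean(costs))    # about $9,643, pulled up by the outliers
print("median: $%.0f" % statistics.median(costs))  # $1,200, unaffected by them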
Another bottom-up study, by the National Exchange Carrier Association (NECA), employs data based on a representative sample of rural telco lines and engineering parameters to arrive at a broad average cost per broadband connection, as if an existing rural telephone line were already in place and needed to be upgraded to handle broadband service.42 The study yields an average cost of $3,270/line, with a range from $493/line for relatively short lines (<18 kft) up to $9,328/line for long lines.
Even though it is possible to get good objective estimates of the lowest cost to
provide broadband access to rural households and businesses, the result is a
seemingly endless battle among special interests that stand to lose or gain financially. Ultimately, it always boils down to a question of who gets the subsidy to provide broadband service, and every major player hires lawyers, consultants, and lobbyists either to skew the results in their favor or to attack the industry segment that happens to employ the lowest-cost technology solution.
As always, once an objective, accurate cost estimate is produced, it will take courage and statesmanship on the part of regulators to take the political heat and target the subsidy precisely to minimize the cost to taxpayers. One method regulators can employ to sort out the least-cost solution for any given area is to auction off the rights to serve all subscriber locations in that area, including a commitment to a date certain for completion of network construction. The low bidder would be granted a subsidy equal to its bid (or to the second-lowest bid), and regulators would oversee the process.43 (A sketch of this mechanism appears at the end of this section.) As before with the stimulus package, the
41
ibid. p. 26. Even though the cost model included satellite as an alternative wireless
technology, it is not included here because it is not capable of providing high-quality VoIP.
42
"NECA Rural Broadband Cost Study: Summary of Results," June, 2000. Available at http://www.ics.uci.edu/~sjordan/courses/ics11/case_studies/NECA%20rural%20bb.pdf.
43
See Alleman et al. (2010) and the reference cited therein.
success of this process would be much greater, and the cost to taxpayers much
lower, if the government would first reform rural spectrum policy and force (or
bribe) national network operators to offer connections to the core network at low
(or zero) tariff rates for interconnection to the internet. One of the largest, if not the largest, operating costs for local broadband companies is network interconnection, and this is a regulatory challenge that must be overcome. Broadband access connections are of little value if core network companies do not offer physical interconnection points at low prices.
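The reverse-auction mechanism described above can be written down in a few lines of Python. This is a minimal sketch of a procurement auction with an optional second-price payment rule; the function name, bidder names, and dollar amounts are all hypothetical.

def run_subsidy_auction(bids, second_price=True):
    # bids maps each carrier to the subsidy it requires to serve the area.
    # Returns the winning carrier and the subsidy it is paid.
    ranked = sorted(bids.items(), key=lambda item: item[1])
    winner, low_bid = ranked[0]
    # Under second-price rules the winner is paid the second-lowest bid.
    payment = ranked[1][1] if second_price and len(ranked) > 1 else low_bid
    return winner, payment

bids = {"TelcoA": 410000, "WirelessB": 285000, "CableC": 650000}
winner, subsidy = run_subsidy_auction(bids)
print(winner, subsidy)  # WirelessB wins and is paid 410000, TelcoA's bid

Under the second-price rule, the winner's payment does not depend on its own bid, which weakens any incentive to overstate the required subsidy.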
12.12.1 Federal Stimulus Funds: A Missed Opportunity
It is clear that the state of Wyoming requires the most subsidy per capita to achieve
universal rural broadband, so it makes sense that federal stimulus funds targeting
rural broadband should end up in Wyoming. One obvious disadvantage of not
having a broadband advocate in state government is missing out on federal government grants to promote rural broadband. For example, of the over $7B in
stimulus funds available for broadband infrastructure projects, none was awarded
to Wyoming-based public or public/private entities. Surely, had there been a
broadband champion in state government to drum up projects, more federal funds
would have been requested and awarded for Wyoming entities. Of the over $3B of
RUS/BIP money doled out, none was for a Wyoming applicant.44
Of the over $4B of NTIA/BTOP money, only a small award of some $4M was
made to a Washington State nonprofit consortium run by Ohio-based Costquest
that filed grant applications on behalf of the Wyoming Office of the CIO. It turns
out that Costquest, the consultant/contractor that performed the Wyoming
broadband cost study, and the primary beneficiary of the BTOP funds, was the
force behind the application.45 A careful review of this grant reveals that it is not
unique for Wyoming. In fact, grants of similar size for the same purpose were
made to all 50 states, Samoa, Guam, and the US Virgin Islands, in order to satisfy
data requirements associated with NTIA’s State Broadband Initiative and the
Broadband Data Improvement Act. The only thing unique to Wyoming is that it is
one of the few states that chose to hire an out-of-state private entity to apply for the
funds and do the work. Had there been a state employee responsible for broadband,
this BTOP grant would likely have created more jobs in Wyoming. There should
44
A list of BIP awards by state is in the report to Congress at: http://www.rurdev.usda.gov/supportdocuments/BIPQuarterlyReport_12-10.pdf.
45
For detailed information on the NTIA/BTOP grant, see "Wyoming Broadband Data and Development Grant Program, LinkAMERICA/Puget Sound Center for Teaching, Learning and Technology, Designated entity on behalf of the State of Wyoming," at http://www.ntia.doc.gov/legacy/broadbandgrants/BroadbandMapping_Wyoming_091130.pdf. BTOP funds were also used to develop a Web site that tracks project activity. See http://www.linkwyoming.org/lwy/Default.aspx.
have been many more applications dedicated to Wyoming-specific projects
employing Wyoming residents.
A portion of this BTOP grant allowed Wyoming’s chief information officer to
hire Costquest to conduct a survey of broadband network infrastructure and vendors. Interestingly, the report concludes that state government leadership is lacking
and needs to be established to maximize the likelihood of universal affordable
broadband.46
A more substantial, but still smallish, amount was awarded to a Wyoming telco,
Silver Star, for construction of a fiber-optic trunk line to boost internet speeds and
extend high-speed broadband service to five Wyoming counties and to attract new
broadband customers, mostly in CenturyLink’s service territory in and around the
lucrative and wealthy enclave of Jackson Hole. It is worth noting that no other
Wyoming telco applied for stimulus funds, including CenturyLink. This lack of Wyoming rural telco applications for federal money to build broadband infrastructure is puzzling, but Silver Star is likely exploiting a subsidy that will yield a competitive advantage and should pay off handsomely.
12.13 Conclusions
Reflecting on the Wyoming experience, even though Wyoming represents a unique
challenge for universal rural broadband access, it is illustrative of the problems
confronting all states with a significant rural population. It is clear that significant
reforms to current regulations are the linchpin for overcoming problems and
implementing effective solutions, and that, in turn, requires new leadership and
statesmanship from federal and state policymakers.
The primary reason that current government policy is failing to achieve universal rural broadband is rooted in its own past statutory and regulatory policy.
The two biggest roadblocks are the rural waiver provided for in the 1996 Act and
the disastrous policy for (mis)allocating radio spectrum. Unless and until substantial reform or repeal of these policies occurs, the achievement of universal
(affordable) rural broadband will be elusive. The most recent government initiatives, including massive ‘‘stimulus’’ spending, will not result in a marked difference over what could be achieved without it; what begins as a perfectly reasonable
proposition for stimulating rural broadband, ends up being a struggle among
entrenched special interests over pots of taxpayer money. The result is that a small
amount of network infrastructure investment and construction jobs will be added,
but the industry segments with the most growth will be lobbyists and bureaucrats.
46
"Report of Findings: Wyoming Broadband Interviews," LinkWyoming, July 2010, p. 20, concludes: "Strong leadership is needed to build awareness about the benefits of broadband in Wyoming. An advocacy effort is needed to improve fixed and mobile broadband access in Wyoming." The full report is available at: http://www.linkwyoming.org/lwy/docs/WYReport8July2010.pdf.
Fixing the problem is straightforward: Eliminate entry barriers from rural waivers,
reduce tariffs for network interconnection, target direct subsidies in a technologically neutral fashion, and reform spectrum regulations.
References
Alleman J, Rappoport P, Banerjee A (2010) Universal service: a new definition? Telecommun
Policy 34(1–2):86–91 (ISSN 0308-5961), (Feb–Mar)
American Recovery and Reinvestment Act (ARRA) of 2009 (Public Law No. 111-5)
Congressional Research Service (2002) Telephone bills: charges on local telephone bills, 12 June
2002
Dickes LA, Lamie RD, Whitacre BE (2010) The struggle for broadband in rural America.
Choices, a publication of the Agricultural & Applied Economics Association, 4th Qtr.
Federal Communications Commission (FCC) (2011a) Bringing broadband to rural America:
update to report on a rural broadband strategy. 17 June 2011, GN Docket No. 11-16
Federal Communications Commission (2011b) FCC 11-83, Declaratory Ruling, WC Docket No.
10-143, GN Docket No. 09-51, CC Docket No. 01-92, Adopted: 25 May 2011 Released: 26
May 2011
Federal Communications Commission (2002) FCC spectrum policy task force report. ET Docket
No. 02-135, Nov 2002
Federal Communications Commission (2010) Connecting America: the national broadband plan.
Available at http://www.broadband.gov/plan/, 17 Mar 2010
International Telecommunication Union (ITU) (2011) Broadband: a platform for progress, a
report by the broadband commission for digital development, June 2011
Kruger LG (2011) Background and Issues for Congressional Oversight of ARRA Broadband
Awards. Congressional Research Service, 19 Apr 2011
The Telecommunications Act of 1996 (1996) P.L. No. 104-104, 110 Stat. 56
Chapter 13
Who Values the Media?
Scott J. Savage and Donald M. Waldman
13.1 Introduction
Media can be crucial for democracy. Because news and current affairs can promote political awareness and ideological diversity, many societies have charged policy makers with ensuring there are opportunities for different, new, and independent viewpoints to be heard ("diversity"), and that media sources respond to the interests of their local communities ("localism"). In the U.S., the FCC traditionally limited the amount of common ownership of radio and television stations, and the amount of cross-ownership between newspapers, radio, and television stations serving the same market. When ownership limits prevent market share from concentrating among a few corporations, theory predicts that competition between many independent media sources can promote diversity of opinion and give owners an incentive to respond to their local communities.
More recently, legislators and the FCC have focused their attention on market
forces, for example, consumer preferences and new media, such as satellite radio
and television, the internet, and smartphones, in order to deliver their competition, diversity, and localism goals. The Telecommunications Act of 1996 ("Act")
relaxed the limit on the number of radio and television stations a firm could own
nationwide, and permitted greater within-market common ownership by allowing a
firm to own more local radio stations. The Act also required the FCC to review its
ownership rules every four years to "determine whether any of such rules are necessary in the public interest as the result of competition." Given the increase in
choices through new media, supporters of greater ownership concentration argue
that traditional media should be free to merge and use the efficiencies to provide
more diverse and local programming. Opponents question whether such
S. J. Savage (✉) · D. M. Waldman
University of Colorado at Boulder, Boulder, CO, USA
e-mail: Scott.Savage@Colorado.EDU
D. M. Waldman
e-mail: waldman@colorado.edu
J. Alleman et al. (eds.), Demand for Communications Services - Insights and Perspectives,
The Economics of Information, Communication, and Entertainment,
DOI: 10.1007/978-1-4614-7993-2_13, Springer Science+Business Media New York 2014
efficiencies are achievable, and argue that consolidated media corporations are not
flexible enough to serve the interests and needs of local and minority communities.
Furthermore, many segments of the population do not have access to new media and, even if they did, most of the original news on the internet, for example, originates with newspapers, radio, and television.1,2
Evaluation of these arguments requires, among other things, measurement of the
societal benefits that arise from increased media diversity and localism. Policy makers
may want to use the most recent estimates of demand to measure consumer satisfaction with the local media environment. Because consumers do not have identical preferences, policy makers may also want to see how valuations vary with age, education,
gender, income, and race. The economic construct of willingness-to-pay (WTP)
provides a theory-based, dollar measure of the value consumers place on their local
media environment, as well as the amount they would be willing-to-pay for
improvements in the individual features that comprise their environment. Since media
environment is a mixture of private and public goods, indirect valuation methods, such
as those used in the environmental and transportation choice literature, are
appropriate.
This chapter uses data from a large, nationally representative survey conducted during March 2011 to estimate consumer demand for the local media environment, described by the offerings from newspapers, radio, television, the internet,
and smartphone. Household data, obtained from choices in a real market and an
experimental setting, are combined with a discrete-choice model to estimate the
marginal WTP for improvements in four local media environment features. They
are: the diversity of opinion in reporting information (DIVERSITY OF OPINION); the amount of information on community news and events (COMMUNITY NEWS); the coverage of multiculturalism, that is, of ethnic, gender, and minority-related issues
(MULTICULTURALISM); and the amount of advertising (ADVERTISING). Consumer satisfaction with diversity in media markets is measured by their WTP for
DIVERSITY OF OPINION and MULTICULTURALISM. Consumer satisfaction
with local programming in media markets is measured by their WTP for COMMUNITY NEWS. The full cost of their media environment is measured by their
monthly payments for media sources (COST) and the amount of advertising that
comes with their media environment.
Results show that the average price for a media environment was about $111
per month and the average consumer switching cost was about $26 per month.
1
U.S. Census Bureau (2009) data show that 64 % of households had internet access at the end of
2009. Data from Pew Internet and American Life surveys show that about 78 % of adult Americans used the internet as of May 2010 (see http://www.pewinternet.org/Static-Pages/TrendData/internet-Adoption.aspx). About 24 % of the 234 million mobile phone subscribers owned a
smartphone as of August, 2010 (ComScore 2011).
2
During 2009, Pew Research Center (2010) monitored 53 Baltimore newspapers, radio and
television stations, their associated web sites, as well as internet-only web sites. They found that
traditional media accounted for 93 % of the original reporting or fresh information on six major
news stories during the week of July 19–25.
Diversity of opinion and community news are important features of the local
media environment. The representative consumer is willing-to-pay $13 per month
for more viewpoints in the reporting of news and current affairs, and $14 per
month for more information on community news. Consumers also value more
information that reflects the interests of women and minorities, although the
willingness-to-pay is relatively small at about two dollars per month. Consumers
have a distaste for advertising and are willing-to-pay eight dollars per month for a
decrease in the amount of space and/or time devoted to advertising in their overall
media environment. WTP for diversity of opinion and community news increase
with age, education and income, while WTP for multiculturalism decreases with
age. Nonwhite respondents value the multiculturalism feature of their local media
environment. Specifically, nonwhite males and females are willing-to-pay about
$3.50 and six dollars per month, respectively, for more information that reflects the
interests of women and minorities.
We review the previous literature and then describe the experimental design,
survey questionnaire and data. We next outline the random utility model of media
environment choice, present demand estimates and calculate consumer valuations.
13.2 Review
Numerous studies in the social sciences examine new technologies and the consumption of news in media markets. Baum and Kendall (1999) present ratings data
that showed the share of households who watched prime-time presidential television appearances declined from 48 % in 1969 to 30 % in 1998. Two explanations are offered for this trend: the rise of political disaffection; and the growth of
cable television. Using National election study (NES) data, and controlling for
demographics and political affection, Baum and Kendall estimate the effect of
cable television on the individual’s probability of viewing the 1996 presidential
debate. They find that cable subscribers were less likely to have viewed the second
debate and conclude that because they have more viewing choices, cable subscribers with an entertainment preference do not stay tuned to the President.
Because of the increased availability of entertainment, Prior (2002) argued that
people with a preference for entertainment now consume less political information. He uses data from the NES and Pew Media Consumption Surveys from 1996
and 2000 to examine the relationship between cable television and the internet, and
knowledge about congressional house incumbents. Using a logistic regression
model that controls for demographics and political knowledge, Prior finds that
among people who prefer entertainment, greater access to new media is associated
with lower recall of house candidates’ names and their voting record.
Using survey data from over 16,000 adults in the Washington, D.C. area
between 2000 and 2003, Gentzkow (2007) estimated how the entry of online
newspapers affected the welfare of consumers and newspaper firms. Estimates
from a structural model of the newspaper market, comprised of The Washington
Post’s print and online versions and The Washington Times, suggest that the online
and print versions of the Post are substitutes. The online newspaper reduced print
readership by 27,000 per day at a cost of $5.5 million in print profits. For consumers, the entry of the online newspaper generated a per-reader surplus of $0.30
per day, equivalent to about $45 million in annual consumer welfare.
Byerly et al. (2006) interviewed 196 subjects in the D.C. area during 2006 to
investigate the consumption of news by minorities. They found that commercial
television and newspapers were the most important sources of local news and
information, while radio and the internet were among the least important. Subjects
who identified the internet as a new media source indicated that it was a supplement to other traditional media, rather than a sole source of news. The most popular preferences for important media sources were "completeness of information" and "a stronger focus on local issues with a minority angle."
Nielsen Media Research (NMR) and Pew Internet and American Life conduct periodic surveys of households that provide a time series for studying preferences and new technologies in media markets. For example, NMR (2007)
surveyed over 100,000 households during May and June, 2007 and found that new
media, such as cable television and the internet, have made substantial inroads into
traditional media’s market share. Cable news channels were the most important
household sources for breaking news, in-depth information on specific news and
current affairs, and national news, while the internet was the second most
important source. Broadcast television stations and local newspapers remain the
most important sources of local news and current affairs.
Purcell (2011) provided survey results from 2,251 households that show that
almost half of all American adults get at least some of their local news and
information on their cellphone or tablet computer. These mobile local news
consumers are relatively younger, have higher income, live in urban areas, and
tend to be parents of minor children. One-quarter report having an "app" that
helps them get information about their local community. Because local app users
also indicate they are not necessarily more interested in general or local news than
other adults, these findings suggest that the convenience of mobile news consumption, rather than quantity, is an important aspect of their preferences.
In summary, previous studies provide insights on consumer preferences for
news and current affairs, and how demand is affected by technology change. Many
of these studies, however, use attitudinal questions to describe general trends in
news consumption and media use. Moreover, most were based on data prior to
2007 and typically measure outcomes for only one of the media sources that
comprise the local media environment. This chapter uses the methodology
described by Savage and Waldman (2008) and Rosston et al. (2010), and survey
data obtained during March, 2011, to estimate consumer valuations for improvements in the diversity and localism features of their local media environment.
13.3 Data
13.3.1 Experimental Design
The WTP for local media environment features are estimated with data from an
online survey questionnaire employing repeated discrete-choice experiments. The
questionnaire begins with a cognitive buildup section that describes the
respondent’s local media environment in terms of the offerings from newspapers,
radio, TV, the internet, and smartphone. Respondents are asked questions about
their media sources, how much information they consume from each source, the
cost of their media sources, and the quality of the four different features of their
media environment described in Table 13.1.3
Cognitive buildup is followed by the choice experiments. Information from the
cognitive buildup questions is used to summarize each respondent's actual "status quo" (SQ) media environment at home in terms of the media sources they use to get their information; the levels of the DIVERSITY OF OPINION, COMMUNITY NEWS, MULTICULTURALISM, and ADVERTISING features of their environment; and their COST. A table summarizing the sources and features of the
respondent’s actual media environment at home is presented before the choice
task.4 The respondent is then instructed to answer the eight choice scenarios within
the choice task. In each choice scenario, a pair of new media environment options,
A and B, is presented. The two options provide information on news and current
affairs from the same set of media sources indicated by the respondent during
cognitive buildup, but differ by the levels of the features. Respondents indicate
their preference for choice alternative A or B. A follow-up question is then presented that asks respondents to make an additional choice between their preferred
alternative, A or B, and their actual SQ media environment at home. See Fig. 13.1
for a choice scenario example.
Market data from newspapers, radio and television stations, and internet and mobile telephone service providers, together with a pilot study and focus groups, were used to test and refine our descriptions of the features for choice alternatives A and B.5 Measures developed by Huber and Zwerina (1996) were used to generate an efficient
3
Respondents were asked to consider what is available in their local media environment, rather
than what they usually view or listen to. This represents a statement about the amount and quality
of information programming being produced by media sources for their consumption.
4
Contact the principal author for an example.
5
The first focus group, with a hard-copy version of the survey, was held on December 9, 2010,
in the Economics building at the University of Colorado at Boulder. Two men and two women, a
local service employee and three staff members of the Economics Department, took the survey
under supervision of the principal investigator and answered detailed questions regarding how
they interpreted the questions and what they were thinking when they answered them. The second
focus group, with an online survey, was facilitated by RRC Associates in Boulder on February 2,
2011. The group consisted of five diverse individuals with respect to age, gender, and internet
experience, who completed the survey sequentially in the presence of a professional facilitator.
Table 13.1 Media Environment Features
COST: The total cost of monthly subscriptions to all of the household's media sources, plus any contributions to public radio or public TV stations.
DIVERSITY OF OPINION: The extent to which the information on news and current affairs in the household's overall media environment reflects different viewpoints. Low: only one viewpoint. Medium: a few different viewpoints. High: many different viewpoints.
COMMUNITY NEWS: The amount of information on community news and events in the household's overall media environment. Low: very little or no information on community news and events. Medium: some information on community news and events. High: much information on community news and events.
MULTICULTURALISM: The amount of information on news and current affairs in the household's overall media environment that reflects the interests of women and minorities. Low: very little or no information reflecting the interests of women and minorities. Medium: some information reflecting the interests of women and minorities. High: much information reflecting the interests of women and minorities.
ADVERTISING: The amount of space and/or time devoted to advertising in the household's overall media environment. Low: barely noticeable. Medium: noticeable but not annoying. High: annoying.
nonlinear optimal design for the levels of the features that comprise the media
environment choice. A fractional factorial design created 72 paired descriptions of
media environment, A and B, that were grouped into nine sets of eight choice
questions. The nine choice sets were rebalanced to ensure that each household
faced a range of costs that realistically portrayed the prices for media sources in
their local media environment. For example, a respondent who indicated that they
pay nothing for their local media environment was exposed to a range of costs that
included zero dollars per month. Accordingly, COST1 ranged from $0 to $50 for
households that indicated that the total cost of their actual media environment at
home was less than or equal to $30 per month. COST2 ranged from $5 to $100 for
households that indicated that their total cost was greater than $30 but less than or
equal to $70 per month. COST3 ranged from $5 to $150 for households that
indicated that their total cost was greater than $70 but less than or equal to $120
per month. COST4 ranged from $10 to $200 for households that indicated that their
total cost was greater than $120 but less than or equal to $180 per month. COST5
ranged from $10 to $250 for households indicating that their cost was greater than $180 per month.6

[Fig. 13.1 Choice scenario example. Each scenario presents two new media environment options, A and B, described by their levels of diversity of opinion, community news, multiculturalism, advertising, and monthly cost; the respondent selects "I prefer option A" or "I prefer option B." A follow-up question then asks whether the respondent would actually switch from their actual media environment at home to the option they chose.]
The nine choice sets were randomly distributed across all respondents. Upon completion of the cognitive buildup questions, an online algorithm calculated each individual's total cost of their local media environment and assigned the appropriate cost range for the choice experiments: COST1, COST2, COST3, COST4, or COST5 (a sketch of this assignment rule follows below). To account for order effects that could confound the analysis, the order of the eight A-B choice questions within each of the nine choice sets was also randomly assigned across all respondents.
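This assignment rule reduces to a simple threshold function. In the Python sketch below the thresholds and range endpoints follow the text, while the function and variable names are ours.

COST_RANGES = {
    "COST1": (0, 50),    # actual cost <= $30/mo
    "COST2": (5, 100),   # $30 < cost <= $70
    "COST3": (5, 150),   # $70 < cost <= $120
    "COST4": (10, 200),  # $120 < cost <= $180
    "COST5": (10, 250),  # cost > $180
}

def assign_cost_range(actual_monthly_cost):
    # Map a respondent's reported total monthly media cost to a cost range.
    if actual_monthly_cost <= 30:
        return "COST1"
    if actual_monthly_cost <= 70:
        return "COST2"
    if actual_monthly_cost <= 120:
        return "COST3"
    if actual_monthly_cost <= 180:
        return "COST4"
    return "COST5"

print(assign_cost_range(111.20))  # COST3: the average respondent saw costs of $5-$150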
Because some of the data are from choice experiments, we need to be concerned with hypothetical bias and survey fatigue. Hypothetical bias arises when
the behavior of the respondent is different when making choices in a hypothetical
market versus a real market. For example, if the respondent does not fully consider
her budget constraint when making choices between hypothetical options A and B,
WTP may be overestimated, because the cost parameter in the denominator of the
WTP calculation (see Eq. 13.3 below) will be biased toward zero and the marginal
utility (MU) parameter in the numerator will be biased away from zero. This bias
is less of a concern in this study than in studies that ask consumers to value
environmental goods or advanced telecommunications services that are not provided in markets. Because most consumers have typically paid for some of their
different media sources in actual markets, they should have a reasonable understanding of their preferences for their local media environment, and how their
choices are constrained by their budget and time. Nevertheless, recent papers by
Cummings and Taylor (1999), List (2001), Blumenschein et al. (2008) and Savage
and Waldman (2008) have proposed methods for minimizing this source of bias.
This chapter follows Savage and Waldman by employing a follow-up question that
asks respondents to make an additional choice between their new choice, A or B,
and their actual media environment at home. This additional nonhypothetical
market information is then incorporated into the likelihood function that is used to
estimate utility parameters.
Survey fatigue can arise from a lengthy questionnaire and make estimates from
later scenarios differ from earlier scenarios. Carson et al. (1994) review a range of
choice experiments and find that respondents are typically asked to evaluate eight
choice scenarios. Savage and Waldman (2008) found there is some fatigue in
answering eight choice scenarios when comparing online to mail respondents. To
minimize survey fatigue in this study, the cognitive burden has been reduced by
dividing the choice task into two subgroups of four choice scenarios. Here, the
6
The limit of $250 per month is the total cost for a media environment with a seven-day subscription to a premium newspaper, such as the San Francisco Chronicle ($25), an "All of XM" subscription to satellite radio ($20), a premier subscription to cable or satellite television ($110), a subscription to very-fast internet service ($45), an unlimited data subscription for a smartphone ($30), and $10 monthly memberships to both NPR and PBS.
respondent is given a break from the overall choice task with an open-ended
valuation question between the first and second set of four scenarios.7
13.4 Survey Administration
Knowledge Networks Inc. (KN) administered the household survey online. KN
panel members are recruited through national random samples, almost entirely by
postal mail. As an incentive, panel members are rewarded with points for participating in surveys, which can be converted to cash or other rewards.8 An advantage
of using KN is that it obtains high completion rates and the majority of the sample
data are collected in less than ten days. KN also provides demographic data for
each respondent. Because these data are previously recorded, the length of the field
survey is shortened to less than 20 min, which ensures higher quality responses
from the respondents.
During the week of March 7, 2011, KN randomly contacted a gross sample of
8,621 panel members by email to inform them about the media environment
survey. The survey was fielded from March 11 to March 21. A total of 5,548
respondents from all 50 states and the District of Columbia completed survey
questionnaires for a response rate of 64.4 %. The net sample was trimmed by
eliminating: 341 respondents with a completion time of less than six and one-half
minutes; 46 respondents who skipped any questions in the choice task; 14
respondents who indicated that they pay $500 or more per month for the media
sources within their local media environment; eleven respondents who provided
incomplete cost information; and five respondents who provided incomplete
information on the features of their media environment.9 The median completion
time for our final sample of 5,131 respondents with complete information was
about 16 and three-quarter minutes. The panel tenure in months for final sample
7
For a robustness check, the baseline estimates of utility in Table 13.4 below for the bivariate
probit model were compared with estimates on the data for the hypothetical A-B choices only, as
well as with estimates on the data for the first four and second four choice questions, and similar
results were obtained.
8
Unlike convenience panels that only include volunteers with internet access, KN panel
recruitment uses dual sampling frames that include both listed and unlisted telephone numbers,
telephone and non-telephone households, and cellphone-only households, as well as households
with and without internet access. If required, households are provided with a laptop computer and
free internet access to complete surveys, but they do not participate in the incentive program. See
Savage and Waldman (2011) for a detailed description of panel recruitment and non-response.
9
The pilot study and focus groups indicated that the minimum time needed to complete the survey was about six or seven minutes. Because they may have been shirking, the 341 respondents with a completion time of less than six and one-half minutes were removed. Evidence from KN suggests that this behavior is not specific to the survey style or content. The sample's distribution of interview duration in minutes is similar to other KN surveys, with median completion times ranging from seven to 19 min.
respondents ranged from one to 136, with a mean of 41.18 and standard deviation
of 31.33. See Dennis (2009) for a description of the panel survey sampling
methodology.
Savage and Waldman (2011) present a selection of demographics for the U.S.
population, for all KN’s panel members, and for panel members who were invited
to participate in this survey. The demographics for all KN panel members are
similar to those reported by the United States Census Bureau (2009). Apart from
race and employment status, the demographics for the gross sample of panel
members invited to participate in this study and the final sample of respondents
who completed questionnaires also are similar to those reported by the Census
Bureau. However, estimates from the probit model that compares respondents’
characteristics between the gross sample and the final sample also indicate
potential differences in age, gender, education, and internet access between our
final sample and the population. We remedy this possible source of bias in our
results from step one and step two by estimating with weighted maximum likelihood. See Savage and Waldman (2011) for the probit model estimates and the
procedures used to develop the poststratification weights.
13.5 Media Environment at Home
Table 13.2 presents summary statistics for respondents' media sources. Columns two and three show that about 94 % of sample respondents watch television, about 81 % listen to the radio, and about 80 % use the internet. About
45 % of respondents read a paper or online newspaper regularly, and about 24 %
of sample respondents own a smartphone. On average, television viewers spend
about 1.9 h on a typical day watching television to get information on news and
current affairs, radio listeners spend about 1.4 h listening to the radio to get
information on news and current affairs, and internet users spend about one hour
online (e.g., MSN, Yahoo, radio and TV station web sites, journalists’ blogs) to get
Table 13.2 Summary Statistics for Media Environment Sources
Media source           Obs     Sample share (%)   Mean    s.d.    Min   Max
Newspaper              2,342   45.6               1.015   1.766   0     24
Radio                  4,154   81.2               1.423   1.873   0     24
Satellite radio        558     10.9               1.522   2.221   0     24
Television             4,856   94.6               1.953   2.172   0     24
Cable television       2,736   53.4               1.976   2.210   0     24
Satellite television   1,381   27.0               2.071   2.197   0     24
Own Internet           4,135   80.6               1.074   1.659   0     24
Smartphone             1,270   24.8               0.580   1.344   0     24
Obs is number of observations. Sample share is the percentage of the sample that uses the media source. s.d. is standard deviation. Min is minimum value. Max is maximum value. Own Internet is home internet service not provided by KN
Table 13.3 Summary Statistics for Levels of Media Environment Features
Feature                   Obs     Mean    s.d.    Min    Max
DIVERSITY OF OPINION      5,131   2.09    0.655   1      3
COMMUNITY NEWS            5,131   1.99    0.711   1      3
MULTICULTURALISM          5,131   1.83    0.705   1      3
ADVERTISING               5,131   2.29    0.682   1      3
COST ($ per month)        5,131   111.2   76.03   0      447
CONTRIBUTION ($ annual)   535     111.5   161.5   0.25   1,500
BUNDLE                    3,688   0.576   0.494   0      1
1 = "low", 2 = "medium" and 3 = "high" for DIVERSITY OF OPINION, COMMUNITY NEWS, MULTICULTURALISM, and ADVERTISING. CONTRIBUTION is the value of contributions to public radio and public television stations during the past 12 months. BUNDLE = 1 when subscription television service is bundled with internet service and/or other telephone services. Obs is number of observations. s.d. is standard deviation. Min is minimum value. Max is maximum value
information on news and current affairs. Newspaper readers also spend about an
hour a day reading the newspaper, while smartphone owners use their phone to go
online for about 0.6 h to get information on news and current affairs online.10
Summary statistics for media environment features are presented in Table 13.3. These data indicate that, on average, the levels of the DIVERSITY OF OPINION,
COMMUNITY NEWS, MULTICULTURALISM and ADVERTISING features were
about ‘‘medium.’’ About 58 % of respondents indicated that they bundled their
subscription television service with the internet and/or telephone service. The price
(or, COST) for the typical media environment ranged from zero to $447 per month,
with an average of $111.20 per month. About 10 % of the sample indicated that
they have contributed to public radio stations and/or public TV stations during the
past twelve months at an average of $9.30 per month.
13.6 Econometric Model
13.6.1 Random Utility Model
The random utility model is used to estimate marginal utilities and calculate WTP.
Survey respondents are assumed to maximize their household’s utility of the media
environment option A or B conditional on all other consumption and time allocation decisions. A linear approximation to the household conditional utility
(U) function is:
U* = b1 COST + b2 DIVERSITY OF OPINION + b3 COMMUNITY NEWS + b4 MULTICULTURALISM + b5 ADVERTISING + e     (13.1)

where b1 is the marginal disutility of COST; b2, b3 and b4 are the marginal utilities for DIVERSITY OF OPINION, COMMUNITY NEWS and MULTICULTURALISM; b5 is the marginal disutility of ADVERTISING; and e is a random disturbance.
10
The most popular media combinations are radio, television and the internet (about 30 % of sample respondents) and newspaper, radio, television and the internet (about 26 % of sample respondents).
The utility of each media environment U* is not observed by the researcher.
What is known is which option has the highest utility. For instance, when a
respondent chooses the new media environment option A over B and then the SQ over A, it is assumed that U*A > U*B and U*SQ > U*A. For this kind of dichotomous choice data, a suitable method of estimation is maximum likelihood (i.e., a form of
bivariate probit) where the probability of the outcome for each respondent-choice
occasion is written as a function of the data and the parameters. For details on the
econometric model, see Savage and Waldman (2011).
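A short simulation makes the structure of Eq. 13.1 concrete: utility is a linear index in the five features plus a random disturbance, and the researcher observes only which option wins. The Python sketch below uses the baseline marginal utility estimates reported later in Table 13.4 for concreteness and normalizes the disturbance to a standard normal; it is an illustration, not the authors' estimation code.

import numpy as np

rng = np.random.default_rng(0)

# b1..b5: COST, DIVERSITY OF OPINION, COMMUNITY NEWS,
# MULTICULTURALISM, ADVERTISING (baseline estimates from Table 13.4).
beta = np.array([-0.012, 0.160, 0.171, 0.022, -0.100])

def utility(features):
    # Eq. 13.1: linear index in the five features plus a disturbance.
    return float(beta @ np.asarray(features)) + rng.normal()

option_a = [45.0, 1, 2, 1, 3]  # $45/mo, low diversity, medium community news, high ads
option_b = [25.0, 2, 1, 1, 2]  # $25/mo, medium diversity, low community news, medium ads

# Only the binary outcome of the comparison is observed.
print("prefers", "A" if utility(option_a) > utility(option_b) else "B")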
13.6.2 Willingness-to-Pay
The marginal utilities have the usual partial derivative interpretation: the change in utility, or satisfaction, from a one-unit increase in the level of the feature. Given "more is better," the a priori expectation for DIVERSITY OF OPINION, COMMUNITY NEWS and MULTICULTURALISM is b2, b3, b4 > 0. For example, an estimate of b2 = 0.2 indicates that a one-unit improvement in DIVERSITY OF OPINION, measured by a discrete improvement from "Low = 1" to "Medium = 2," increases utility by 0.2 for the representative household. A higher cost and a higher amount of advertising provide less satisfaction, so b1, b5 < 0 are expected.
Since the estimates of MU, such as an increase in utility of 0.2 described above,
do not have an understandable metric, it is necessary to convert these changes into
dollars. This is done by employing the economic construct of WTP. For example,
the WTP for a one unit increase in DIVERSITY OF OPINION (i.e., the discrete
improvement from ‘‘Low’’ to ‘‘Medium’’) is defined as how much more the local
media environment would have to be priced to make the consumer just indifferent
between the old (cheaper but with only one viewpoint) media environment and the
new (more expensive but with a few different viewpoints) media environment:
b1 COST + b2 DIVERSITY OF OPINION + b3 COMMUNITY NEWS + b4 MULTICULTURALISM + b5 ADVERTISING + e
  = b1 (COST + WTPD) + b2 (DIVERSITY OF OPINION + 1) + b3 COMMUNITY NEWS + b4 MULTICULTURALISM + b5 ADVERTISING + e     (13.2)
where WTPD is the WTP for an improvement in DIVERSITY OF OPINION.
Solving algebraically for WTPD in Eq. 13.2 gives the required increase in cost to
offset an increase of b2 in utility11:
WTPD = -b2/b1     (13.3)

For example, estimates of b2 = 0.2 and b1 = -0.01 indicate that the WTP for an improvement in diversity of opinion from "Low" to "Medium" is $20 (= -0.2/(-0.01)).
This approach to estimating consumer valuations is used for all other features of
the local media environment. The WTP for COMMUNITY NEWS, MULTICULTURALISM and ADVERTISING is the negative of the ratio of its MU to the
marginal disutility of COST.
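Eq. 13.3 reduces to a one-line computation. The short Python sketch below applies it to the illustrative values quoted in the text (b2 = 0.2, b1 = -0.01) and to a hypothetical negative marginal utility, such as advertising's; the helper name is ours, not the authors'.

def wtp(marginal_utility, b_cost):
    # Eq. 13.3: WTP for a one-unit feature improvement is the negative
    # ratio of the feature's marginal utility to the marginal disutility of cost.
    return -marginal_utility / b_cost

print(wtp(0.2, -0.01))   # 20.0: $20/month for a "Low" to "Medium" improvement
print(wtp(-0.1, -0.01))  # -10.0: a one-unit increase in a disliked feature is
                         # worth -$10, i.e., a $10 WTP for a decrease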
13.7 Results
The discrete-choice data described above are used to estimate a bivariate probit
model of household utility from their local media environment. Since each pair of
binary choices, A versus B, and A or B versus SQ, for each choice occasion
represents information on preferences, the starting maximum sample size for
econometric estimation is n = 5,031 × 8 = 40,248. Because there are some
demographic differences between our final sample and the population, the random
utility model is estimated with weighted maximum likelihood, where the contribution to the log likelihood is the poststratification weight times the log of the
bivariate probability for the individual choice occasion.
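The weighting amounts to multiplying each occasion's log choice probability by its poststratification weight before summing. The sketch below, using NumPy and SciPy, illustrates that accumulation with a simple univariate probit standing in for the bivariate probability derived in Savage and Waldman (2011); all names and values are illustrative.

import numpy as np
from scipy.stats import norm

def probit_choice_prob(beta, occasion):
    # Stand-in probability for one binary A-versus-B comparison:
    # Phi((x_A - x_B) @ beta) if A was chosen, else its complement.
    # The paper's model is bivariate; this version only shows the weighting.
    x_a, x_b, chose_a = occasion
    p = norm.cdf((np.asarray(x_a) - np.asarray(x_b)) @ beta)
    return p if chose_a else 1.0 - p

def weighted_log_likelihood(beta, occasions, weights):
    # Each respondent-choice occasion contributes its poststratification
    # weight times the log of its choice probability.
    return sum(w * np.log(probit_choice_prob(beta, occ))
               for occ, w in zip(occasions, weights))

beta = np.array([-0.012, 0.160, 0.171, 0.022, -0.100])
occasions = [([25.0, 2, 1, 1, 2], [45.0, 1, 2, 1, 3], True)]
print(weighted_log_likelihood(beta, occasions, [1.3]))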
13.8 Baseline Results
Table 13.4 reports weighted maximum likelihood estimates of the baseline model
of household utility. MU parameters, asymptotic t-statistics for the marginal
utilities (t), WTP calculations (WTP) and standard errors for the WTP calculations
(s.e.) are presented in columns two through five. The estimate of the ratio of the standard deviation of the errors in evaluating the SQ alternative to the errors in evaluating the hypothetical alternatives, k = 1.49, is greater than one. Respondents appear to
have more consistency in choice when comparing the new media environment
options than when comparing a new option to their SQ alternative.
11
The discrete-choice model actually estimates b2/r and b1/r, where r is the scale parameter. The WTP calculation is not affected by the presence of the scale parameter because -(b2/r)/(b1/r) = -b2/b1.
Table 13.4 Baseline Estimates of Utility
                        MU       t       WTP      s.e.
DIVERSITY OF OPINION    0.160    44.83   $13.06   $1.35
COMMUNITY NEWS          0.171    50.45   $13.95   $1.35
MULTICULTURALISM        0.022    6.18    $1.82    $1.30
ADVERTISING             -0.100   23.37   $8.18    $1.33
COST                    -0.012   129.7
CONSTANT                0.319    35.21
k                       1.487    67.53
Likelihood              -1.092
Respondents             5,131
MU is estimate of marginal utility. t is t ratio for MU estimate. WTP is estimate of willingness to pay. s.e. is standard error of WTP estimate. k is the estimate of the ratio of the standard deviation of the errors in evaluating the status quo alternative to the errors in evaluating the hypothetical alternatives. Likelihood is mean log likelihood
Because consumers may have heterogeneous preferences for unmeasured aspects of media environment alternatives, utility is estimated with a constant to capture differences in tastes between the SQ and the new A and B media options. Holding all other features of the media environment constant, the difference in utility between the SQ and the new media environment option can be interpreted as the consumer's disutility from switching from the SQ to the new media environment. Dividing this difference by the marginal disutility of COST provides an estimate of the average consumer switching cost, here about $26 (= 0.319/0.012) per month. Another way of examining switching costs is to compare them with respondents' annualized average monthly cost of their media environment, here $1,334 (= 111.2 × 12). The estimated switching cost is about 23 % of annual consumer expenditures on the media sources that comprise their media environment. For comparison, Shcherbakov (2007) estimated that switching costs comprise about 32 and 52 % of annual expenditures on cable and satellite television services, respectively.
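The arithmetic behind these figures, as an illustrative sketch with the rounded Table 13.4 estimates:

constant, mu_cost = 0.319, -0.012                # taste constant and MU of COST
switching_cost = constant / abs(mu_cost)         # ~ $26.6 per month
annual_media_cost = 111.2 * 12                   # ~ $1,334 per year
share = switching_cost * 12 / annual_media_cost  # ~ 0.24, roughly the 23 %
                                                 # reported with rounded figures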
The data fit the baseline model well as judged by the statistical significance of
most parameter estimates. The marginal utility parameters for DIVERSITY OF
OPINION, COMMUNITY NEWS, and MULTICULTURALISM are positive and are
significant at the one percent level. The marginal utility parameters for COST and
ADVERTISING are negative and statistically significant at the one percent level.
The estimated signs for these media features imply that the representative consumer’s relative utility increases when: the information on news and current affairs
from different viewpoints is increased; the amount of information on community
news and events is increased; the amount of information on news and current
affairs reflecting the interests of women and minorities is increased; the amount of
space and/or time devoted to advertising is decreased; and the dollar amount the
household pays per month for their media environment is decreased.
DIVERSITY OF OPINION and COMMUNITY NEWS are important features of
the local media environment. Consumers are willing-to-pay $13.06 per month for
different viewpoints in the reporting of news and current affairs and $13.95 for
more information on community news and events. Consumers also value MULTICULTURALISM, although the willingness-to-pay for this feature is less precisely estimated. The results show that consumers would be willing-to-pay an
additional $1.82 per month for more information that reflects the interests of
women and minorities. As expected, consumers have a distaste for ADVERTISING. The representative consumer would be willing-to-pay $8.18 per month for a
marginal decrease in the amount of advertising they have to listen to or view.
13.9 Heterogeneous Preferences
Because they do not have identical preferences, it is possible that individual
consumer’s WTP for their media environment varies with observable demographics. For example, women and nonwhite households may have stronger
preferences for MULTICULTURALISM, and, because of a higher opportunity cost
of time, higher income households may have a stronger distaste for ADVERTISING. Differences in the marginal utility of all features to different households are
estimated by estimating the random utility model on various subsamples of the
data according to age, education, gender, income, and race. These estimates of the
random utility model for demographic subsamples are available from Savage and
Waldman (2011).
WTP for more information on community news and events increases with age,
from $8.96 to $20.78 per month. WTP for more information that reflects the
interests of women and minorities decreases with age, with the 60 years and over
group placing no value on this particular feature. Younger consumers have less
distaste for advertising. Respondents aged 18–44 years are willing-to-pay about
five or six dollars per month for a decrease in the amount of advertising in their
media environment, whereas respondents 45 years and over are willing-to-pay
about nine or twelve dollars per month.
WTP for diversity of opinion, information on community news and events, and information that reflects the interests of women and minorities increases with years of education. Respondents with no college experience do not value information that reflects the interests of women and minorities. Moreover, they are willing-to-pay about four or six dollars per month for a decrease in the amount of advertising in their media environment, compared with college-educated respondents, who are willing-to-pay about nine or ten dollars per month.
Valuations for diversity of opinion, information on community news and events, and (less) advertising all increase with income. Low-income respondents do not value information on news and current affairs that reflects the interests of women and minorities; however, middle- and high-income respondents are
willing-to-pay about $1.50 to $2.50 per month for more information that reflects
the interests of women and minorities.
The WTP for diversity of opinion, information on community news and events, and less advertising are similar across male and female respondents. However, while females are willing-to-pay about three dollars per month for information on news and current affairs that reflects the interests of women and minorities, males place no value on this type of information from their local media environment. White respondents are willing-to-pay more for diversity of opinion, information on community news and events, and less advertising than nonwhite households. White consumers do not value information on news and current affairs that reflects the interests of women and minorities. In contrast, nonwhite consumers are willing-to-pay about five dollars per month for more information that reflects the interests of women and minorities. This relationship is explored further by estimating the random utility model on subsamples of white versus nonwhite males and white versus nonwhite females. The results are similar in flavor to those reported for the male and female subsamples. Nonwhite males are willing-to-pay $3.48 per month for more information that reflects the interests of women and minorities, white females are willing-to-pay $1.52 per month, and nonwhite females are willing-to-pay $6.16 per month.
13.10 Conclusions
This study estimated consumers' demand for their local media environment, described by the offerings from newspapers, radio, television, the internet, and smartphones. Results show that the average price for a media environment was about $111 per month and the average consumer switching cost was about $26 per month. The representative household is willing-to-pay $13 per month for different viewpoints in the reporting of information on news and current affairs, and $14 per month for more information on community news and events. Consumers value more information that reflects the interests of women and minorities, although WTP for this is only about two dollars per month. Consumers have a distaste for advertising and are willing-to-pay eight dollars per month for a decrease in the amount of advertising in their media environment.
Two goals of U.S. media policy are to ensure that there are opportunities for different, new, and independent viewpoints to be heard ("diversity") and that media sources respond to the interests of their local communities ("localism"). By estimating consumer valuations for their local media environment, this study sheds some demand-side light on these goals. An interesting empirical extension would be to link measures of media market structure to consumer valuations of diversity and localism. For example, demand estimates could be used to calculate the effects on expected consumer welfare from a merger of two television stations that results in quality differences in diversity and localism between the pre- and post-merger markets.
References
Baum M, Kernell S (1999) Has cable ended the golden age of presidential television? Am
Political Sci Rev 93(1):99–113
Blumenschein K, Blomquist G, Johannesson M (2008) Eliciting willingness to pay without bias
using follow-up certainty statements: comparisons between probably/definitely and a 10-point
certainty scale. Econ J 118:114–137
Byerly C, Langmia K, Cupid J (2006) Media ownership matters: localism, the ethnic minority
news audience and community participation. In: Does bigger media equal better media? Four
academic studies of media ownership in the United States, Benton Foundation and Social
Science Research Council, http://www.ssrc.org/programs/media
Carson R, Mitchell R, Hanemann W, Kopp R, Presser S, Ruud P (1994) Contingent valuation and lost passive use: damages from the Exxon Valdez. Resources for the Future discussion paper, Washington, D.C
ComScore (2011) ComScore reports August 2010 U.S. mobile subscriber market share. http://www.comscore.com/Press_Events/Press_Releases/2010/10/comScore_Reports_August_2010_U.S._Mobile_Subscriber_Market_Share/(language)/eng-US. Accessed 31 March 2011
Cummings R, Taylor L (1999) Unbiased value estimates for environmental goods: a cheap talk
design for the contingent valuation method. Am Econ Rev 89:649–665
Dennis M (2009) Description of within-panel survey sampling methodology: the knowledge
networks approach. Government Acad Res Knowl Networks
Gentzkow M (2007) Valuing new goods in a model with complementarity: online newspapers. Am Econ Rev 97(3):713–744
Huber J, Zwerina K (1996) The importance of utility balance in efficient choice designs. J Mark
Res 33:307–317
List J (2001) Do explicit warnings eliminate hypothetical bias in elicitation procedures? Evidence
from field auctions for sportscards. Am Econ Rev 91:1498–1507
NMR (2007) How people get news and information. FCC Media Ownership Study #1,
Washington, D.C
Pew Research Center (2010) How news happens—still: a study of the news ecosystem of Baltimore. Pew Research Center Publication, http://pewresearch.org/pubs/1458/news-changing-media-baltimore. Accessed 31 March 2011
Prior M (2002) Efficient choice, inefficient democracy? The implications of cable and internet access for political knowledge and voter turnout. In: Cranor F, Greenstein S (eds) Communications policy and information technology: promises, problems, prospects. The MIT Press, Cambridge
Purcell K (2011) Trends to watch: news and information consumption. Presented to the Catholic
Press Association, Annual Meeting, 24 March 2011
Rosston G, Savage S, Waldman D (2010) Household demand for broadband internet in 2010.
B.E. J Econ Policy Anal Adv 10(1), Article 79. Available at: http://www.bepress.com/bejeap/
vol10/iss1/art79
Savage S, Waldman D (2008) Learning and fatigue during choice experiments: a comparison of
online and mail survey modes. J Appl Econ 23(3):351–371
Savage S, Waldman D (2011) Consumer valuation of media as a function of local market structure. Final report to the Federal Communications Commission's 2010 Quadrennial Media Ownership proceeding—MB Docket No. 09-182. May 30, 2011. Available at: http://www.fcc.gov/encyclopedia/2010-media-ownership-studies
Shcherbakov O (2007) Measuring consumer switching costs in the television industry. Mimeo,
University of Arizona
United States Census Bureau (2009) American Factfinder. United States Census Bureau,
Washington, D.C
Chapter 14
A Systems Estimation Approach to Cost,
Schedule, and Quantity Outcomes
R. Bruce Williamson
14.1 Introduction
The typical United States (US) weapons system has grown in cost, capability, and the time required for development and production. There is also a strong chance that fewer units of a given weapon system will be acquired than were initially contracted. Recent US fixed wing aircraft programs, such as the F-22 Raptor or the F-35 Joint Strike Fighter, are two examples of programs with greatly reduced delivery quantities, sharply increased per unit costs, and prolonged development and production schedules. These programs are not unusual in the history of US weapons acquisition. Cost growth, schedule growth, and quantity change are interrelated in some manner in most defence acquisition programs, but empirical work substantiating the magnitude and direction of the interaction of all three factors working together has been scarce.
There is evidence that the interrelationship creating the problem begins early in the life of a program, but often propagates as the program moves toward completion, or in rare instances, cancelation. The U.S. General Accounting Office (1992) noted that
(1992) noted that
In weapons acquisitions, optimistic cost estimates are rewarded because they help win
program approval, win contract awards, and attract budgetary income. The consequences
of cost growth are not directly felt by an individual program because they are ‘accommodated’ through stretch-outs and quantity changes and by spreading the pain across
many programs (GAO 1992; 40).
Inevitably, weapon systems will be affected by Congressional decisions resulting from Federal budget priorities, reduced strategic requirements (e.g., the end of the Cold War), or Congressional actions to perpetuate a defence program that the Department of Defence (DoD) may actually want to terminate, as well as by DoD decisions to spread budget in such a way as to make room for additional
desirable programs.1 As the final decision authority, ‘‘the temptations for Congress
to compromise by trimming a little out of each program instead of cancelling
whole programs is enormous’’ (Tyson et al. 1989). When this happens, program
managers in the DoD recalculate how many units they can buy and what current
program expenditures they can defer by shifting current development or production into the future, when the budget might be more plentiful. Aspects of the
Congressional, DoD, and contractor relationship also tend to encourage short-term
decision-making, so that the full consequences of the way in which cost, schedule,
and quantity choices interact to create out-of-control major defence acquisition
programs (MDAPs) may not be recognized for months or years, often after the
program staff, contractor teams, and Congressional supporters have retired or moved to different civilian or military jobs.

1 Defense Science Board (1978). The consequence is that for many large programs, when future year defence acquisition expenditures are charted against current year expenditures, the pattern begins to resemble the "bow wave" ahead of a ship.
Defence acquisition programs are certainly not unique in facing difficulties in
effective management of large-scale programs. Government acquisitions in public
works have also had similar, economically inefficient outcomes. Foreign governments have similar issues with major defence programs (Chin 2004). Because of
the large number of concurrent, large-scale programs in one Federal Department,
however, excessive cost growth or schedule growth for a specific system like the
F-35 aircraft impacts not only that weapon system but also the 90–100 other
MDAPs running concurrently. The combined effect across many programs
aggravates existing budget instabilities and steadily widens the gap between
planned program expenditure ex ante and required expenditure ex post. Learning
from the interaction between cost growth, schedule growth, and quantity change in
the management of MDAPs would benefit the DoD and Congress and enable a
better look at opportunities forgone from other weapons systems not funded,
delayed or assigned lower priority than prudent national security concerns would
warrant. It would have application to similar public investment programs, such as
infrastructure projects that require continuous funding approval.
The remainder of the chapter is organized as follows. The next section reviews
previous studies. The third section presents the methodology, followed by a section describing the data sample used. This is followed by an analysis and discussion of results, and a concluding section with recommendations.
14.2 Previous Studies
Peck and Scherer (1962) and Scherer (1964) appear to have been the first to apply
rigorous economic theory to understand the trade-off in measures of schedule
variance and cost variance in weapons program management. They estimated a
simple correlation of schedule variance and cost variance in weapons programs of
0.57, significant and positive. Peck and Scherer (1962) noted ‘‘the greater the
schedule slippage in a program, the larger the cost overrun tends to be.’’ Scherer
(1964) understood quantity decisions (i.e., how much to buy) to be ‘‘implicit’’ in
cost estimates but stopped short of a systems approach to explain possible endogeneity issues.2 Scherer (1964) also proposed a concurrent trade-off between time,
schedule, and weapons quality, but this insight was not pursued empirically.
Defence acquisition research since Peck and Scherer (1962) and Scherer (1964)
has focused separately on cost growth, schedule growth or quantity change. The
analyses usually rely on descriptive measures and seldom employ multivariate
methods to control for other correlated factors.3 The result in the literature has been
a large number of differing point estimates for different time intervals, data sets,
samples, types of MDAPs, types of contract mechanisms, and so on. At least one of
these studies has determined there is no empirical support for the relationship:
…we could find no relationship between cost growth and schedule slip. Both of these
variables are measures of program success, and two hypotheses about their relationship are
commonly asserted: (1) that there are tradeoffs between cost and schedule growth such
that a program can incur one and not the other, and (2) that they in fact occur together in
problematic programs. Even though these two hypotheses are opposed to each other, we
can find no support for either in our data (Drezner and Smith 1990; 45).
This is a surprising statement, not least because of the analysis of Peck and
Scherer (1962) and Scherer (1964), but also because Tyson et al. (1989) had
previously validated in an ordinary least squares (OLS) econometric framework
that there was a connection:
Development schedule growth, production stretch, and development schedule length are
the major drivers of total program cost growth. Production stretch in particular has
increased cost growth by 7–10 % points per unit increase in stretch (for example, by
doubling the production schedule length while keeping quantity constant) (Tyson et al.
1989; ix).
Among many studies reviewed by this researcher, it appears that only one study
(Tyson et al. 1994) implemented a systems approach to account for the potential
simultaneity between cost and schedule growth in their model. The authors
considered cost and schedule changes affecting tactical missile acquisition programs of the previous two decades.
The Tyson et al. (1994) analysis is the first example of an econometric systems approach (two-stage least squares) applied to the relationship between program cost growth and program schedule growth while controlling for observed quantity change in the regression.4 The first-stage regression predicted program schedule growth, while the second stage used the predicted schedule growth as an explanatory variable in the regression on predicted cost growth. Although the sample sizes were small, the estimates improved compared with a single-equation model of cost growth as a function of schedule and quantity change. The specific results are not easily compared because the data set involved only a small number of programs of one type, but the analytical insight to pursue a systems approach is an empirical landmark in the study of the interaction of cost growth, schedule growth, and quantity change in MDAPs.

2 For Peck and Scherer's analysis, "Implicit in the notion of cost is the aspect of quantity. The cost per unit of a weapon system determines the quantity of systems which can be obtained with any specified amount of resources" (Peck and Scherer 1962; Note 1, 251). This is in line with the prevailing industry and military practice of normalizing cost for any quantity changes in comparisons (e.g., quantity-adjusted cost). This has the tendency of casting quantity change as a predictor of cost change, but not the reverse, in which quantity change is understood as the dependent variable. See Scherer (1964).

3 Drezner et al. (1993) is one fairly common example of research practices in reviewing the key factors affecting cost growth, including budget trends, system performance, management complexity, and schedule-related issues. Their reliance on univariate and bivariate views of selected acquisition reports (SARs) data unfortunately allows important explanatory factors to be confounded.

4 See Tyson et al. (1994): "Examining the relationship between cost and schedule growth in development, we concluded that it was not appropriate to consider the major independent variable [Development Schedule Growth] to be exogenous…Therefore we estimated the tactical missile development relationship as a simultaneous system of equations."
14.3 Methodology
A complete systems econometric approach might consider as dependent variables
three measurable program phenomena: cost growth from budgeted amounts,
schedule slip, and quantity change. At least theoretically, three additional program
phenomena are known anecdotally but lack objective measures. One of these
additional phenomena is the alteration during a program’s life in capability, reliability, quality, or performance, which are implicit in the term ‘‘performance’’ that
will be used here. Two other important phenomena could also be mentioned, but
lack objective measurement: first, the level of risk assumed with the development
of new technology on a program; and second, the experience and leadership
manifested by the program management team on the project. This is a total of six
phenomena and explanatory factors on which a consistent set of metrics would be
needed for a complete systems approach on how critical trade-offs are made during
the life of a defence acquisition program.
In practice, program risk and program management experience have no metrics that a researcher might access. Further, the portion of the SARs which would acknowledge some aspects of performance trades, such as those from program evaluations, is not publicly available. This leaves the present analysis with only three publicly available metrics of weapon acquisition program success, that is, cost growth, schedule growth, and quantity change.
The econometric technique used in this analysis compares single-equation OLS
methods on cost growth, schedule change and quantity change independently, with
that of a systems estimation approach that considers that some portion of the error
in estimation is common to all three equations. The specific methods are described in Kmenta (1986) and Zellner (1962). Zellner (1962) considered the possibility of a relationship between the dependent variables which is not already incorporated in the set of separate regression equations that explain the dependent variables of interest. In fact, there could be mutually relevant information omitted from each equation in the set. This would be reflected in correlation between the regression disturbance terms, and as a consequence, the separate-equation OLS coefficient estimates would no longer be the most efficient or unbiased. The systems technique used here follows Zellner's seemingly unrelated regression (SUR) approach (Kmenta 1986).
The increase in estimation efficiency from the additional explanatory information in SUR should reduce the standard errors of the coefficient estimates if there is cross-equation correlation of the disturbance terms. If there is no correlation between the error terms, then the OLS single-equation estimator is as good as the SUR estimator (Greene 2000). SUR estimation efficiency improves the greater the correlation between the equation disturbance terms.
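The mechanics can be sketched compactly. The following is an illustrative feasible-GLS implementation of Zellner's SUR estimator under simplifying assumptions (balanced equations sharing the same observations, a single FGLS pass); it is not the code behind the chapter's estimates:

import numpy as np

def sur_fgls(X_list, y_list):
    """Feasible GLS for an m-equation SUR system on n shared observations.
    Returns the stacked coefficient vector and its covariance matrix."""
    n, m = y_list[0].shape[0], len(y_list)

    # Step 1: equation-by-equation OLS residuals.
    resid = np.column_stack([
        y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        for X, y in zip(X_list, y_list)
    ])

    # Step 2: estimate the cross-equation error covariance matrix (m x m).
    sigma = resid.T @ resid / n

    # Step 3: GLS on the stacked system with Omega = Sigma kron I_n.
    X_big = np.zeros((m * n, sum(X.shape[1] for X in X_list)))
    col = 0
    for j, X in enumerate(X_list):
        X_big[j * n:(j + 1) * n, col:col + X.shape[1]] = X
        col += X.shape[1]
    y_big = np.concatenate(y_list)

    omega_inv = np.kron(np.linalg.inv(sigma), np.eye(n))
    xt_oi = X_big.T @ omega_inv
    cov = np.linalg.inv(xt_oi @ X_big)
    beta = cov @ (xt_oi @ y_big)
    return beta, cov

Applied to the models estimated below, X_list would hold the three 67-observation design matrices (cost, schedule, and quantity equations) and y_list the corresponding dependent variables; when the off-diagonal elements of sigma are near zero, the result collapses to equation-by-equation OLS, as noted above.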
The value of improved estimates of the effects of program factors on each other is threefold. First, it sheds light on the size and direction of quantitative interactions of program phenomena that result from trade-offs made to satisfy budget, schedule, and quantity requirements. Second, it improves qualitative insight that these kinds of trade-offs are a regular occurrence in the course of an MDAP. Third, it allows for the testing of policy change and acquisition reform variables in a systems multivariate setting, to answer the long-debated question of whether any defence acquisition reform has produced a beneficial result in reducing cost growth, schedule growth, or quantity shrinkage.
14.4 Data
This study uses published SARs on MDAPs in a data set from RAND Corporation from March 1994.5 The RAND authors were consistent in the methods used to adjust estimated baseline and future costs for quantity changes, in restating all program cost data in constant 1996 dollars, and in documenting many data issues in both Jarvaise et al. (1996) and the SARs methodology monograph by Hough (1992). Missing values are a significant shortcoming of the RAND data set, and one not easily rectified.6 Data from McCrillis (2003) and McNicol (2006) were used to supplement the RAND data to permit testing of basic program risk characteristics as well as controlling for the potential effects of DoD and Congressional acquisition reforms during the time period considered.
5 RAND released their version of the SARs data (Jarvaise et al. 1996); the electronic version is available on the RAND website (www.rand.org) as an electronic data appendix to the Jarvaise et al. (1996) monograph.
6 For example, schedule data—perhaps a single missing planned date for MS B—impacts several variables calculated using that data as an end point for one interval, the start point for the next interval, and various ratios of program phase length and schedule slip that could be calculated from that variable. MDAPs that began reporting by the 1969 SAR had only fragmentary or inconsistent data covering their early years before the SARs began. See Hough (1992) and Jarvaise et al. (1996). For the present analysis, an MDAP sample of 244 programs is reduced by 50 % or more in multivariate estimation procedures as a result of the missing endpoints on schedule data.
The SARs rely on event-specific baseline estimates that are made at the program's first, second, and third milestones.7 These formal baseline estimates occur at the Concept Development stage, or program Milestone A, called a Planning Estimate; a second baseline estimate, called a Design Estimate, is made at Milestone B, at the start of the engineering and manufacturing development (EMD) phase; and a third baseline estimate, called a Production Estimate, is reported at Milestone C, the start of actual production. As a program proceeds, a Current Estimate is supplied in the SAR, either in the (December) year-end SAR summaries or the quarterly SAR updates.
The variables of interest in this analysis are derived from DoD program estimates posted at the baseline points mentioned above. The dependent variables in the analysis measure relative change in key program factors, instead of absolute dates, final quantity, or unit cost; the independent variables applied must also have plausible explanatory relationships with the relative change in the dependent variables.

7 The discussion is simplified to focus on the essential concepts required rather than burden the reader with DoD program terminology and acronyms. The reader is referred to the studies by Hough (1992) and Jarvaise et al. (1996) for additional technical detail on acquisition cycles, program structure, and weapon program reporting metrics.
Quantity change (variable name QTYCHG) is defined as the number of units to be purchased as measured at the program's latest available Current Estimate or last program baseline estimate, relative to the quantity that was expected to be purchased at the initial Design Estimate baseline. Here, quantity changes are expressed as a ratio, the numerator being the latest quantity estimate and the denominator being the quantity baseline estimate for the program phase(s) of interest in the analysis. The ratio of the latest Current Estimate quantity to the Design Estimate quantity at Milestone B exceeds 1.0 for approximately 29.5 % of programs; 15.3 % of programs are close to a ratio of 1.0; and about 55.2 % of MDAPs have a ratio less than 1.0.
MDAPs with quantity change ratios above 1.0 include programs like the F-16
aircraft, which was set for 650 aircraft at Milestone B, but eventually saw final
production of 2,201 aircraft with foreign sales. Programs with a quantity change
ratio in the neighborhood of 1.0 produce what was originally planned. Programs
with ratios less than 1.0 include those programs which had cuts in later quantities
relative to the initial order. This is the largest category of MDAPs. Most MDAPs,
for a variety of reasons, never produce the quantities they were expected to produce at the time of contracting.
Schedule change (variable SCHEDCHG) measures the program's actual schedule to completion relative to the planned schedule from the first available baseline estimate. Schedule is measured in months, and the variable is constructed as a percentage difference. Schedule change is often referred to as "schedule slip" as well. When a program is "stretched out" by Congressional or DoD budget decisions, it means that schedule slip has been built into a revision in the program's planning (U.S. Congressional Budget Office 1987).
In the sample used here, only 8.3 % of programs had negative schedule growth,
that is, they were completed in less time than expected. A much larger proportion of
programs (31.5 %) had no schedule growth at all, and 60.2 % of MDAPs had
positive schedule growth, taking months or years longer to complete than expected.
Cost growth (variable COSTCHG) is defined as the quantity-adjusted percentage change between the cost estimate at the last available reporting or baseline and the cost estimate at the initial SAR baseline for the program phase(s) of interest. Inflation is removed as a confounding factor in order to isolate real program cost growth, and cost growth is normalized for quantity change that may have occurred during the program. This procedure permits reasonable comparison of per unit costs across programs, despite there being numerous changes in planned quantities to be purchased and the budget required to purchase those changed quantities.8,9 The variable used here is calculated from the latest available program cost reported in a Current Estimate, relative to the initial Design Estimate baseline cost.
The cost growth variable in this analysis can be negative, zero, or a positive percentage change in real cost between the last Current Estimate of program cost and the Milestone B baseline cost estimate, again, adjusted for quantity change. The sample used here has 42.1 % of programs with negative real cost growth; 5.1 % of programs have real cost growth of zero; and 52.8 % of defence programs have positive real cost growth, all after adjustments for quantity change.
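An illustrative sketch (hypothetical field names, not the RAND data layout) of how the three dependent variables are built from a program's Design Estimate baseline and its latest Current Estimate, per the definitions above:

def program_outcomes(design, current):
    # quantities in units; schedules in months; costs in constant dollars,
    # already quantity-adjusted as described in the text
    qtychg = current["quantity"] / design["quantity"]                     # ratio
    schedchg = 100.0 * (current["months"] - design["months"]) / design["months"]
    costchg = 100.0 * (current["cost"] - design["cost"]) / design["cost"]
    return qtychg, schedchg, costchg

For the F-16 case cited above, 650 units planned at Milestone B against 2,201 eventually produced gives a QTYCHG ratio of about 3.4.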
The modeling variables used in this analysis are given with their summary descriptive information in Table 14.1. The characteristics of both the full data sample and the final estimation sample are shown. The primary difference in the two sample sizes results from the unfortunately high proportion (56 %) of missing values for the schedule slip variable (SCHEDCHG). Combined with the effect of missing values in other variables, the net result is that there are only 67 MDAP programs available for econometric analysis over the 1965–1994 period.
8 Because there have been many different purposes in calculating cost growth, it is possible to find cost analyses that focus on a specific time period, i.e., development cost growth or production cost growth. This analysis calculates cost growth from the Design Estimate to the last available Current Estimate, or to project completion, whichever happens first, and is a more comprehensive measure of overall program cost growth.
9 All SAR cost estimates throughout the life of a defence contract must be adjusted (or normalized) for effective cost (total and per unit) given the changing number of units to be purchased. How best to achieve this has been a topic of lengthy debate and refinement among defence analysts. The best exposition is in Hough (1992, 30–41), as well as in the earlier report by Dews et al. (1979, Appendix A).
Table 14.1 Descriptive summary for variables of interest in major defence acquisition program analysis

Variable (type, use in analysis): QTYCHG (ratio, dependent); SCHEDCHG (percentage, dependent); COSTCHG (percentage, dependent); PEO (binary, independent); MUNITION (binary, independent); FXDWING (binary, independent); NAVY (binary, independent); B1950SYR (time indicator, independent).

Full sample
Variable    N    Missing  % Missing  Mean   Median  S.D.    Min     Max      Skew   Kurtosis
QTYCHG      172  72       0.30       1.03   0.89    1.01    0.00    5.69     1.97   4.76
SCHEDCHG    108  136      0.56       18.13  7.71    28.73   -11.92  149.71   2.52   7.37
COSTCHG     159  85       0.35       29.47  4.22    115.11  -77.81  1071.21  6.76   54.45
PEO         232  12       0.05       0.30   0.00    0.46    0.00    1.00     0.87   -1.26
MUNITION    244  0        0.00       0.06   0.00    0.24    0.00    1.00     3.67   11.59
FXDWING     244  0        0.00       0.13   0.00    0.33    0.00    1.00     2.25   3.10
NAVY        244  0        0.00       0.41   0.00    0.49    0.00    1.00     0.39   -1.88
B1950SYR    228  16       0.07       29.02  29.00   8.56    10.00   50.00    0.02   -0.69

Estimation sample (N = 67 for all variables)
Variable    Mean   Median  S.D.   Min     Max     Skew   Kurtosis
QTYCHG      1.28   0.99    1.13   0.00    5.69    1.94   4.04
SCHEDCHG    21.03  14.59   28.22  -5.27   149.71  2.38   7.28
COSTCHG     20.57  5.62    46.10  -52.03  195.39  1.63   3.43
PEO         0.28   0.00    0.45   0.00    1.00    1.01   -1.02
MUNITION    0.07   0.00    0.26   0.00    1.00    3.34   9.45
FXDWING     0.15   0.00    0.36   0.00    1.00    2.04   2.22
NAVY        0.38   0.00    0.49   0.00    1.00    0.49   -1.81
B1950SYR    28.59  28.50   8.09   13.00   45.00   0.14   -0.91
Table 14.2 Correlation matrix for dependent variables

            COSTCHG    SCHEDCHG    QTYCHG
COSTCHG     1.000
SCHEDCHG    0.279      1.000
QTYCHG      0.321      -0.00495    1.000
The full sample and the estimation sample are not appreciably different in terms of the distributions of the key variables in the two samples. Distributions for the quantity, schedule, and cost growth variables are moderately right skewed and heavier tailed than normal distributions,10 although the estimation sample is less affected than the full sample by these characteristics.
Definitions for the regression exogenous variables are the following:
B1950SYR: Calendar year of program start minus 1950.
FXDWING: 1 = fixed wing aircraft procurement program, 0 = otherwise.
MUNITION: 1 = Munitions program, 0 = otherwise.
NAVY: 1 = Navy program; 0 = otherwise.
PEO: 1 = program started at the time of, or following, the creation of the DoD
program executive offices (PEO) in 1987; 0 = otherwise.
Table 14.2 displays the simple correlation between the three dependent variables for the estimation sample. There is no apparent correlation between schedule
change and quantity change, although the modest correlation between cost growth
and both of the other dependent variables will be investigated in the analysis that
follows. As noted earlier, Peck and Scherer (1962) calculated a simple correlation
of 0.57 between cost growth and schedule growth with a different sample of major
programs from the 1950s, while the correlation in the present sample is only 0.28.
The degree of correlation is a suggestion of the potential gain from the use of
systems methods in place of single-equation methods, as discussed previously.
The estimation and results using this (Table 14.1) data set are now examined.
14.5 Analysis and Results
Table 14.3 presents the results of the three single-equation OLS models in the center column, with the SUR system estimates provided in the rightmost column. The dependent variables are listed on the left side of the table. Table 14.4 provides goodness-of-fit measures that accompany the equation estimation results in Table 14.3.
10 A measure of kurtosis = 3 and skewness = 0 would describe a normally distributed variable.
Table 14.3 Results of the OLS single-equation and SUR system equation estimates^a

Dependent variable          Independent    Single-equation        SUR system
                            variable       coefficient (s.e.)     coefficient (s.e.)
COSTCHG (Cost model)        CONSTANT       -24.98 (29.12)         -48.10 (17.67)
                            FXDWING        21.54 (18.34)          34.30 (13.60)
                            PEO            -25.31 (24.13)         -35.26 (19.16)
                            B1950SYR       1.75 (1.26)            2.59 (0.82)
SCHEDCHG (Schedule model)   CONSTANT       34.41 (6.66)           34.37 (6.23)
                            FXDWING        -26.43 (5.02)          -27.55 (4.86)
                            PEO            -18.91 (5.99)          -18.96 (5.78)
                            NAVY           -10.10 (6.20)          -9.53 (5.51)
QTYCHG (Quantity model)     CONSTANT       1.59 (0.19)            1.61 (0.18)
                            PEO            -0.75 (0.20)           -0.75 (0.20)
                            MUNITION       -1.13 (0.22)           -1.39 (0.22)

^a Robust-White heteroscedasticity correction used
The regressions were estimated on several variables of interest to understand sign and significance, prior to the comparison with a systems approach using SUR estimation. The variables have minimal collinearity, are not serially correlated, and are adjusted with robust estimation to create heteroscedasticity-consistent standard errors.11
Turning now to the SUR estimates in the right-hand column of Table 14.3, the standard errors of the SUR coefficient estimates are reduced (and thus t-statistics improved) relative to the standard errors of the OLS coefficient estimates.
Overall measures of goodness of fit are presented in Table 14.4. The low R2 values suggest the models could be improved and sample issues resolved, although R2 has limited value in appraising SUR or systems models.
11 Only two variables (B1950SYR and the PEO dummy variable) show even mild collinearity, with a condition index of 10.86. All other variables produce condition indexes under 2.97. The Belsley-Kuh-Welsch guidelines are that a condition index greater than 15 indicates a possible problem and an index greater than 30 a serious problem with multicollinearity (Belsley et al. 1980). Serial correlation in a traditional sense is not present, although there may be a cohort effect for groups of programs going through the same program stage in the same years; with the limited sample available, however, partitioning the sample would leave few degrees of freedom for hypothesis testing.
Table 14.4 Model comparative diagnostics

                                   Single-equation models   SUR system model
Sample N                           67                       67
Time period                        1965–1995                1965–1995
Equation 1 (Cost model)
  R-squared                        0.054                    0.05
  S.E. of regression               45.98                    44.93
  Mean dependent variable          21.16                    21.16
  S.D. dependent variable          46.19                    46.19
  Variance of residuals            2114                     2019
Equation 2 (Schedule model)
  R-squared                        0.19                     0.19
  S.E. of regression               26.12                    25.33
  Mean dependent variable          21.18                    21.18
  S.D. dependent variable          28.41                    28.41
  Variance of residuals            682.1                    641.5
Equation 3 (Quantity model)
  R-squared                        0.15                     0.15
  S.E. of regression               1.06                     1.04
  Mean dependent variable          1.29                     1.29
  S.D. dependent variable          1.13                     1.13
  Variance of residuals            1.12                     1.08
The coefficients from the SUR estimation are substantially larger for the SUR cost
growth model, and less so for the schedule growth and quantity change models,
relative to single-equation coefficient estimates. Reduced variance of the estimation residuals and the greatly improved explanatory power of the variables used for
comparison suggest that there are efficiency gains in using the SUR approach
versus three independent OLS equations.
The next step is to use the improved coefficient estimates of the SUR model to
evaluate the sensitivity of the model to changes in key measures. In order to
calculate the sensitivity of the model and its effect on the dependent variables, one
needs to use the estimated coefficients and substitute in the values from the sample
means on factors one wishes to hold constant, while experimenting with setting
binary dummy variables at either their zero or one settings. The difference in the
predicted values of the dependent variable is the predicted impact on the model
when whatever condition the dummy variable represents is present versus when it
is not present.
For example, to determine the impact of fixed wing aircraft on cost growth from the SUR cost growth equation, the following is calculated:

Step 1: Predicted average cost growth with fixed wing aircraft programs present = -48.1 + (34.3 × 1) + (-35.26 × 0.28) + (2.59 × 28.59) = 50.5 % average increase in cost growth with fixed wing aircraft programs.
Step 2: Predicted average cost growth without fixed wing aircraft programs present = -48.1 + (34.3 × 0) + (-35.26 × 0.28) + (2.59 × 28.59) = 16.2 % average increase in cost growth without fixed wing aircraft programs.
Step 3: Net predicted impact on average cost growth from having fixed wing aircraft programs = (Step 1 result) - (Step 2 result) = 50.5 % - 16.2 % = 34.3 % net increase in cost growth due to fixed wing aircraft programs.
The same process is used to calculate the net impact of the other independent
variables in Table 14.5 by substituting the appropriate values for the coefficient
estimates and the sample mean value for the independent variables that are being
held constant in the comparison.
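An illustrative script of the three-step calculation, using the rounded SUR cost-equation coefficients from Table 14.3 and the estimation-sample means from Table 14.1:

coef = {"CONSTANT": -48.10, "FXDWING": 34.30, "PEO": -35.26, "B1950SYR": 2.59}
mean = {"PEO": 0.28, "B1950SYR": 28.59}

def predicted_cost_growth(fxdwing):
    # other regressors held at their estimation-sample means
    return (coef["CONSTANT"]
            + coef["FXDWING"] * fxdwing
            + coef["PEO"] * mean["PEO"]
            + coef["B1950SYR"] * mean["B1950SYR"])

step1 = predicted_cost_growth(1.0)  # ~ 50.5 % with fixed wing programs
step2 = predicted_cost_growth(0.0)  # ~ 16.2 % without
net = step1 - step2                 # ~ 34.3 % net impact

Because a dummy variable enters the equation linearly, the net impact equals its coefficient; the with/without levels nevertheless convey how large predicted growth is in each state.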
Fixed wing aircraft programs (FXDWING), on average, produce a net impact of
34.30 % on cost growth. They are also associated on average with a 27.55 %
reduction in schedule slip. The two outcomes may have much to do with program
priorities accepting more cost growth in order to keep vital aircraft programs on
schedule. Munitions programs (MUNITION) are significant only in the quantity change equation in the SUR model, where the net effect of a munitions program on quantity change is -1.39. One interpretation is that munitions programs tended to face significant quantity reductions when they exceeded their budgets in this time period.
The Navy service variable (NAVY) has a predicted net effect on schedule slip
of -9.53 %, which may be the result of program decisions where some cost
growth was preferable to schedule slip during this period.
The variable (PEO) for the creation of the PEO in 1987 plays a significant role in each equation.12 It is included as a policy variable of interest, deemed worth any reduction in estimation efficiency.13 In two of three equations in the SUR model, the PEO
variable significantly reduces predicted cost and schedule growth. The predicted net
effect from the creation of the PEO in 1987 was a 35.3 % net reduction in average
cost growth from the situation before 1987; for the schedule model, the creation of
the PEO correlates with a net 18.96 % reduction in schedule slip compared with the
pre-1987 period. In the quantity equation of the SUR model, the creation of the PEO
produces a net increase of 0.17 in the quantity change ratio compared with the
situation before 1987. The simplest explanation overall is that the creation of the
PEO significantly increased fidelity in program execution to the originally contracted cost, schedule, and quantity. In short, there is clear evidence that the
acquisition reform that created the PEO in 1987 actually worked as intended.
12 Other variables defined by McNicol (2006) and McCrillis (2003) also relate to time periods in which DoD cost estimation requirements were tightened, or in which DoD budgets were plentiful, but coefficient estimates for these additional variables were statistically insignificant in estimation. In contrast, the PEO variable coefficient is significant, with a p-value of 0.066 in the cost growth equation and p-values of 0.001 and 0.000 in the schedule change and quantity change equations, respectively.
13 Including the variable PEO in each equation is a compromise because it makes the right-hand-side variables across the set of equations in the SUR model more similar, which reduces the gain from using SUR methods. Ideally, each equation in a SUR system should have dissimilar explanatory variables but correlated dependent variables (Greene 2000).
Table 14.5 Predicted net change in SUR equation dependent variables

Dependent variable                  Dummy independent variable     Predicted      Predicted      Predicted
                                                                   (set to 1.0)   (set to 0.0)   net change
Cost growth (percent, constant $)   Fixed wing aircraft programs   50.52 %        16.22 %        34.30 %
Cost growth (percent, constant $)   Program executive offices      -4.16 %        31.09 %        -35.26 %
Schedule growth (percent)           Fixed wing aircraft programs   -2.11 %        25.44 %        -27.55 %
Schedule growth (percent)           Program executive offices      7.72 %         26.68 %        -18.96 %
Schedule growth (percent)           Navy programs                  15.50 %        25.03 %        -9.53 %
Quantity growth (ratio)             Program executive offices      0.75           0.58           0.17
Quantity growth (ratio)             Munitions                      0.01           1.40           -1.39

Note: All other variables on the right-hand side of the regression equations are held at their sample average value (for the estimation sample) reported in Table 14.1.
The variable not included in Table 14.5 is the year of program start (B1950SYR). Because of the way the variable is defined, one can estimate its net effect on predicted cost growth by calculating the effect on cost growth in, say, 1970, and then repeating the same process for 1980. The difference in the predicted cost growth estimated at the two points in time is the net cost growth during that interval. Using this approach, there is a net real cost growth of 25.94 % between 1970 and 1980. This result is particularly surprising because it is real cost growth not accounted for elsewhere. A secular trend in program cost growth averaging 2.6 % a year may reflect quality differences, performance, and technology not otherwise controlled for by the cost or program measures in the SUR model. One interpretation is that better technology in weapons programs appears to cost more over time, in real terms, all else held constant. The result would have to be compared with other technology sectors, but even with decreases in component costs for similar performance, the implication is that the integrated technology in weaponry increases, not decreases, in real cost over time.
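The arithmetic, as a one-line sketch with the rounded Table 14.3 coefficient (the 25.94 % figure in the text reflects unrounded estimates):

b1950syr_coef = 2.59                     # SUR cost-equation estimate
net_1970_to_1980 = b1950syr_coef * 10.0  # ten years between 1970 and 1980
                                         # program starts: ~ 25.9 %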
14.6 Conclusions
A systems approach using SUR finds efficiency improvements over the standard OLS approach used in previous research. In addition to the unsurprising findings on fixed wing aircraft programs, munitions programs, and Navy programs, the SUR systems model provides good support for the finding that the PEO actually achieved what they were intended to achieve in the time period covered by the model; that is, they reduced cost growth, reduced schedule slip, and maintained acquisition quantities, compared with the time period before the PEO came into existence in 1987. The matter of acquisition reform effectiveness is frequently debated, and empirical findings showing that a particular reform has worked are infrequent in defence economics. A second unexpected result was identifying a real cost growth trend of about 2.6 % per year in the data, with the possibility that this might reflect implicit quality change over time in weapons systems.
Although the systems approach to SARs data improved the parameter estimates
on all tested variables, it also suggests ways to improve the analysis. A full systems
model with endogenous variables included on the right-hand side was not tested, in
part because specification error becomes even more important. Future work will
investigate full information systems models for potential improvements in fit and
predictive power, but a key aspect in that will be resolving issues with data. In
addition, this paper did not investigate cross-equation restrictions, but the approach
might prove useful in other systems model specifications that test service-specific
or weapon system-specific samples.
Returning to the theoretical framework stated earlier, more work needs to be
done on the key metrics of MDAP outcomes which are not captured, or captured
poorly in the SARs data. There are other data sources internal to the DoD that need
to be considered, although there may be some compatibility issues with SARs data
due to definitional differences and intended use. An extended theoretical framework, and the data on MDAPs to test it, would improve one’s understanding of the
interaction of cost, schedule, quantity, performance, risk, and management experience in a way that could genuinely improve outcomes of defence acquisition
programs and help policy makers identify defence acquisition reforms that actually
work.
References
Belsley D, Kuh E, Welsch R (1980) Regression diagnostics: identifying influential data and
sources of collinearity. John Wiley, New York, NY
Chin W (2004) British weapons acquisition policy and the futility of reform. Ashgate, Burlington,
VT
Defense Science Board (1978) Report of the acquisition cycle task force. Office of the Under
Secretary of Defense for Research and Engineering, Washington D.C
Dews E, Smith G, Barbour A, Harris E, Hesse M (1979) Acquisition policy effectiveness: Department of Defense experience in the 1970s. RAND Corporation, Santa Monica, CA, R-2516
Drezner JA, Smith G (1990) An analysis of weapon system acquisition schedules. RAND
Corporation, Santa Monica, CA, R-3937
Drezner JA, Jarvaise JM, Hess R, Hough WPG, Norton D (1993) An analysis of weapons system
cost growth. RAND Corporation, Santa Monica, CA, MR-291
Greene WH (2000) Econometric analysis, 4th edn. Prentice Hall, Englewood Cliffs
Hough PG (1992) Pitfalls in calculating cost growth from selected acquisition reports. RAND
Corporation, Santa Monica, CA, N-3136
Jarvaise JM, Drezner JA, Norton D (1996) The defense system cost performance database.
RAND Corporation, Santa Monica, CA, MR-625
Kmenta J (1986) Elements of econometrics, 2nd edn. Macmillan, New York
McCrillis J (2003) Cost growth of major defense programs. Annual Department of Defense Cost
Analysis Symposium (ADoDCAS). Williamsburg, VA, 30 Jan
McNicol DL (2006) Cost growth in major weapon procurement programs. Institute for Defense
Analysis, Alexandria, VA, IDA P-3832
Peck M, Scherer FM (1962) The weapons acquisition process: an economic analysis. Harvard,
Boston
Scherer FM (1964) The weapons acquisition process: economic incentives. Harvard, Boston
Tyson K, Nelson JR, Om N, Palmer PR (1989) Acquiring major systems cost and schedule trends.
Institute for Defense Analysis, Alexandria, VA, IDA P-2201
Tyson K, Harmon B, Utech D (1994) Understanding cost and schedule growth in acquisition
programs. Institute for Defense Analysis, Alexandria, VA, IDA P-2967
U.S. Congressional Budget Office (1987) Effects of weapons procurements stretch-outs on costs
and schedules. Washington, D.C
U.S. General Accounting Office (1992) Defense weapons systems acquisition. Washington, DC.
HR 93-7
Zellner A (1962) An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias. J Am Stat Assoc 57(298):348–368
Part V
Conclusion
Chapter 15
Fifty Years of Studying Economics
Lester D. Taylor
15.1 Introduction
I have studied economics for more than 50 years and have always considered myself an econometrician in the original sense of econometrics, as "the use of mathematical and statistical methods to quantify the laws of economics." Over this time, I have, inter alia, estimated huge numbers of demand equations for the purpose of acquiring estimates of price and income elasticities, played around with non-least-squares methods of estimation, puzzled over what capital is and how money comes into being, and thought long and hard about how one might ground the theory of consumer choice in the neurosciences.1
Fifty years of thinking about how economies are organized and function is
obviously a long time, and, in these notes, I want to single out and discuss a set of
principles and relationships from my experience that I feel are not only invariant
with respect to time and place, but, even more importantly, are relevant to the
understanding of some of the major questions and issues, social as well as economic, that currently face the U.S. and the world at large. Topics to be discussed
include:
• Two important practical concepts from the theory of consumer choice;
• Unappreciated contributions of Keynes in the General Theory;
• Fluid capital and fallacies of composition;
• Transfer problems;
• Lessons from the financial meltdown of 2007–2009;
• A tax program for economic growth;
• A proposal for revamping social security.

1 Recently, I had the pleasure of spending an evening with a young economics Ph.D. from Mexico who will be joining the University of Arizona economics department this fall. At one point during the evening, he asked what I thought economics had to contribute to society and why one should study it as a discipline. With hindsight, I don't feel that I gave a good answer, and, in these notes and comments, I would like to try to do better. I am grateful to Harry Ayer and Richard Newcomb for comments and suggestions.
15.2 Two Important Concepts from Demand Theory
For years, the conventional (i.e., neoclassical) theory of consumer choice has been one of the great prides of economics, for, among other things, it provides a rigorous and elegant mathematical underpinning for the common-sense notion of a law of demand: that there is an inverse relationship between the price of a good and the amount of the good that a consumer is willing to buy. Macroeconomic theories over the years come and go, and to a lesser extent, the same is true of theories of production, but not the theory of consumer choice. For, although there have periodically been questions concerning its underlying assumptions, the theory has essentially retained (at least in mainstream economics) its present form since the early 1930s. It is, in short, one of the great invariants (along with the theory of least-squares estimation) in the core education of an economist. Nevertheless, a generation from now, the theory of consumer choice taught to first-year graduate students will be quite different than at present, in that ongoing research in behavioral economics will have come to the fore, and consumer and demand theory will be solidly grounded in the brain sciences.
This said, two concepts will, no matter what form future demand theory may take, be just as central then as they are now: the notion of a budget constraint and the existence of income and substitution effects. Budget constraints are simply manifestations of the fact that resources are scarce, and their importance to enlightened discussion really needs no comment. It is simply an inescapable fact that demand analysis in the future will, in one way or another, entail the allocation of a given amount of income among alternative ends.
Turning now to income and substitution effects, let us imagine that, from a
point of equilibrium, the price of a good that households consume increases.
Assuming that income is fixed, there will be two effects: (1) an income effect
arising from the fact that the given amount of income will now not be able to
purchase the same amount of goods and (2) a substitution effect arising from the
fact that, with the change in relative prices, consumers will direct expenditures
away from the now relatively more expensive good. In traditional demand theory,
the income effect is described as a movement to a different indifference curve
(lower in the case of a price increase), while the substitution effect is described by
a movement along a given indifference curve. Interestingly, however, these effects
are not dependent upon a grounding in utility theory and indifference maps but can
be derived, at least in rough form, from the budget constraint. To see this, assume
(for simplicity) that there are just two goods and let the budget constraint be given
by
y = p_1 x_1 + p_2 x_2    (15.1)
for prices p1, p2 and goods x1, x2. The total differential of the budget constraint will
then be given by
dy = p_1 dx_1 + x_1 dp_1 + p_2 dx_2 + x_2 dp_2.    (15.2)
Assume now that only p1 changes, so that dp2 = 0. Hence,
dy = p_1 dx_1 + x_1 dp_1 + p_2 dx_2.    (15.3)
For the substitution effect, assume that dy = x_1 dp_1 (which allows the original bundle to be purchased if desired), so that

0 = p_1 dx_1 + p_2 dx_2,    (15.4)

from which we can obtain:

dx_2/dx_1 = -p_1/p_2,    (15.5)

or in elasticity terms:

(dx_2/dx_1)(x_1/x_2) = -(p_1 x_1)/(p_2 x_2).    (15.6)

Therefore,

η_21 = -w_1/w_2,    (15.7)

where w_1 denotes the budget share of good 1 and similarly for w_2. Thus, we see that the substitution effect, calculated as the cross-elasticity from just the budget constraint (with no assumptions about an underlying utility function or whatever), is equal to the negative ratio of the two budget shares.
Obviously, this implies that the substitution effect will be larger the larger is the budget share of x_1 relative to that of x_2.
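The algebra of (15.3)–(15.7) can be checked mechanically. The sketch below (which assumes the sympy library; the symbol names are illustrative and nothing in it comes from the chapter itself) reproduces the derivation symbolically:

```python
# A symbolic check of (15.3)-(15.7), assuming the sympy library.
import sympy as sp

p1, p2, x1, x2, y = sp.symbols("p1 p2 x1 x2 y", positive=True)
dx1, dx2, dp1 = sp.symbols("dx1 dx2 dp1")

# Total differential with dp2 = 0 and the compensation dy = x1*dp1,
# which allows the original bundle to be purchased if desired:
compensated = sp.Eq(x1 * dp1, p1 * dx1 + x1 * dp1 + p2 * dx2)

slope = sp.solve(compensated, dx2)[0] / dx1   # dx2/dx1 = -p1/p2, eq. (15.5)
elasticity = sp.simplify(slope * x1 / x2)     # -p1*x1/(p2*x2), eq. (15.6)

w1, w2 = p1 * x1 / y, p2 * x2 / y             # budget shares
print(sp.simplify(elasticity + w1 / w2))      # prints 0, confirming (15.7)
```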
For the income effect, we have from expression (15.3), assuming dy = 0,

0 = p_1 dx_1 + x_1 dp_1 + p_2 dx_2.    (15.8)

If we now assume that dx_1 = 0, we will have:

0 = x_1 dp_1 + p_2 dx_2.    (15.9)

With nominal income fixed and the quantity of x_1 purchased unchanged, there will be a decrease in the amount that can be spent on x_2 of

dy = x_1 dp_1.    (15.10)

Hence, from expression (15.9), the income effect on x_2 will be

dy = -p_2 dx_2.    (15.11)

Therefore,
dx_2/dy = -1/p_2.    (15.12)

After multiplying both sides by y/x_2, the income effect can be written in elasticity terms as:

η_yx2 = (dx_2/dy)(y/x_2)    (15.13)
      = -y/(p_2 x_2)    (15.14)
      = -1/w_2.    (15.15)
Thus, we see that the income effect on x_2 of an increase in the price of x_1 (measured as an elasticity) is equal to the negative of the reciprocal of the budget share of x_2. Again, this is an expression that is derived from just the budget constraint. While this result might seem a bit arcane, all that is being stated is that the size of the income effect on other goods is directly related to the budget share of the good whose price has changed. Compare food, for example, which accounts for about 15 % of consumers' budgets, with jewelry, which accounts for less than 1 %. The (negative) income effect on transportation expenditures (say) of a 10 % increase in the price of food will obviously be much larger than that of a 10 % increase in the price of jewelry. Other obvious examples of goods for which price increases have large negative income effects are housing and motor fuel.
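To put rough numbers on the food-versus-jewelry comparison, the following is a hypothetical back-of-the-envelope sketch (the $50,000 income is assumed; only the 15 % and 1 % budget shares echo the text):

```python
# Hypothetical round numbers: the income figure is assumed, not from the text.
income = 50_000.0
shares = {"food": 0.15, "jewelry": 0.01}   # budget shares cited in the text

for good, w in shares.items():
    # With quantities unchanged, a 10% price rise absorbs w * 10% of income,
    # purchasing power that is taken away from all other goods.
    lost = income * w * 0.10
    print(f"10% rise in {good}: roughly ${lost:,.0f} "
          f"({w * 0.10:.1%} of the budget) lost to other goods")
# food: ~$750 (1.5% of the budget); jewelry: ~$50 (0.1% of the budget)
```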
15.3 Unappreciated Contributions of Keynes
in the General Theory
The economic crisis of 2007–2009 was, at base, a financial crisis which quickly
spilled over to the real economy. And while, for the most part, central banks reacted
in the appropriate manner to stem the financial crisis, the resulting deep recession has
been fought with traditional Keynesian tools. However, despite several years of
unprecedentedly low interest rates and record budget deficits, recovery is slow and
unemployment is extremely high. Why is this? The problem has been that the contributions of Keynes (1936) in the General Theory that are most relevant to the
current situation have been (and continue to be) ignored. These overlooked insights
are to be found in Chaps. 12 and 17 of the General Theory.2
The main focus of Keynes in these chapters is on the role of money in the face
of uncertainty (uncertainty, not risk; risk can be characterized in terms of a
probability distribution, while uncertainty cannot). Money, viewed as universal
purchasing power, is not only a medium of exchange but is also an asset that can
be used to transfer purchasing power over time. And the characteristic of money
2 Equally overlooked have been the contributions of the late Hyman Minsky, whose penetrating insights into the stability of monetary economies also take their cues from those chapters.
that most differentiates it from other assets (whether financial or real) is that its
price in the future (abstracting from inflation) is known today. One does not have
to convert money held as an asset into money as a medium of exchange through a
market. Its price is always one, with no transaction costs. This is the basis,
emphasized by Keynes in Chap. 12 of the General Theory, of the perfect "liquidity" of money.
Financial panics, such as the one of 2007–2009, involve the interaction of uncertainty, confidence, and trust with money. While it might be thought that confidence and trust are pretty much the flip side of uncertainty (in the sense that the presence of the former indicates the absence of the latter), this is not the case.
Uncertainty arises from the inherent unknowingness of the future and is essentially
ever present, while confidence and trust involve how this unknowingness is
approached in the making of decisions. Uncertainty is thus (at least in a sense) an
objective feature of the real world, while confidence and trust entail how economic
agents, both individually and en masse, react to and deal with uncertainty.
A frequently overlooked fact is that an economy has what in effect are two aggregate markets: a goods market and an assets market. In the goods market, flows are priced, while in most asset markets, it is stocks that are priced. The distinction is important, for while an increase in price leads to an increase in quantity supplied in a goods market, this is not possible in an asset market in which the stock is fixed. For assets like bonds, equity shares, and commodities, an increase in price, brought about (say) by position-taking motivated by enterprise or speculation, is likely to catch the attention of gamblers motivated only by the prospect that the price will be higher tomorrow than it is today. Although supply in these circumstances can increase in time, the delay in response can be considerable, providing an ample interval for bubbles to form.
Asset markets are orderly when there is broad distribution of price expectations
centered around the current price. Bubbles form when most expectations (though
not all, for otherwise there would be no sellers) are that prices are going to be
higher tomorrow than they are today. A bubble will be at a point of bursting when
expectations pile up on the current price. At this point, even one participant
deciding to sell can trigger a mass exit. Prices go into free-fall, and confidence—
and even trust—can collapse in the face of ever-present uncertainty. And when this
happens, only money provides the security of knowing what its price will be
tomorrow. Such was the situation in the fall of 2008 when, among other things,
banks would not roll over loans (despite being able to borrow from the FED on extremely favorable terms) to even their most creditworthy borrowers.
If there ever was a time that the U.S. economy was caught up in Keynes's
liquidity trap, the fall of 2008 was it. To see how (and why) uncertainty can have
such a devastating effect on the real economy, let us now turn to what is another of
the fundamental tools in economics, namely the present-value formula for evaluating a stream of prospective quasi-rents:
PV = Σ_{i=1}^{n} (R_i - C_i)/(1 + q)^i,    (15.16)
where R_i denotes the revenues expected in period i, C_i denotes the out-of-pocket costs of producing those revenues, q denotes the discount rate, and n denotes the
investment horizon. Per Fisher and Keynes, the necessary condition for the
investment to be undertaken is obviously for
PV ≥ C,    (15.17)
where C represents the cost (or price) of the investment. Uncertainty enters into a present-value calculation in four different places: in the choice of n and q and in the expectations of R_i and C_i.3
Specifically, increased uncertainty (or, equivalently, a decrease in confidence)
will lead to:
• An increase in the discount rate q;
• Reduced estimates of R_i;
• Increased estimates of C_i;
• A shortened investment horizon n.
Investments that might previously have been undertaken with enthusiasm can, as a result, no longer even be considered.
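A small numerical sketch may help fix ideas. In the toy calculation below, every figure is hypothetical and present_value is simply a direct coding of (15.16); individually modest shifts in q, R_i, C_i, and n jointly flip a project from satisfying (15.17) to failing it:

```python
# A toy present-value comparison; all numbers are hypothetical.
def present_value(revenues, costs, q):
    """PV of quasi-rents (R_i - C_i) over i = 1..n, discounted at rate q."""
    return sum((r - c) / (1 + q) ** i
               for i, (r, c) in enumerate(zip(revenues, costs), start=1))

C = 500.0                                # cost of the investment
R, K = [120.0] * 10, [30.0] * 10         # 10-year horizon of quasi-rents

calm = present_value(R, K, q=0.06)
# Increased uncertainty: higher q, revenues marked down 15%, out-of-pocket
# costs marked up 15%, and the horizon shortened from 10 years to 6.
anxious = present_value([0.85 * r for r in R[:6]],
                        [1.15 * k for k in K[:6]], q=0.10)
print(f"calm PV = {calm:.0f}, anxious PV = {anxious:.0f}, C = {C:.0f}")
# calm PV ~ 662 clears C = 500; anxious PV ~ 294 no longer does.
```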
15.4 Fluid Capital and Fallacies of Composition
Let us start by viewing production as having two sides (almost, but not quite, as in standard National Income Accounting): a goods side and an income side.4 Production not only creates goods, but it also generates income. These two sides of
production will be equal in value. The goods side will be valued at current prices,
while income on the income side can be viewed as an aggregate of claim tickets to
the goods measured in money. Per usual, two things can be done with income: it
can be spent for consumption or it can be saved. For the claim tickets spent on
consumption, the goods consumed obviously disappear, as do the claim tickets that
were tendered in their purchase.
However, for the claim tickets that are saved, goods from production equal in
value continue to exist. It is at this point that capital, or fluid capital (Taylor 2010),
comes into existence. To repeat: a stock of capital comes into existence with
3 If the investment is purchased in a spot market, C will be known. However, if it involves new construction over some horizon, then uncertainty will enter into the calculation of C as well, almost certainly to increase it.
4 At one point before the first edition of my book on capital and money (Taylor 2010) came out, I sent a few chapters to H. S. Houthakker for comment, which, interestingly, he declined to do, saying that he had never felt that he understood capital theory. I thought: if a mind like Henk's does not feel comfortable talking about capital, then what chance have I? Nevertheless, I felt that I had a pretty good idea about how to think about and define capital, and it is to this that I now want to turn.
saving that consists of goods on one side that are equal in value to the unexercised claim tickets (i.e., saved income) measured in money on the other side. This is the "capital" that is available to finance current production, to fund investment in new productive capacity, or to finance consumption in excess of current income.
The total stock of unconsumed goods in an economy and the associated claim tickets represent the pool of fluid capital because, like water, it is free to flow into any project or use to which it is directed.5 As income is almost always received as money, holders of new savings have to decide the form in which these are to be held, whether continuing in money (but now viewed as an asset, rather than a medium of exchange) or in some nonmoney asset. The task of the banking system and other financial intermediaries is to transfer the unused claims to the goods in the pool of fluid capital from those who do not wish to exercise them to those who, for a price, do.6
Since production takes time, materials have to be bought and wages paid, all in
advance of the revenues that finished production will generate. Production
accordingly has to be financed, which usually is done in the form of short-term
loans from the banking system. Money is thus constantly in the process of being created and destroyed: created when new loans are made and destroyed upon repayment. Note, however, that if all production costs are financed by bank loans and some of the income generated is saved, revenues will not be sufficient to repay the original loans.
This shortfall will be equal in value to the income that is saved. Obviously, for the banking system to be willing to roll over production loans, this situation requires a growing economy. Finally, it is to be noted that the part of income that is saved represents money that is not destroyed and hence, in effect, becomes a stock of "permanent" money.
When capital formation is looked at in this way, several constraints on an
economy’s asset values become readily apparent. The first is that assets, including
money, all take their values from the "claim-ticket" side of the pool of fluid
capital. This being the case, it follows that the aggregate value of all assets in an
economy can never be greater than the aggregate value of this pool evaluated in
current prices. And a corollary is that, since money is the ultimate asset in terms of
knowing its value in the future, this aggregate value also bounds the real value of
the stock of money. The second constraint that the pool of fluid capital imposes on
assets is that, over any period of time, asset prices in the aggregate cannot increase
faster than the amount of current saving, or, equivalently in percentage terms,
5 Fluid capital can be seen as the "putty" in the "putty-clay" models of the 1960s and 1970s.
6 It is useful to note that this approach to defining capital provides a root concept of capital for relating to one another the numerous concepts of capital that are found in the literature. To give a few examples: physical capital is fluid capital that has been embodied in produced means of production; working capital is fluid capital that is being devoted to financing current production; sunk capital represents that part of the original value of investment in physical capital that has not yet been repatriated into fluid capital through depreciation charges; finally, financial capital connotes immediate access to purchasing power (that is, to money).
faster than the rate of growth of fluid capital. The only way that asset prices in the
aggregate can increase faster than this is for money to be created in excess of that
needed to finance current production and the initial finance of investment in new
produced means of production. The upshot is that existing assets should never be
mortgaged in order to finance the purchase of other existing assets.7
A third consequence of the constraint imposed by the pool of fluid capital is that
there can be no such thing as an aggregate real-balance effect. To assume that there
can be, as is done in deriving an aggregate demand curve in standard macroeconomic textbooks, involves a fallacy of composition. Real-balance effects can exist
for individuals, but not for the economy overall. The reason is that the real value of
the stock of money is bounded by the value of the pool of fluid capital in current
prices, so that if goods’ prices should fall without limit, so too will the real value of
the stock of money, no matter whether it is of the outside (fiat) or inside (bank)
variety. The Pigou-Haberler-Patinkin real-balance effect on the stock of fiat money
pushing the consumption function sufficiently upward to reach full employment (in
an economy with perfectly flexible prices) simply does not exist.
A fourth feature of this way of viewing capital formation is that it is clear that
consumption in excess of current flow income is capital-consuming. Payments
from pensions and retirement accounts are obviously viewed as income to the
recipients, but for the economy as a whole, they are simply transfers out of the
pool of fluid capital. As such transfers are consumed, both goods and claims are
annihilated, and fluid capital is reduced. The capacity of the economy to produce is
not directly affected, but the capital available for funding current production and
financing new investment is necessarily reduced.
15.5 Wealth Transfers
Keynes made his international reputation in 1920 when he resigned his Treasury
position at the Versailles Conference to return to London to write his famous
polemic, The Economic Consequences of the Peace (1921). As the Allied Powers,
and especially France, viewed Germany as the aggressor in the 1914–1918 war,
Germany was made in the Treaty of Versailles to pay for the damage that had been
done in France and the Low Countries. This was the first of what was to become many twentieth-century "transfer" problems, by which is meant a transfer of wealth from one group or country to another. Other notable twentieth-century examples of transfer problems include the "tax" imposed on oil consumers by the OPEC oil cartel in the 1970s and the re-unification of Germany following the fall of the Berlin Wall in 1989.
7 Among other things, imposition of such a rule on the banking system would foreclose position-financing loans to hedge funds and the trading desks of both bank and nonbank financial institutions. These entities should speculate and gamble with their own money, not with that which has been newly created.
While Keynes did not use the language of "fluid capital," he clearly had it in
mind in arguing that the reparations being imposed on Germany represented a
charge on that country’s pool of fluid capital that could not be met through the
writing of a check. The only way that the reparations could be extracted from the
German economy was over a long period of time through surpluses in the balance
of trade—i.e., by German nationals consuming less than what they produced.
Keynes’s point in The Economic Consequences of the Peace was that the wealth
transfer that the Treaty was about to impose was too draconian for the German
economy ever to be able to execute. As was often the case, Keynes was right.8
The OPEC oil price increases of the 1970s, on the other hand, represented a transfer of wealth from oil consumers to oil producers, which involved a reduction in real income for countries that were net consumers of petroleum. This cut in real income came about because substitution away from petroleum in production and consumption was limited in the short run, so that the bulk of the adjustment had to take the form of negative income effects. For some groups in the U.S., these income effects were initially huge. A family living in New England, for example, with an income (say) of $10,000 and pre- and post-OPEC heating-oil expenditures of $600 and $1,600 per year, respectively, would have seen a reduction in real income of 10 %. For the U.S. at large, the reduction in real income occurred largely through inflation, with the general price level rising just a bit faster than wages and salaries. Put another way, the increase in the pool of fluid capital during the stagflationary 1970s was less than what it would have been in the absence of the OPEC oil price increases.
Finally, the transfer of wealth that occurred in the re-unification of Germany following the fall of the Berlin Wall was from the citizens living in West Germany to those living in East Germany. The transfer came about essentially in two forms. The first was triggered by the pegging at par of the East German Mark with the Deutsche Mark for most of the financial assets held by East German residents, while the second was brought about by making available to East Germans the same health and retirement benefits as those given to West Germans. As the "free" market exchange rate between the East and West German Marks was about five to one, the pegging of the two marks at par obviously involved a substantial transfer of wealth from West to East, a transfer, incidentally, that is still playing itself out.
Another transfer of wealth that is still playing out, but which has never been viewed as a transfer as such, is the opening of U.S. markets to finished consumer
goods after World War II, but without equal reciprocity for U.S. goods. Since most
of the world’s productive capacity at the end of the war was in the U.S., and
similarly for much of the world’s GDP, the stage was set, by setting exchange rates
that greatly overvalued the U.S. Dollar, for the Japanese and Western European
economies to hasten their rebuilding through exports into the American market.
8 At essentially the same time that Keynes was writing Consequences, Schumpeter (1954) published a paper in German that gives the best treatment of the transfer problem that I have seen in the literature.
While this was an enlightened policy, and represented an important lesson learned
from the disastrous aftermath of World War I, it gave rise to asymmetric trade
practices that continue to the present. Because of nontariff trade barriers in most
countries that have large positive trade balances with the U.S., it is virtually
impossible for high-valued U.S. manufactured consumer products to gain entry
into foreign markets. The transfer from U.S. consumers that was meant to help the countries ravaged by World War II get back on their feet now works greatly to the benefit of the emerging economies. To put it another way, asymmetric trade
policies create both transfers and transfer problems.
15.6 Lessons from the Financial Meltdown of 2007–2009
The last four years have not been particularly kind to the economics profession, particularly to macroeconomists. However, at this point, I want to turn not so much to what went wrong as to the lessons from the financial meltdown of 2007–2009 that are to be drawn. Most of the points have already been alluded to.
Lesson No. 1
Asset prices cannot, over a sustained period of time, increase faster than the
increase in the economy’s pool of fluid capital.
We have seen that assets take their value from the savings embedded in the pool
of fluid capital, and, accordingly, can only increase in line with its increase, which
(for a constant saving rate) will be at the same rate as the rate of growth in the
economy. Asset prices that increase faster than this over a long interval of time
involve bubbles and can only be sustained by the creation of money.
Lesson No. 2
There should be no "casinos with banks" in financial markets. In other words,
financial institutions should not be allowed to trade using newly created money
from bank loans.
Most trading activity involves the exchange of money for some other asset in
pursuit of capital gains. However, while a capital gain can be viewed as income to
an individual, for the economy as a whole, the only income generated is represented by brokerage fees. The rest is simply a transfer of existing assets between
parties. Let parties trade all they want, but with their own money.
Lesson No. 3
As a corollary to the above, new money should be created by banks only through
loans to fund current production and to provide bridge finance for investment in
new productive capacity.
In other words, commercial banks should follow the venerable "real-bills"
doctrine of money creation.
Lesson No. 4
As a further corollary to Lesson No. 2, commercial banking should once again be separated from investment banking, as under the Glass–Steagall Act.
Lesson No. 5
As a third corollary to Lesson No. 2, the Federal Reserve Bank (FED) should monitor asset prices as well as goods' prices, both specifically and in the aggregate, in order to assess whether any run-up in asset prices is the result of normal economic activity rather than of the creation of money.
Lesson No. 6
The essential function of a Central Bank during a financial panic is to act as a lender of last resort.
Confidence collapses in the face of uncertainty in a panic, and money is the
secure asset of choice. The Central Bank should make it available in the quantities
demanded, though at a price (as Bagehot famously said). However, this should be
done through the Bank’s discount window, rather than via open-market purchases.
Once liquidity preference begins to lessen, the liquidity injected should similarly
begin to be withdrawn. The FED behaved admirably in the fall of 2008 in dealing
with the panic, but whether it will be successful in destroying the liquidity created
is another matter.
15.7 A Program for Economic Growth
The big problem in the U.S. economy today is that the economy is still in the strong grip of uncertainty, which unprecedentedly low interest rates and record budget deficits have not been able to dispel. What is needed is something major to jolt expectations and get Keynes's (and Shiller's) "animal spirits" stirring so that
economic growth can resume. The prime place to start on this is with a major
overhaul and revamping of the Federal tax system. The overhaul includes the
following elements:
• Elimination of the corporate income tax. All corporate earnings, whether
retained or distributed as dividends, would be treated as shareholder
income;
• Elimination of all forms of capital-gains taxation. Capital gains (as has been
noted) are not true income, but rather reflect interest-rate changes and reallocation of existing assets among holders; hence, levies on them simply represent
additional taxation on income of yesteryear;
• Replacement of the personal income tax by a broad-based consumption tax that
would have a generous initial exemption and could be mildly progressive;
• Permanent elimination of estate and inheritance taxes.
A Federal tax system with these elements would be strongly encouraging of
saving and investment, and therefore of economic growth, and would also provide
a much broader tax base than at present. As will be suggested in a moment, with the integration of a capital budget into the budgeting process, tax rates could be set so as to provide revenues to cover current government expenditures (including interest
and depreciation on social infrastructure), with government borrowing reserved for the funding of investment in new infrastructure.
15.8 Proposals for Revamping Social Security
and Development of a National Capital Budget
A fifth element in a Federal tax overhaul would be a replacement of the present system of Social Security financing, one that would deal not only with its future funding but also with the broader issue of public involvement in retirement saving.
While benefits from Social Security depend upon total contributions, these are
determined by a mandated formula. Contributions are thus in essence a tax for
funding those in retirement, and benefits represent transfers of income from those
working. There is neither any control by the individual over their contributions nor
any sense of ownership.
This proposal would involve a replacement of the present system of Social
Security financing by a system of combined mandatory/voluntary contributions
that are (a) individually owned, (b) earn a risk-free time-varying rate of interest,
and (c) form a fund from which individual retirement benefits would be paid.
The broad features of such a scheme would be as follows:
1. Accounts would be both individual and individually owned but would be
custodially administered by the Social Security Administration.
2. Accounts would be funded by a mandatory contribution of (say) 8 % of an
individual’s annual income (possibly independently of whether they were in the
labor force), plus an additional voluntary contribution of any amount. With a
Federal consumption tax in place, contributions would be tax-free.
3. At the beginning of each year, accounts would be credited with interest at a rate
equal to the previous year’s average rate on 20-year Treasury Bonds.
4. Individuals could begin drawing from their accounts (but would not be required
to do so) at age 65, whether or not they were still working. A certain portion of
the account would be required to be annuitized, but the remainder could be
withdrawn, within limits, in any way desired. A nonzero balance in the voluntary part of an individual’s account at the time of death would be heritable.
The virtues of such a scheme (which obviously would have to be phased in over a period of years) would include: (1) much of the original motivation for the Social Security System as a "social safety net" is retained; (2) accounts are individualized and made partially voluntary (but are not privatized); (3) a risk-free market return on contributions is guaranteed; and (4) most important of all, the program is self-financing and continually funded.
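For readers who want to see the mechanics, the following toy sketch (the accumulate function and all incomes, rates, and amounts are illustrative assumptions, not part of the proposal) rolls features 2 and 3 forward over a working life:

```python
# A toy accumulation of the proposed account; the income, the Treasury rate,
# and the voluntary top-up are all assumed for illustration.
def accumulate(incomes, treasury_rates, voluntary=0.0, mandatory_rate=0.08):
    balance = 0.0
    for income, rate in zip(incomes, treasury_rates):
        balance *= 1 + rate                  # interest credited on prior balance
        balance += mandatory_rate * income + voluntary
    return balance

incomes = [60_000.0] * 40    # 40 working years at a flat income
rates = [0.04] * 40          # assumed average 20-year Treasury rate
print(f"balance at retirement: "
      f"${accumulate(incomes, rates, voluntary=1_000):,.0f}")
# roughly $551,000 under these assumptions
```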
There remains, of course, the question of what should be done with the savings
that such a scheme would generate. Any idea that contributions could be
sequestered (as in the "lock box" proposed by Albert Gore in the 2000 Presidential
Election) is obviously fiction, as contributions are simply revenues to the government, and any excess over withdrawals will be expended in one way or another.
The ideal disposition, it seems to me, would be for the government to invest excess
contributions in social infrastructure projects that add to the economy’s capacity to
produce.
This, accordingly, brings me to another proposal, which is to develop and
integrate a Capital Account into the Federal budgeting process. Having such an
account would lead to a number of benefits, including (1) making both apparent and transparent the amount of national social infrastructure in the economy,
together with the cost of its maintenance and (2) allowing for clear-cut distinctions
between current government expenditures, transfer payments, and investment in
social infrastructure. Having a capital account in the Federal Budget would also
allow for informed argument to develop on the proper acquisition and disposition
of Federal revenues and debt; for example, whether tax rates should be set to cover
current government expenditures and new long-term Federal debt incurred only for
funding government investment.
15.9 Conclusion
The intent in these notes has been to set out, based on my 50 years of thinking
about and working in economics, what I feel I really know about the subject and to
apply this knowledge to a number of pressing economic and social issues. I think
that the concept of fluid capital has been my best idea—even more so than the
development of the Houthakker-Taylor models (whose main idea, anyway, was the
brainchild of Henk Houthakker)—and how this provides a simple framework for
analyzing the way that an economy functions and ticks. However, given the nature
of this volume, a few words in closing about telecommunications demand and the
challenges that its modeling currently faces. In many ways, I was lucky to come
onto telephone demand when I did in the mid-1970s, for the industry was still
technologically stable, the questions asked were straightforward to analyze, and
excellent data were readily available. The biggest challenges in modeling were
how to deal with network externalities, the distinction between access and usage,
and how to deal with multipart tariffs. Devising ways to solve these problems—
many of them resulting from collaboration with Gail Blattenberger and Don
Kridel—was a tremendous amount of fun.
However, with the breakup of the Bell System in 1984, things obviously
changed. Competition emerged in a major way, technological change fostered new
ways of communication, and data became fragmented and proprietary. Regression
and quantal choice models are of course still going to be workhorse methods of
analysis, but the days of analyzing a household’s demand for communications
services in a two-equation access/usage framework using comprehensive time-series data in estimation are long in the past. And I hardly need to
mention the problems posed by the many forms that technology now allows
information exchange to take and the multitude of services that can be demanded.
Data used in estimation now almost certainly have to be cross-sectionally derived
from surveys, while in the modeling of demand for new products, data will
increasingly have to be obtained from contingent-valuation surveys of the type that
Paul Rappoport, James Alleman, and I have used in analyzing the demands for
VOIP, FIOS, and other broadband services. With these challenges on the horizon,
it is a great time to be an econometrician motivated to model information demand.
I only wish I still had the energy and ability to take them on.
References
Keynes JM (1921) The economic consequences of the peace. Macmillan, New York
Keynes JM (1936) The general theory of employment, interest, and money. Macmillan, New York
Schumpeter JA (1954) The crisis of the tax state (trans: Stolper WF, Musgrave RA). In: Peacock A et al (eds) International economic papers, vol 4. Macmillan, pp 5–38 (Reprinted in: Swedberg R (ed) (1991) Joseph A. Schumpeter: the economics and sociology of capitalism. Princeton University Press, Princeton)
Taylor LD (2010) Capital, accumulation, and money, 2nd edn. Springer, Berlin
Chapter 16
Concluding Remarks
Áine M. P. Ní-Shúilleabháin, James Alleman and Paul N. Rappoport
Our purpose here is not to summarize all of the articles; that has been done in the introductory chapter. We wish instead to stress the importance of evidence-based research, the importance of quality data, and the variety of methodologies available to achieve these aims.
The chapters in this volume present a steady cascade of intriguing empirical and
theoretical results that move the research process forward. This excellent series of
analyses of demand in the information, communications, and technology (ICT)
sector provides a broad-ranging overview—rigorous yet highly accessible—for
economists and econometricians focusing upon this sector.
The volume has developed some innovative tools and techniques. Taylor
emphasized how nonlinear estimation can be a vital tool in econometrics. It is
hardly any more difficult to develop the notions of least-squares estimation (and of
statistical inference based upon such estimates) in a nonlinear context. Cogger’s
framework of piecewise linear estimation also adds more precision to estimates.
Inferring qualitative data on socioeconomic class, employment status, education, and the like—given their discrete nature—requires an entirely different set of
tools from those applied to purely quantitative data. Models with endogenous
qualitative variables and dichotomous models are well applied in this volume of
essays in the chapters authored by Banerjee et al., Beard et al., Garin-Munot and
Perez-Amaral, Levy and Tardiff.
Other applications have practical implications, ranging from cost saving in management (Williamson) to even more critical, life-saving applications such as avalanche forecasting (Blattenberger and Fowles).
Nevertheless, more research remains to be done (as usual). In particular, there is considerable room for the use of more advanced econometric techniques in the estimation and analysis of economic demand, as well as supply. This is not the place to outline these tools and techniques, but simply to alert the reader that these papers do not provide a complete set of tools; many areas of research remain open.
There are many areas for future theoretical and methodological developments in demand/forecasting research, as well as their applications, in the ICT/TMT sector. A wealth of additional techniques and forms of analysis and estimation are available. These provide fertile ground for further research, to which we can look forward. Our hope is that the works reported herein are exciting enough to stimulate further research and analysis in this growing and compelling field of ICT.
Appendix
The Contribution of Lester D. Taylor
Using Bibliometrics
Sharon G. Levin, Stanford L. Levin
It is common practice that a Festschrift lauds the accomplishments of the
individual being honored. Indeed, the intent of this paper is no different. This is
accomplished by using a bibliometric approach. Not only are publications and
citations counted, as is the tradition in the sciences, but Professor Taylor's contributions to the knowledge base in consumer demand and telecommunications demand are traced in greater depth. To do so, data are extracted from the Institute for Scientific Information's (ISI's) Web of Knowledge, now Thomson Reuters Web of Science (WoS), still commonly referred to as ISI, and from Google Scholar,1 and the visual processor SmartDraw2 is used to conduct a geospatial analysis of Taylor's contributions in the areas of telecommunications and consumer demand.
A.1 Brief Background
Lester D. Taylor received his PhD in Economics from Harvard University in 1963.
He served as an instructor and assistant professor at Harvard before joining the
Economics Department at the University of Michigan in 1969 and rising to the rank of associate professor in 1974. He joined the University of Arizona as a full-time professor in 1974 and retired with emeritus status in 2004.
Taylor’s first journal article appeared in The Review of Economics and Statistics
in 1964. The first edition of the seminal work, Consumer Demand in the United
States, co-authored with H. S. Houthakker, was published in 1966. Professor
Taylor’s research career extends more than 45 years. Indeed, the year 2010 saw the
1 While Elsevier's Scopus is also available, its usefulness in the present study is severely limited by the fact that it only contains citations that have been made since 1995. Since much of Taylor's work appeared and likely was cited much earlier than this, the impact of his work using Scopus would be markedly underestimated.
2 SmartDraw (trial version). http://www.smartdraw.com/.
publication of the third edition of Consumer Demand in the United States and the
second edition of his solo-authored book, Capital, Accumulation, and Money: An
Integration of Capital, Growth, and Monetary Theory. In total, Professor Taylor
has authored or co-authored 55 papers in refereed journals and edited volumes and
12 books or monographs.
A.2 An Overall Assessment Using Bibliometrics
Citation analysis is a common way to evaluate an author’s research impact.3
Pioneered by Eugene Garfield more than fifty years ago with the establishment of
the ISI in 1958 (subsequently acquired by the Thomson Corporation in 1992) and
the creation of the Science Citation Index in 1964 (now part of the WoS), citation
analysis has become a mainstay of scholarly evaluations. Bibliometric measures
that count co-authorships, co-inventors, collaborations, references, citations, and
co-citations are widely used in many fields of science to analyze the popularity and
impact of specific articles, authors, and even research groups or institutions.
Indeed, the United Kingdom (UK) government is using bibliometric indicators in
its Research Excellence Framework, a program designed to allocate research
funding based on an assessment of the quality of the research produced in the UK.4
This Appendix relies primarily on Google Scholar’s database and (the software)
Publish or Perish (PoP) to investigate the overall impact of Taylor’s body of
research. Google Scholar, introduced in 2004, includes publications and citations
in journals, books and book chapters, conference proceedings, government reports,
and working papers. While the WoS is also used in the analysis, it is much less
comprehensive than Google Scholar since it only includes publications and
citations found in a group of ISI-selected journals. Thus, it ignores books,
monographs, reports, and edited volumes.5 Since many of Taylor’s publications
appear in these other sources and may have been cited by papers appearing in these
other sources, relying on the WoS alone would likely underestimate the impact of
his body of work. (The counts found in PoP may not be perfect either since they
rely only on materials that have been posted on the Web.) The one major
advantage of using the WoS compared with Google Scholar is that it contains
address information that we can use for a geospatial analysis of the impact of
Taylor’s work.
3 Garfield's pioneering paper in 1955 "envisioned information tools that allow researchers to expedite their research process, evaluate the impact of their work, spot scientific trends, and trace the history of modern scientific thoughts." http://thomsonreuters.com/products_services/science/free/essays/50_years_citation_indexing.
4 http://www.hefce.ac.uk/research/ref
5 In addition, the WoS does not contain publications prior to 1970. The WoS "cited reference" function does include citations to non-ISI listed journals but captures only those citations that have appeared in an ISI-listed journal. See Harzing (2010), p. 172.
Table A.1 Comparison of metrics from WoS and PoP

Metric         WoS (General)   WoS (Cited reference)   PoP
Publications   41              19                      67
Citations      869             431                     2,940
Table A.2 Taylor’s 10 most cited works from PoP (Google Scholar)
WoS PoP Title
Date
0
327
0
79
62
23
45
40
630
568
286
266
168
86
77
65
1970
1975
1966
1994
1980
1993
1977
1964
24
26
65
58
Consumer demand in the United States
The demand for electricity: a survey
Consumer demand in the United States, 1929–1970
Telecommunications demand in theory and practice
Telecommunications demand: a survey and critique
Post-divestiture long-distance competition in the United States
The demand for energy: a survey of price and income elasticities
Three-pass least squares: a method for estimating models with a lagged
dependent variable
Saving out of different types of income
Advertising and the aggregate consumption function
1971
1972
To illustrate the differences in coverage, Table A.1 shows the publication and
citation counts6 obtained using WoS and PoP. In compiling this table, the WoS "cited reference" feature was also used, which does include citations to non-ISI journals, although it still excludes citations from non-ISI journals and also excludes citations to the second author of a publication.
As expected, the publications and citation counts obtained using PoP are
considerably larger than those obtained using either of the WoS searches. To get a
sense of where the disparities arise, Table A.2 shows Taylor’s 10 most cited works
according to PoP and compares their citation counts with those found using the
cited reference function of WoS. Not surprisingly, using WoS, the two books that
Taylor co-authored with Houthakker have zero citations since they are non-ISI
works and ISI does not track citations to the second author of a publication. These
two account for more than 900 citations alone using PoP. For the other eight
publications in the top 10, the fact that citations from non-ISI sources are excluded
from the WoS means that their citation counts are often less than half of those
reported by PoP. The counts also indicate how widely Taylor’s work has been
cited in non-ISI journals.
6 These counts do not correct for self-citations. The counts exclude proceedings, book reviews, PhD dissertations, unpublished papers, and material not referenced on the vita. The raw counts had to be adjusted for errors in citing titles, author names, and year of publication.
Table A.1 indicates that over his career, Professor Taylor has published 67
works7 that have received 2,940 citations. What does this record say about
Professor Taylor’s place among academic economists? We can get some clues
from prior studies of publishing productivity and citations practices in economics.
First, inequality in publishing is a fact of life in economics and other fields that
require individual creativity (David 1994). Indeed, a recent study (Hutchinson
et al. 2010) found that more than 51 % of the 1985 cohort of doctorates in
economics had not published even a single refereed journal article by 1999.
Furthermore, a study using data from the 1993 National Survey of Postsecondary
Faculty (NSOPF) (Hartley et al. 2001) found that 27 % of the economists surveyed
had never published a single refereed article (10 % for economists at research
universities, a percentage that increased to 24 % for faculty with more than 21
years of experience). Thus, many academic economists simply do not publish even
one refereed journal article.
On the other hand, of those who publish, the Hartley et al. study found that only 23 %
of the faculty at research institutions had published more than 20 refereed journal
articles over their careers. For faculty with more than 22 years of experience, the
corresponding percentage was 27 %. (Taylor has 35 refereed journal publications.)
This result is not surprising given the empirical regularity known as Lotka's Law.8 The number of authors publishing n papers is approximately 1/n^2 of the number publishing one paper.9 Thus, if 10,000 authors publish one paper, one would expect to find one-quarter of them (2,500 authors) producing two papers, etc. Consequently, the number of economists publishing 35 articles, as Taylor has done, is expected to be less than 0.1 % of the number producing just one paper, or just 8 individuals out of 10,000, clearly placing Professor Taylor in the high end of the distribution of publishing economists.
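The Lotka arithmetic in this paragraph is easy to reproduce; the short sketch below (illustrative only) evaluates the 1/n^2 approximation at the values used in the text:

```python
# Reproducing the Lotka's-Law arithmetic: authors with n papers number
# about 1/n**2 of the one-paper authors.
def lotka_count(n, one_paper_authors=10_000):
    return one_paper_authors / n**2

for n in (2, 35):
    print(f"n = {n:>2}: ~{lotka_count(n):,.1f} per 10,000 one-paper authors")
# n =  2: ~2,500.0 (one-quarter, as stated in the text)
# n = 35: ~8.2 (under 0.1% of the one-paper authors)
```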
In academic circles, citations are also a measure of performance. Every
published scientific paper must cite the papers that it is connected to or builds on. In
the past, citations to one’s work were thought to be rather rare. In work done in the
late 1980s, David Pendlebury of ISI found that the vast majority of papers were
infrequently cited or completely uncited. Indeed, 55 % of all the papers that were
published in 1984 in the journal set covered by ISI’s citation database did not
receive a single citation in the 5 years after they were published.10 And, in the social
sciences (including business), the degree of uncitedness was even higher, at 74.7 %.11 A recent study by Wallace et al. (2009), using 25 million papers and 600
million references from the WoS over the period 1900–2006, however, has found
7 The total excludes book reviews and miscellany.
8 Lotka (1926).
9 Indeed, a study by Cox and Chung (1991) confirms that the economics literature conforms well to this bibliometric regularity.
10 Hamilton (1991).
11 Pendlebury (1991) notes that these numbers fall when one excludes abstracts, editorials, obituaries, letters, and other non-papers and, as a result, the percentage of articles in the social sciences that are not cited falls to 48 %.
Fig. A.1 The citation rates for social science articles, 1956–1996
that the degree of uncitedness has fallen. As the following figure taken from their
paper shows, the degree of uncitedness ten years after an article has been published
is now below 40 % in the social sciences. This figure also shows that the percentage
of papers receiving more than 20 citations has now grown to about 10 % (Fig. A.1).
Thus far, the statistics from citation analyses have been based solely on journal
articles. A recent paper by Tang (2008), however, sheds light on the long-term
citation history of monographs published in six fields including economics. He
found that monographs in economics received on average just 6.52 citations and
45 % were never cited at all. Thus, the citation norms for journal articles and books
both suggest that Taylor’s citation record is extraordinary.
While one can look at the total number of citations that an author has received
or the average number of citations per publication, these indicators do not
distinguish between a large number of citations to a few influential works or a few
citations each to a larger body of less-influential work. To partially remedy this,
the Hirsch (2005) index h is often used as the metric to compare the publication
and citation records of different authors in the same discipline.
Hirsch’s h index is defined to be the largest number h such that the researcher
has at least h papers with h or more citations. For example, an h of 10 says that the
researcher has at least 10 papers with at least 10 citations. In order to achieve an h
of more than 10, the researcher would need to have more than 10 papers with more
than 10 citations. In essence, it can be thought of as a count of the number of "good" papers that a researcher has written. The index also raises the bar for what "good" means when comparing more accomplished researchers. Furthermore,
Hirsch’s h index has the desirable feature of discounting the disproportionate
weight given to both highly cited and uncited papers that result when counting the
average number of citations per paper.12
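For concreteness, a minimal implementation of this definition is sketched below (the function name is ours; the sample citation counts are the ten PoP figures from Table A.2):

```python
# A direct implementation of the h-index definition above.
def h_index(citations):
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank    # the rank-th best paper still has >= rank citations
        else:
            break
    return h

print(h_index([630, 568, 286, 266, 168, 86, 77, 65, 65, 58]))  # prints 10
```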
A recent paper by Ellison (2010) uses the h index to examine the citation
records (using Google Scholar) of the 166 economists who held tenured positions
12 Some criticisms have led to alternative variants that control for co-authorship as well. See Harzing (2010), pp. 28–30.
at one of the 25 most highly regarded US economics departments in the
2006–2007 academic year (Arizona is not one of them)13 and who received their
PhDs in 1989 or later. For this highly accomplished group, he found that the mean
h index was 18 with a minimum of 8 and a maximum of 50. Professor Taylor’s h
index is 22: twenty-two of his publications have received at least 22 citations each.
Thus, Taylor’s record compares favorably with this select group of younger
economists.14
Next, we conduct an in-depth analysis of the impact that Taylor's work has had in the areas of demand analysis and telecommunications.
The Demand for Electricity: A Survey, Bell Journal of Economics and
Management, 1975.
This is Professor Taylor’s first solo-authored article on demand analysis. As
shown in Table A.2, to date it has received more than 500 citations according to
PoP, which uses the Google Scholar database. Furthermore, out of the more than
200 articles published in the highly rated Bell Journal of Economics and
Management over the period 1974–1976, this article was the 7th most cited.
Using data from ISI, the only database that contains the addresses of authors
who wrote journal articles that cite this work (ISI counts 291 journal citations15 to
this work), authors’ addresses are available for only 164 of the citations. These
data yield addresses for authors or coauthors located in 22 countries, with locations
in the United States being most prevalent. Of the remaining 21 countries, Canadian
authors account for 10 and French authors account for 5, with the remaining
countries accounting for 1–4 citations each. The following map highlights the
global influence of this work (Fig. A.2).
A.3 Telecommunications Demand
Two books capture the essence of Taylor’s contributions in the area of
telecommunications demand: Telecommunications Demand: A Survey and
Critique (1980) and Telecommunications Demand in Theory and Practice (1994).
In an analysis of the literature on this topic in economics and business done by
searching for the words "telecommunications" and "demand" anywhere in the
13 This list includes: MIT, Harvard, Stanford, Chicago, Princeton, NYU, UC-Berkeley, Pennsylvania, UCLA, Columbia, Wisconsin, Northwestern, Duke, Yale, Rochester, UC-Davis, Minnesota, UC-San Diego, Michigan, Maryland, Ohio State, Cornell, Texas, Southern California, and Illinois.
14 For this select group of economists, there is a high likelihood that their h indices will increase over their careers as they continue to produce highly regarded works.
15 This citation count is smaller than that reported in Table A.2 because ISI includes only citations from journal articles, excludes citations from other sources, and does not include citations from non-ISI journals.
Fig. A.2 The global influence of The Demand for Electricity: A Survey
title of the publication16 using Google Scholar’s PoP, 166 unique documents were
uncovered after the raw data were cleaned. Of these 166 works, Taylor’s books on
telecommunications were the first and second most-cited, with the 1994 volume
receiving 270 citations and the 1980 volume garnering 154 citations.17 Of the
1,345 citations to this literature, these two books account for 31.5 % of the total.
Furthermore, the eight works written or co-authored by Taylor account for 447
citations or about one-third of the total citations received by this literature.
Furthermore, in a broader search using PoP where the phrase "telecommunications demand" could appear anywhere in the publication, 774
contributions were identified after cleaning the raw data and excluding editorials,
patent citations, and class notes. These documents garnered 11,603 citations. Even
within this extensive body of literature, Taylor’s 1994 book was the fourth highest
cited with 276 citations, while the 1980 book was ranked thirteenth with 158
citations.18
16 This analysis missed papers for which the title did not include the words "telecommunications" and "demand." For example, this search would miss a paper titled "Telephone Demand," but the terms "telephone" and "demand" were too broad to identify literature that is closely related to Taylor's work [4/27/2011].
17 These results differ from other citation counts since some publications may not have "telecommunications" and "demand" in the title, and the analysis also did not pick up articles only citing "telephone demand" [4/27/2011].
18 While we also would have liked to conduct a paper-paper network analysis of contributions in telecommunications demand, it proved impossible to do because many of the contributions were published as books, book chapters, or papers not in ISI-listed journals. Furthermore, the topic is so broad that it was not possible to identify closely related works.
Fig. A.3 The global influence of Telecommunications Demand: A Survey and Critique (1980) and Telecommunications Demand in Theory and Practice (1994)
The geospatial influence of Taylor’s books on telecommunications demand can
be investigated by using the ISI reference application again. For the 1980 volume,
47 citations from journal articles were found, all of which contained address
information. These data revealed citations from authors working in 11 countries,
with the bulk of the citations, 33, coming from authors located in the United States.
For the 1994 volume, 49 citations were found, of which one was missing address
information. In this instance, 15 countries were represented, and 33 of these
articles were authored or co-authored by scientists located in the United States.
Figure A.3 displays the geospatial distribution of the citations made to both of
these books combined, 104 citations in articles written by authors located in 19
different countries.
A.4 Conclusions
It is clear from this analysis that Professor Taylor ranks highly among influential
researchers based on several different measures of citations to his demand work in
electricity and telecommunications. His works are widely cited both in the US and
around the world, and they are among the most frequently cited works in demand
analysis. Finally, Professor Taylor has a publication record and a citation record
that compares favorably to top researchers at the most elite universities in the
country.
Biographies
James Alleman is Professor Emeritus of Network Economics in the College of
Engineering and Applied Science, University of Colorado, Boulder and is a Senior
Fellow and Director of Research at Columbia Institute of Tele-Information (CITI),
Columbia Business School, Columbia University. Dr. Alleman was a Visiting
Senior Scholar at IDATE in Montpellier, France, in the fall of 2005 and continues
his involvement in IDATE’s scholarly activities. He was the Director of the
International Center for Telecommunications Management at the University of
Nebraska at Omaha, Director of Policy Research for GTE, an economist for the
International Telecommunication Union, and a Visiting Professor at the Columbia
Business School. He has conducted economic and financial research in information
and communications technology (ICT) policy. He provides litigation support in
this area.
To reach Dr. Alleman directly, send an email to James.Alleman@Colorado.edu
or visit his website: http://www.colorado.edu/engineering/alleman/.
Md. Shah Azam is Associate Professor at the University of Rajshahi,
Bangladesh. He holds a Master of Business Studies (MBS) in marketing and a
Master of Philosophy (MPhil) degree in the field of diffusion of electronic
commerce.
His research interests include adoption and diffusion of innovation; receptivity
of information and communication technology by individuals and organizations;
as well as the effects of culture and environment on technology usage, decision
processes, and productivity of technology usage. He is currently enrolled for higher studies at Curtin University, Western Australia, and is involved in two research projects at the Communication Economics and Electronic Markets Research Centre (CEEM). He has published in many scientific journals and collective works and has presented his work at many international conferences.
Aniruddha Banerjee is an economic consultant specializing in strategic,
regulatory, and litigation aspects of network industries. He is Senior Vice
President for advanced analytics at Centris and leads data modeling for
communications and media companies. He was formerly Vice President at
Analysis Group, Inc. and National Economic Research Associates. In those
positions, he testified for industry clients before regulatory agencies in the US and
provided consulting services to US and international clients in areas of market
research, competition policy, mergers and acquisitions, optimal regulation, and
complex business litigation. Dr. Banerjee has held positions in AT&T’s Market
Analysis and Forecasting Division, Bell Communications Research’s Regulatory
Economics group, and BellSouth Telecommunications’ Pricing Strategy and
Econometrics group. He serves on the Board of Directors and Publications
Committee of the International Telecommunications Society. Dr. Banerjee holds a
PhD in Agricultural Economics from the Pennsylvania State University, where he
also taught economics for several years.
T. Randolph Beard is Professor of Economics at Auburn University, USA, and
an Adjunct Fellow of the Phoenix Center for Advanced Legal and Economic
Public Policy Studies in Washington, D.C. His work in Industrial Economics and
Microeconomics has appeared in The RAND Journal of Economics, The Review of
Economics and Statistics, Management Science, and other outlets. He is the author of several books, the latest of which is Global Organ Shortage: Economic Causes, Human Consequences, Policy Responses (Stanford University Press, 2013).
Gail Blattenberger is a member of the Economics Department at the University
of Utah. She received her PhD in Economics from the University of Michigan in
1977 where she studied under Lester Taylor. They worked together on electricity
demand. She is currently on long-term disability (MS), but she continues her
research at the University of Utah. Her research currently involves Bayesian
statistical analysis with emphasis on understanding model fragility. This research
is directed toward operational methods with an understanding that useful statistical
procedures should involve model assessment of observable or measurable
outcomes as opposed to discussions of parameters within models. Recent advances
in computing allow one to pay attention to the predictive distribution integrated or
averaged over thousands of potential models.
Erik Bohlin is Professor at the Department of Technology Management and
Economics at Chalmers University of Technology, Gothenburg. He has published
in a number of areas relating to the information society—policy, strategy, and
management. He is Chief Editor of Telecommunications Policy; Chair of the
International Telecommunications Society; Member of the Scientific Advisory
Boards of Communications & Strategies, The International Journal of
Management and Network Economics, and Info—the Journal of Policy,
Regulation and Strategy for Telecommunications, Information and Media;
Member of the Scientific Committee of the Florence School of Regulation
(Communications and Media); and Member of the Swedish Royal Academy of
Engineering Sciences. Erik Bohlin obtained his graduate degree in Business
Administration and Economics at the Stockholm School of Economics (1987) and
his PhD at Chalmers University of Technology (1995).
Kenneth O. Cogger is Professor Emeritus, University of Kansas, and President,
Peak Consulting. He received a PhD in Statistics and Management Science from
the University of Michigan and has taught at the University of Michigan, Montana
State University, George Washington University, and the University of Kansas,
where he served as Director of Research and Director of Doctoral Programs. He
has published over 40 refereed articles in the Journal of the American Statistical Association,
Management Science, Operations Research, the Journal of Financial and
Quantitative Analysis, and the Journal of Accounting Research. He has served
on National Science Foundation review panels in Statistics, Operations Research,
and Industrial Engineering. He has been honored with the Mentor Award by the
Association of Doctoral Students at Kansas and also was named L. J. Buchan
Distinguished Professor by Beta Gamma Sigma. His current research interests are
in nonlinear statistical modeling. His recent consulting engagements include the optimization of clinical pharmaceutical trials.
Robert W. Crandall is a Non-Resident Senior Fellow in the Economic Studies
Program of the Brookings Institution. His research has focused on
telecommunications regulation, cable television regulation, the effects of trade
policy in the steel and automobile industries, environmental policy, and the
changing regional structure of the US economy. His current research focuses on
the effects of occupational licensing of lawyers, competition in the
telecommunications sector, and the development of broadband services. His
latest book, co-authored with Clifford Winston and Vikram Maheshri, First Thing
We Do, Let’s Deregulate All the Lawyers, was published by Brookings in 2011.
His book, Competition and Chaos: U.S. Telecommunications since the 1996 Act,
was published by Brookings in 2005. He is the author or co-author of fourteen
other books on telecommunications, cable television, and a variety of regulatory
issues as well as numerous articles in scholarly journals.
He holds an MS and a PhD in economics from Northwestern University.
Bruce L. Egan is an economist and Senior Affiliated Research Fellow, Columbia
Institute for Tele-Information (CITI), Columbia University, New York. Mr. Egan
has over 35 years of experience in economic and policy analysis of
telecommunications in both industry and academia. From 1996 to 1998, he was Executive Vice President of INDETEC International, a consulting firm specializing in media and telecommunications. He was an economist at Bellcore from 1983 to 1988 and Chief Economist at Southwestern Bell Telephone Company from 1976 to 1983. Mr. Egan has published numerous articles in books and
journals on telecommunications costing, pricing, and public policy. His research
concentration is public policy and economics of technology adoption in
telecommunications; he has written two books on the subject: Information
Superhighways Revisited: The Economics of Multimedia (Artech House, Norwood
MA 1997) and Information Superhighways: The Economics of Advanced
Communication Networks (1990).
Richard Fowles is an Associate Professor in the Department of Economics,
University of Utah. He obtained his PhD in economics and his BA in philosophy at
the University of Utah. He has taught at Rutgers University and Westminster
College. Fowles studies applied Bayesian statistics in fields related to low
probability/high consequence events.
Teresa Garín-Muñoz is Professor at the National University of Distance
Education (UNED) in Spain. She is currently director of the Department of Economic Analysis, in which she teaches Microeconomics (undergraduate level) and Economics of Tourism (graduate level). Professor Garín-Muñoz holds an M.A. in
Economics from the University of California San Diego and a PhD from the
UNED. She has been a Visiting Scholar at the University of California San Diego.
She has written several textbooks on Microeconomics specially designed for
distance learners. Some results of her research have been published in academic
international journals such as Information Economics and Policy, Applied
Economics, Applied Economics Letters, Tourism Management, Tourism
Economics, and International Journal of Tourism Research. She has also published articles in Spanish journals with a large national circulation. In addition, for the last ten years she has directed and taught a Master's program in the Economics of Telecommunications.
Mohsen Hamoudia is Head of Strategy of the Large Projects department within
Orange Business Services, France Telecom Group (Paris). He teaches forecasting
techniques at the ESDES-Business School of Lyon and at ISM (Institut Supérieur
du Marketing) in Paris. He serves on the Board of Directors of the International Institute of Forecasters.
He received his MS degree (DESS) in Econometrics, MS degree (DEA) in
Industrial Economics, and PhD in Industrial Economics from the University of
Paris. Dr. Hamoudia has published and presented papers in the areas of forecasting,
econometrics of telecoms and information and communication technology (ICT),
and time-series as applied to transportation and telecommunications. He organized
the 28th International Symposium on Forecasting in Nice, France, in June 2008.
Donald J. Kridel is Associate Professor of Economics at the University of
Missouri-St. Louis. His primary teaching responsibilities are applied econometrics,
microeconomics, forecasting, and telecommunications economics. Prior to joining
the faculty at the University of Missouri-St. Louis in 1993, Kridel held various
positions, including Director-Strategic Marketing, at Southwestern Bell
Corporation (now AT&T). Kridel earned his PhD in economics from the
University of Arizona where Lester Taylor was his PhD advisor. Kridel has been
active in telecommunications demand analysis and pricing research for nearly 30
years, often working with Professor Taylor. Kridel is currently interested in automating analytics and applying econometric techniques to real-time decision-making.
Sharon G. Levin is Emeritus Professor of Economics at the University of
Missouri-St. Louis. In recent years, Dr. Levin’s research has focused on the quality
and composition of the US scientific workforce. Major themes have been the impact of immigration on the American scientific community, as well as the effects of age, vintage, and information technology on scientific productivity. In 1993, she
was awarded the Chancellor’s Award for Excellence in Research and Creativity by
the University of Missouri-St. Louis.
Dr. Levin has published numerous articles in journals including, inter alia, The American Economic Review, Science, The Review of Economics and Statistics, Social Studies of Science, and Management Science. She also co-authored Striking
the Mother Lode in Science (Oxford University Press). Dr. Levin graduated from
the City College of New York (Phi Beta Kappa) with a B.A. in Economics and
earned both her M.A. and PhD in Economics from the University of Michigan.
Stanford L. Levin is Emeritus Professor of Economics at Southern Illinois
University Edwardsville. From 1984 to 1986, he served as Commissioner of the
Illinois Commerce Commission, Illinois’ utility regulatory agency. Dr. Levin is
President of the Resource Group, Inc., an economic consulting firm.
Dr. Levin has a B.A. in Economics from Grinnell College and a PhD in
Economics from the University of Michigan. He has published numerous articles
in journals including the Southern Economic Journal, The Review of Economics
and Statistics, the Review of Industrial Organization, the Journal of Energy Law
and Policy, and Telecommunications Policy. He is co-editor of books on antitrust
and telecommunications. He has served as an expert witness in antitrust and
regulatory proceedings and testified before federal and state regulatory
commissions in the US and Canada, and in US federal and state courts.
He is on the Board of Directors of the International Telecommunications Society.
Daniel S. Levy specializes in applications of economics and statistics in the study
of corporate structures related to industrial organization, product innovation, and
quality control. He has studied product demand and price elasticities for
manufacturers and retailers for more than 25 years. His product demand
research applies the latest academic methods to corporate pricing questions. The
resulting econometric models are used by corporations around the globe to
measure evolving product demand and set thousands of product prices on an
ongoing basis.
Prior to Advanced Analytical Consulting Group, Inc., Dr. Levy was the
National Leader of the Economic and Statistical Consulting Group at Deloitte
Financial Advisory Services and Global Leader of Economic Consulting at Arthur
Andersen’s Business Consulting Group. He also held research and consulting
positions at Charles River Associates, The RAND Corporation, Needham-Harper
Worldwide Advertising, SPSS Inc., and The University of Chicago Computation
Center.
Gary Madden is Professor of Economics at Curtin University, Perth, Western
Australia, and Director of the CEEM. His research is primarily focused on
examining empirical aspects of communication economics, electronic markets,
productivity measurement, real options, and network economics. Dr. Madden has
published articles in the fields of industrial organization and aspects of the
economics of networks—particularly on telecommunications and the internet—in
scholarly journals including Review of Industrial Organization, Review of
Economics and Statistics, Industrial and Corporate Change, Journal of
Forecasting, International Journal of Forecasting, Economics of Innovation and
New Technology, Applied Economics, Journal of Media Economics, and Telecommunications Policy. Dr. Madden is an Associate Editor of
Telecommunications Policy and a member of the Editorial Board of several
international journals. He is a Board Member of the International
Telecommunications Society. Dr. Madden has served as an economic advisor to
the Australian Government and international agencies.
Áine M. P. Ní-Shúilleabháin is Senior Research Fellow at CITI, Columbia
Business School. Her longstanding research focuses on modeling term structures
of defaultable bonds and on continuous-time diffusion processes that capture
predictability in the underlying. Her current working papers include: Interest Rates
As Options: the Question of Social Cost (Financial Accelerator Revisited); Debt
and Equity Valuation in General Equilibrium with Dynamically Incomplete
Markets; Pricing Technology Assets: A Multi-Sector Stochastic Growth Model
with Dynamic Bandwidth Trading; Real Options: Trading Mobile Money as
Bandwidth and Currency—A Monetary Policy Model. She is a former Director of Quantitative and Credit Analytics at Barclays Bank headquarters (London), where she led quantitative modeling and pricing of credit risk, as well as risk management
at the portfolio level. As CITI’s Associate Director, she published several academic
articles on telecommunications, analyst reports on technology (McGraw-Hill/
Northern Business Information), and co-edited (with Eli Noam) the volume Private
Networks, Public Objectives (Elsevier Science Press, 1996). Ms. Ní-Shúilleabháin
holds degrees from Dublin University (BA, First Class Hons, Communications);
The University of Pennsylvania (Annenberg School for Communications, MA);
INSEAD (MSc, Finance); and the London School of Economics and Political
Science (MSc, Econometrics and Mathematical Economics).
Eli M. Noam has been Professor of Economics and Finance at the Columbia
Business School since 1976. In 1990, after having served for three years as
Commissioner with the New York State Public Service Commission, he returned
to Columbia. Noam is the Director of the CITI. He also served on the White
House’s President’s IT Advisory Council.
In addition to the more than 400 articles that Professor Noam has written for economics, law, communications, and other journals on subjects such as communications, information, public choice, public finance, and general regulation, he has authored, edited, and co-edited 28 books.
Noam has served on the editorial boards of Columbia University Press as well as
of a dozen academic journals and on corporate and non-profit boards.
He is a member of the Council on Foreign Relations and a fellow of the World Economic Forum. He received AB, AM, PhD (Economics), and JD degrees, all
from Harvard. He was awarded honorary doctorates from the University of
Munich (2006) and the University of Marseilles (2008).
Teodosio Pérez-Amaral is Professor of Economics at Complutense University in
Madrid. He is currently head of the Department of Foundations of Economic
Analysis II: Quantitative Economics. He previously held the position of staff
economist at the Bank of Spain. He has been a Visiting Scholar at the University of
California San Diego. He has also been a consultant to DIW Berlin, Deutsche Telekom, and Telefónica. Professor Pérez-Amaral holds a BA from the Complutense University in Madrid, and an M.A. and PhD in Economics from the
University of California San Diego. He has published in academic journals,
including the Oxford Bulletin of Economics and Statistics, Econometric Theory, Journal of
Forecasting, Applied Economics and Information Economics and Policy. His
research covers topics in telecommunications economics, internet demand,
financial econometrics, and model construction and selection. He is currently a
member of the Board of Directors of the International Telecommunications
Society.
Paul N. Rappoport is the former Executive Vice President and Chief Research
Officer of Centris. Dr. Rappoport has over 35 years of teaching and research
experience focusing on business intelligence, forecasting and data analysis,
modeling, and statistical assessments. His specialization is in applied
telecommunications demand analysis. He has written extensively on the demand
for broadband, consumer-choice models of Pay TV, and the analysis of local competition. He is an Emeritus Professor of Economics, having spent 38 years on the faculty of Temple University.
Dr. Rappoport serves as Centris' chief intelligence officer and leads Centris' research initiatives, which have included: modeling the demand for over-the-top video,
estimating price elasticities from estimates of a consumer’s willingness to pay,
tracking broadband deployment and internet speed by provider, specifying and
modeling business broadband, forecasting Pay TV demand, and modeling the
demand for best practice Voice-over-IP.
He is a Senior Fellow at the Columbia Institute for Tele-Information (CITI), Columbia University.
He received his PhD from The Ohio State University in 1974.
Ibrahim Kholilul Rohman is a researcher at the Division of Technology and
Society, Chalmers University of Technology in Gothenburg, Sweden. He obtained
his bachelor’s degree in Economics from the Faculty of Economics, University of
Indonesia in 2002. He undertook research for the Institute for Economic and Social
Research, Faculty of Economics, University of Indonesia. In 2006, he obtained a
master’s degree in Monetary Economics from the University of Indonesia. In
2008, Dr. Rohman received a scholarship from the Ministry of Communication
and Information Technology, Government of Indonesia, to pursue the PhD program at Chalmers University of Technology, which he completed in 2012. His doctoral thesis was entitled "On the Weightless Economy: Evaluating ICT sectors in the European, Asian and African regions." His research interests concern information and communication technology (ICT) for development.
Scott J. Savage is an Associate Professor of Economics at the University of
Colorado, Boulder. He teaches industrial organization, microeconomics, and
telecom economics. Dr. Savage’s research interests include empirical aspects of
industrial organization, economic education, and telecommunications economics.
He is currently studying: pricing in partially deregulated markets; entry,
competition, and pricing in cable TV markets; market structure and media
diversity; and consumer preferences and product quality in US broadband markets.
He received his PhD at Curtin University of Technology in 2000.
Timothy J. Tardiff, Principal at Advanced Analytical Consulting Group, has over
30 years of academic, research, and consulting experience and has published
extensively in economics, telecommunications, and transportation journals. He has
participated in legal and regulatory proceedings regarding telecommunications,
economics, antitrust, and regulation issues in over 25 states and before the United
States Federal Communications Commission. He has international research and
consulting experience in Japan, New Zealand, Peru, Australia, and Trinidad and
Tobago.
Dr. Tardiff’s research has addressed the demand, cost, and competitive aspects
of converging technologies, including wireless and broadband. He has evaluated
pricing policies for increasingly competitive telecommunications markets, including appropriate mechanisms for pricing access services to competitors, and has studied actual and potential competition for services provided by incumbent telephone operating companies.
Dr. Tardiff has a B.S. in mathematics from the California Institute of
Technology and a PhD in Social Science from the University of California, Irvine.
Lester Taylor Read this book!
Donald M. Waldman is Professor of Economics at the University of Colorado,
Boulder. He teaches and conducts research in microeconometrics. His current
theoretical research includes modeling and estimating with discrete and limited
dependent variables. In applied areas, he is studying nonmarket valuation, energy
conservation, and issues in health economics. He received his PhD at the
University of Wisconsin-Madison in 1979.
R. Bruce Williamson is Senior Economist at the National Defense Business
Institute at the University of Tennessee, working with government defense and
corporate clients. He worked in the 1970s for the Institute for Energy Analysis in
Oak Ridge, Tennessee, where Dr. Williamson first became acquainted with
Professor Taylor’s pioneering work on consumer demand and electricity pricing.
That was followed by graduate fellowships with Mountain Bell/US West Telephone Company, a dissertation co-chaired by Dr. Taylor, and more than fifteen years in US and international telecommunications demand research for Southwestern Bell Corporation (SBC), Motorola, and others. His transition to defense economics in recent years is, ironically, a move from the study of a modestly regulated industry to a highly regulated one, with a complex, concentrated market structure and development and production time horizons measured in decades. Dr. Williamson's current interests focus on the econometrics of program cost and schedule and the long-term financial performance of the defense industrial base.
Index
A
Advertiser-supported, 56
Advertising, xii, xix, xx, 41, 52–56, 88, 109,
110, 117, 134, 256, 257, 266, 268, 269, 270
Algorithms, 21, 25
Alleman, ix, x, xiii, 304
AOL, 53
AOL/Time Warner, 53
Asia, 115, 156
Asian, 65
Assets, 30, 52, 233, 246, 295, 297, 299, 300
Asymptotic normality, 17
AT&T, xvi, xxviii, 3, 49, 162, 171, 186, 235
Audiences, xvii
Avalanche, xii, 211, 212, 215, 217, 219, 220,
224, 225, 226
Azam, xi
B
Bandwidth, 49, 56, 57, 60, 81, 158
Banerjee, x
Bayesian additive regression trees (BART),
211, 217, 220, 221, 222, 225, 226
Beard, xi
Behavior, ix, xv, xvii, xx, xxiii, 40, 44, 61,
62, 133–135, 139, 140, 144, 147, 154, 187,
188, 190, 193, 249, 262, 263
Bell System, xxvi, xxvii, xxviii, 303
Bibliometrics, ix, 309, 310
Bill harvesting, 11, 15, 181
Binary decision variables, 22, 24
Bivariate ordered probit, xi, 93
Bivariate probit, 91, 93, 263, 266, 267
Blattenberger, xii, 214, 215, 225, 303, 306
Bohlin, xi
Broadband, xii, 33, 37, 38, 40–43, 45, 53, 54,
65, 81, 113, 114, 156, 158, 159, 161,
162, 164, 231–253, 304
Broadband services, 33, 53, 304
Broadcast, 33, 36, 41, 42, 50, 55
Budget, 4, 237, 238, 244, 245, 246, 262,
273, 274, 275, 277, 279, 292, 293, 294,
301
Bureau of Economic Analysis (BEA), 36, 46
Bureau of Labor Statistics, 7
Business communications services (BCS), xii,
153–159, 164, 165, 170, 171
Business Council of Australia, 84
C
Cable, xvi, xix, 33–37, 40–47, 49–51, 53, 55,
56, 60, 172, 173, 235, 240, 249, 257, 258,
262, 268
Cable television, 33–35, 37, 41, 42, 44, 47, 49,
50, 53, 56, 257, 258
CBS, xvi, 52, 57
Cell phones, xvi, xix, 35, 245
Census Bureau, 44, 46, 256, 264
Central Bank, 301
Centris, ix, 62, 74, 77, 80
China, 115, 154, 158
Co-consumption, 61
Cogger, x, 17
Columbia Pictures, 51
Communications, x, xv, xvi, xx, 34, 35, 44, 59,
114, 156, 172, 174, 303
Competition, xxv, xxvi, xxvii, 3, 47, 55, 56,
85, 153, 162, 164, 165, 168, 169, 187,
232, 233, 237, 242, 246, 255
Complex plane, 5
Computer tablets, 35
Conjoint analysis, xvii, xviii
Connect America Fund (CAF), 237, 238
Consumer, ix, x, xii, xiii, xviii–xx, xxv, xxvii,
xxviii, 33–38, 40, 41, 44, 56, 57, 59, 61,
75, 80, 85, 124, 133, 134, 136–140,
144, 147–149, 154, 175, 187, 188, 191,
192, 194, 232, 235, 246–248, 255–258,
266–270, 291, 292, 299
Consumer choice, 292
Consumer demand, xxv, 194
Consumer Demand in The United States, 7
Consumer expenditures, 34, 36, 38, 268
Copyrighted material, 33, 41
Cosines, x, 5, 15
Cost, xiii, xvi, xxv, 41, 42, 47, 60, 65, 80, 83,
84–86, 91, 101, 109, 110, 138, 139,
185–187, 189–192, 195, 197, 198, 200,
202, 206, 207, 209, 211, 224–226, 232,
235, 237, 238, 240, 243, 246, 250, 251,
256, 258–260, 262, 263, 266–270,
273–279, 281, 283, 284, 286, 287, 296,
303
Couch potatoes, 43
CPI, 45
Crandall, x
Cross-elasticities, ix, xii, xv, xxvii
D
Data, ix, x, xi, xii, xv, xvii, xviii, xix, xx, xxvi, xxvii, 7, 11, 13, 15, 18–21, 24, 25, 35, 40,
42–44, 46, 47, 49, 52–54, 56, 62, 63, 73,
74, 77, 78, 80, 83–85, 87, 91, 110, 116,
117, 121, 123, 125, 134, 135, 140, 154,
155, 158, 159, 162, 163, 169, 171, 172,
175, 176, 181, 188–195, 197, 198,
200–204, 207–209, 211, 214–217, 221,
225, 239, 244, 247, 250–252, 256–259,
262, 263, 265–269, 274–277, 279, 281,
286, 303
Data collection, ix, xv, xvii, xix, xx, 192
Decomposition analysis, 119–122, 125, 129
Defense acquisition, xiii, 273, 274, 276–278, 287
Demand, ix, x, xi, xii, xiii, xv–xx, xxv–xxix, 5,
34, 49, 56, 57, 74, 76–78, 80, 85, 101,
114, 118–121, 123, 125, 126, 129, 135,
153–158, 162–165, 168, 169, 171, 175,
180, 181, 185–198, 200–209, 232, 235,
243, 244, 256–258, 270, 291, 292, 298,
303
Demand analysis, xv
Department of Agriculture, 214, 238, 239
Department of Defense (DoD), xiii, 273, 274,
278, 279, 281, 284, 286
Dependent variable, x, 3, 7, 8, 10–13, 15, 74,
91, 144, 215, 220, 275, 283
Depreciation rate, 34
Diagonal matrix, 118
Digital video recorders (DVRs), 35, 55
DirecTV, 49
Disney, 41, 52, 57, 85
Distribution channels, 33, 34, 40, 41, 44, 50
Downloaded video content, 60
Dummy variables, xi, xii, 8, 91, 283
E
E-commerce, 133, 135
Econometric, xvii, xviii, 265
Econometric estimation, xvii, 267
Econometric modeling, xvii
Economic growth, 38, 113, 114, 129, 237, 241,
244, 292, 301
Economic performance, 113
Economic theory, 274
Economists, xx, xxv, xxvi, 19, 88, 187, 189,
190, 193, 194, 200, 205, 300
Economy, xxv, 113–115, 119, 120, 122,
125–127, 161, 241, 294, 295, 297–301,
303
Egan, xii, 22, 317
Elasticities, xvii, xxvi, xxvii, xxviii, 13, 77, 78,
80, 144, 148, 149, 186, 192, 291
Elasticity, xi, xxv–xxviii, 13, 78, 80, 85, 123, 127, 129, 144, 147, 148, 163–165, 171, 180, 186, 187, 189, 293, 294
Electricity, 244
Electronic equipment, 34, 36, 39, 42, 44, 144,
147–149
Empirical, ix, x, xi, xii, xviii, xix, xx, xxvii,
xxviii, 5, 19, 34, 83–85, 110, 134, 139, 148,
149, 171, 180, 189, 193, 270, 273, 275,
276, 286
Endogeneity, xi, xii, 84, 86, 93, 101, 109, 186,
188, 189, 195, 198, 200, 207, 275
Endogenous, xi, 84, 87, 88, 91–93, 101, 163,
207, 286
Enterprise, 84, 86, 89, 93, 156, 168, 169, 190,
295
Entertainment, xvii, 33, 40, 41, 43, 57, 124,
135, 148, 149, 259
Entry, xi, xii, 48, 60, 83–89, 91, 93, 101, 109,
110, 172, 231–233, 238, 243, 254, 257, 300
Equilibrium, 119, 188, 195, 292
Equity, 40, 51, 54, 57, 295
Error terms, 4, 92, 202–204, 277
Estimates, xi, xiii, xviii, xxv, 7, 14, 17, 19, 26,
34–36, 43, 46, 63, 80, 93, 119, 127, 153,
163, 165, 168, 169, 171, 177, 187, 191,
195, 197, 200, 201, 204, 206, 207, 225,
241, 251, 256, 257, 262–264, 266–270,
273, 275, 276–279, 281–284, 286, 291, 296
Estimation, x, xii, xvii, xxvi, 10, 11, 15, 17, 18,
20, 21, 24, 26, 27, 47, 78, 80, 86, 92, 119,
153, 154, 163, 169, 192, 194, 195, 198,
200, 203, 204, 206, 207, 209, 221, 266,
276, 277, 279, 281–284, 291, 292, 303
Estimations, 162
EU, 113, 115, 116, 154, 156
European, xi, 113–116, 122–125, 127, 130,
137, 140, 149, 154, 162, 299
Expenditure, 7, 8, 11, 34, 113, 115, 116, 161,
164, 168, 191, 274
Export, 114, 120, 121, 126, 127
F
Facebook, 41, 43, 52, 53, 248
Federal Communications Commission
(FCC), 42, 180, 234–237, 242–245, 247,
250, 255
Federal Trade Commission, 53
Fiber, 47, 60, 242, 249, 253
Financial meltdown, xiii, 292, 300
Finland, 116, 125, 129
First-mover advantage, 85
Fixed-line telephone, 61
Focus groups, xvii, 259, 263
Forecast, x, xi, xvi, 59, 75, 115, 158, 168, 169,
217, 224, 225, 237
Forecasted, xix, 215
Forecasting, xvi, xviii, xx, xxvi, 153, 155, 158,
162, 169, 211, 212, 214, 217, 220–222, 225
Fowles, xii, 214, 215, 225, 306
G
Galton, 18–20, 24, 25
Garín-Muñoz, xi, 118
GDP, xi, 46, 52, 113–116, 118, 119, 122, 123,
126, 127, 129, 156, 169, 299
General Accounting Office, 273
Generalized Method of Moments, 93
Germany, 116, 123, 126, 127, 298, 299
Glass–Steagall Act, 300
Global optimum, 21
GMM, 93
Google, 19, 41, 52, 53, 55
Gradient, 21
H
H, 19, 20–22, 28, 296
Hamoudia, ix, xii, 153, 158
Heteroskedastic, 93
Heteroskedasticity, 93
High-speed broadband, 33
Hinge location, 19
Hispanic, 65
Households, xvi, xvii, 7, 11, 35, 36, 43, 54, 56,
60–65, 74–77, 172, 176, 180, 241, 247,
250, 251, 256–258, 260, 263, 269, 270, 292
Houthakker, xv, xxv, xxviii, 7, 17, 34, 57, 296,
303
Hudson, 20, 21, 24, 25
Hulu, 40, 41, 52, 59, 74, 76, 77, 80
I
IBM, xvi, 154, 162
ICT sectors, 115
ICT services, 115
ICT-H, 140
ICT, ix, xi, 113–116, 120, 124, 127–129, 153,
154, 156, 161–165, 169
Identity matrix, 118, 120
Import, 114, 119–121, 127, 129
Income, xvii, xxvii, xxviii, 3, 7, 12, 40, 47, 49,
62, 63, 65, 147, 172, 175–177, 180, 245,
256–258, 269, 273, 291–294, 296–302
Independent variables, xii, 10, 11, 19, 27, 74,
88, 178, 278, 284
Industry, xi, xviii, xxv–xxix, 40, 42, 44, 46, 48,
50, 55, 83–88, 116, 117, 119–121, 137,
154, 162, 163, 171, 181, 191, 193, 194,
231, 235, 237, 240, 243, 246, 251, 253, 303
Information, ix, x, xi, xvii, xix, xx, xxv, xxvi,
xxvii, 10, 11, 21, 33, 34, 36, 40, 46, 47, 62,
80, 83, 84, 86, 87, 92, 113–115, 120, 122,
124, 138–140, 153, 154, 155, 162, 172,
175, 176, 187, 189, 202, 206, 207, 209,
212, 214, 215, 226, 238, 239, 243, 244,
252, 253, 256–259, 262–264, 267–269,
270, 277, 279, 286, 304
Input–output, xi, 117, 122–124, 129
Input–output tables, 124
Integration, xx, 46, 156, 163, 168, 301
Interest rates, 294, 301
Inter-LATA, 11–13
Intermediate demand, 120
International Telecommunication Union, 232
International Telecommunications Society,
318, 321, 322
Internet, xi, xii, xviii, xix, xx, xxviii, 40–45,
48–50, 52–57, 60, 62, 81, 114, 115,
133–135, 137–141, 144, 147–149, 154,
156, 159, 172, 173, 237, 245, 250, 252,
253, 255–259, 262–265, 270
Internet connection, 56
Interventions, 19
Interviews, xvii, 134
Intra-LATA, 11–13
Investment, xv, xxvi, 53, 84, 86–88, 93, 114,
116, 156, 161, 163–165, 169, 231–233,
235, 236, 241, 242, 244–246, 250, 253,
274, 296–298, 300, 301, 303
iPods, 35, 38
Italy, 116, 127
iTunes, 41
J
Japan, 115, 129, 323
K
Keynes, xiii, 291, 294–296, 298, 299, 301
Knowledge economy, 114
Korea, 115, 116, 129
Kridel, xi, 11, 180, 303
L
Lagrange multiplier, 93
Laptops, 35, 42, 44, 158, 246
Leap, 48
Learning effects, 84, 86, 88, 89
Levin, ix, xiii, 321
Levy, ix, xi, 321
Liberty Media, 52
Likelihood, xxviii, 21, 74, 92, 134, 139, 140,
144, 147–222, 253, 262, 264, 266, 267
Linear programming (LP), x, 17–20, 22, 26–28
Liquidity, 295, 301
Local exchange company (LEC), 11, 13, 14, 177
Local minima, 21
Logarithms, 5–7, 11
Long-distance, xxv, xxvii, xxviii, 3, 11, 186
M
Macroeconomic, 156, 169, 298
Market entry, 84, 91, 109
Marketing, xvi, xvii, xx, xxvi, 85, 133, 149
Mathematical model, xvii
MCI, xxviii, 3
Measurement techniques, xvi, 6
Media, ix, x, 34, 36, 40, 50, 52, 53, 123, 257,
259, 260, 266, 318, 322
Media consumption, xix, 40
Media products, ix, xv, 33, 41
Methodological approaches, xvii, 7
Methodology, xi, xviii, 74, 117, 125, 211, 258,
264, 274, 277
Metrics, 86, 276, 278, 286
MILP, 17, 22–26, 28
Mixed Integer Programming, x, 17
Mobile phones, 43, 59, 115, 157
Models, x, xi, xii, xiii, xvii–xx, 5, 11, 13–15,
17, 19, 26, 59, 74, 84, 91, 101, 110, 117,
121, 139, 144, 153, 156, 162, 163, 185,
187, 188, 190, 192, 193, 203, 211, 213,
225, 226, 281, 282, 286, 297, 303
Motion picture, 41, 42, 45–47, 50, 52, 57
Multiple regressions, xii, xvi, 18–24, 27,
28, 65, 91, 139, 144, 158,
212, 215
N
National Broadband Plan (NBP), 236, 240,
244
National Exchange Carrier Association
(NECA), 237, 251
NBC/Universal, 42, 49, 51–53
Netflix, 37, 38, 40, 41, 43
Netherlands, 116, 123, 125, 127, 129
Network, xii, 51, 84, 86, 89, 93, 115, 154,
155, 158, 159, 163–165, 231–236, 238,
240–242, 244, 245, 249–251, 253, 303
New media, 33, 44, 50, 52–54, 255, 257–259,
266–278
NewsCorp, 41, 52
Newspapers, xii, xvii, xviii, 33, 41, 46, 47,
255–259, 270
New York Times, xviii, 51
Nielsen, xv, xx, 43
Ní-Shúilleabháin, xiii, 322
Noam, ix, xx, 322, 323
Nominal, 36, 38, 39, 45, 46, 52, 53, 144, 247,
249, 293
Nonlinear, 10, 11, 147, 149, 260
North American Industry Classification
System, 44
Norway, xi, 116, 123, 125, 129
O
OECD, xi, 114, 115, 124, 129, 154
Online, xi, 33, 41–43, 53, 60, 83–86, 88, 89,
91, 93, 109, 110, 133–135, 137, 139–141,
144, 147–149, 248, 257, 259, 262–264
Online markets, 83–85, 93
Optimal, xix, 25, 27, 80, 187, 190, 191, 194,
217, 260
Ordinary least squares (OLS), x, 17, 78, 163,
189, 195–197, 203, 207, 275
OTT bypass, 59, 62
Output, xi, 46, 47, 114, 118–123, 125, 129,
164, 191, 193, 199, 205
Over-the-top (OTT), x, 41, 46, 47, 114,
118–123, 125, 126, 129, 163, 189, 191,
196, 203
P
Pay TV, 59
Pay-per-view, 41, 55
PCE, 36, 38, 39
Pérez-Amaral, xi, 323
Piecewise linear estimation, 26
Piecewise linear regression, 17, 19, 20, 24
Pigou-Haberler-Patinkin, 298
Piracy, 33, 34, 38, 40, 42, 45, 50, 52, 55
PNR & Associates, 11, 15
Polar Coordinates, 4
Polynomial, 77, 78
Predicted probabilities, 75
Predictive distribution, 318
Predictors, 4, 7, 11, 22, 24, 140, 177
Present-value, 295, 296
Price, xi, xv, xvii, xix, xxvi, xxvii, xxviii, xxix,
3, 12, 13, 39, 45, 77, 80, 85, 114, 115, 119,
122, 123, 127–129, 158, 163–165, 168,
169, 171, 172, 175, 177, 180, 186–198,
201–204, 206–209, 240, 249, 256, 265,
270, 291, 292, 294–297, 299, 301
Price elasticity, 80, 163, 164, 186
Probabilities, xi, 60, 75, 91, 92, 144, 221, 225,
226
Probability, 62, 75, 76, 93, 138, 144, 147–149,
175, 211, 212, 214, 220–222, 225, 226,
266, 267, 294
Probit model, 84, 91, 93, 101, 263, 264, 267
Product, xv, xviii, xxi, xxvi, xxviii, 36, 41, 76,
85, 88, 122, 124, 134, 136–141, 144,
147–149, 154, 158, 163–165, 169, 187,
191–194, 196, 197, 209, 232
Productivity, 85, 88, 109, 113, 114, 124, 244
Productivity growth, 114
Profiles, 75
Profitability, xi, xxvi, xxvii, 187, 191, 193
Public investments, 232
Q
QR, 17–20, 22–24, 26–28
Quasi-rents, 295
R
R&D, 113, 115, 116, 124, 129
Radio, xii, xvi, xvii, 36, 117, 128, 140, 231,
232, 234–236, 253, 255, 256, 258, 259,
262, 264, 265, 270
Radio-frequency identification, xix, 9
Radius-vectors, 15
Rappoport, ix, x, xiii, xv, 11, 304
Real personal consumption, 38
Regression errors, 20
Regression model, 5, 75, 178, 257
Regressions, x, 7, 14, 18–20, 24, 60, 84, 88,
92, 282
Revenue, xxvii, 34, 38, 44–46, 53, 54, 55, 56,
57, 59, 77, 80, 84, 88, 91, 101, 109, 110,
116, 117, 156, 158, 163, 181, 186, 194,
197, 207, 225, 233–235, 245, 247, 296,
297, 301, 303
Rohman, xi, 323
Rural, xii, 231–238, 240–244, 246–253
Rural Broadband, 231, 238, 240, 242, 243, 251
Rural Utilities Service (RUS), 238–245, 254
S
SAD, 20, 21
Sample, xi, xiii, xvii, 6, 7, 13, 20, 24, 26, 75,
78, 84, 86, 89, 91, 109, 110, 134, 140, 141,
176, 195, 226, 251, 263, 264, 265, 267,
274, 276, 277, 279, 281–284
Satellite television, 33, 45, 262, 268
Savage, xii, 113, 258, 262–264, 266, 269
Search techniques, 21
Seemingly Unrelated Regression, xiii, 277
Set-top boxes, 55, 60
Sines, x, 15
Small and Medium Enterprises (SME), 84, 86,
101, 161, 162, 164, 168
Software, xi, xxi, 19–21, 24–26, 34, 46, 86,
115, 133, 135, 138, 141, 144, 148, 149,
155, 158, 159, 161, 192
Sony Corporation, 51
Spectrum, 234, 235
Sprint/Nextel, 48
Statistically significant, 26, 74, 177, 268
Statistical problems, xvii, 7
Statistical testing, 26
Stochastic process, 220, 221
Subscribers, xvi, 41, 45, 49, 56, 57, 75, 77, 180, 181, 233, 244, 245, 248, 249, 256, 257
Subscription revenues, 55
Substitution, xi, 45, 59–61, 63, 65, 74, 76,
114, 121, 127, 129, 178, 180, 181, 292,
293, 299
Supply, ix, xii, xv, 50, 74, 119, 120, 121, 124,
153, 158, 162–165, 168, 186, 188, 189,
190–192, 194, 202–208, 240, 243, 244,
246, 295
Surveys, xvii, xx, 62, 76, 116, 134, 140, 256,
258, 263, 304
Sweden, xi, 125, 129
Syndicated television programming, 60
T
Taiwan, 115, 129
Tardiff, ix, xi, 185–187
Tax, 7, 237, 245, 248, 292, 298, 301–303
Taylor, ix, x, xi, xiii, xv, xx, xxv, xxvi, xxvii,
xxviii, xxix, 7, 11, 17, 34, 57, 175, 180,
185, 186, 188, 193, 208, 262, 296, 303
Techniques, xiii, xix, xx, xxv, xxvii, 19, 21,
23, 78, 80, 192, 193, 206, 211
Technological change, 34, 40, 126–129, 303
Technology, ix, x, xi, 154, 161, 239, 241, 243,
245, 252
Technology development, 113
Telco, 43, 53, 232–234, 237, 242, 247–251, 253
Telecommunications, ix, xxvii, xxviii, 46, 47,
60, 156, 158, 185, 231, 237–239, 243, 245,
246, 255
Telecommunications Act, xxvii, xxviii, 60,
231, 233, 236, 242, 246, 247, 252, 253,
255, 300
Telecommunications markets, 153, 154
Telephony, xi, 171, 172, 174, 180, 237
Television, xi, xii, xvi, 33, 37, 41, 44, 55, 56,
140, 255, 256, 264, 270
Television networks, 33
Television service, 43, 59, 265
Theoretical, xi, 134, 137, 286
Time Warner, 49, 51–53
Time Warner Cable, 49, 53
Toll minutes, 11
Traditional media, x, 33, 34, 39, 40, 42, 44, 50,
53, 54, 56, 255, 256, 258
TV ratings, xvi, 6
TV shows, 59, 60, 74
Twitter, 44, 52, 53
Two-equation model, x, 3, 4, 15
Type I Errors, xvi
U
U.S. Cellular, 48
Uncertainty, 85, 137, 169, 294, 295, 296, 301
United States, xxvi, 43, 47, 53, 59, 60, 114,
115, 124, 232, 238, 239, 247, 264, 273
USDA, 238, 241, 243
Utility, xviii, 175, 257, 262, 263, 265–270,
292, 293
V
Variables, x, xi, xii, xvii, xviii, 5, 6, 13–15,
24, 25–28, 74, 84, 86, 88, 91–93, 101, 110,
134, 135, 139, 140, 144, 148, 158,
161–163, 169, 175–177, 188, 189,
195–198, 202–204, 207–209, 212,
214–217, 219–224, 275–279, 281–284, 286
Verizon, xxviii, 49, 162, 235
Viacom, 52, 57
Video cord-cutting, x, 59, 61, 65, 81
Video distributors, 33, 41, 43, 55, 56
Video media, 36–39, 56, 57
Video over the Internet, 37, 43
Video services, 35, 47, 49, 56, 59
Virtual sellers, 83, 84
Voice cord-cutting, 59, 61, 65, 81
Voice over Internet protocol (VoIP), 47, 154,
156, 158, 161, 162, 233, 247, 248, 251
Voice services, 47, 159
W
Waldman, xii, 258, 262–264, 266, 269
Wall Street Journal, xvi, 51
Washington Post, 51, 258
Wealth transfer, 299
Weapon systems, 273
Williamson, xiii, 323
Willingness-to-pay (WTP), 62, 76–78, 80, 256, 257, 259, 262, 265–267, 269, 270
Willing-to-pay, 76, 77, 256, 257, 269, 270
Wireless, xi, xxvii, xxviii, 33, 39, 48, 50, 56,
57, 158, 159, 171–173, 175, 177, 178, 180,
181, 235, 236, 242, 245, 246, 250, 251
Wyoming, xi, xii, 232–250, 252, 253
Y
YouTube, 41, 44