
The Oxford Handbook of the Digital Economy ( PDFDrive )

Oxford Handbooks Online
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Subject: Economics and Finance
Online Publication Date: Nov 2012
Oxford University Press is a department of the University of Oxford.
It furthers the University's objective of excellence in research, scholarship,
and education by publishing worldwide.
Oxford New York
Auckland Cape Town Dar es Salaam Hong Kong Karachi
Kuala Lumpur Madrid Melbourne Mexico City Nairobi
New Delhi Shanghai Taipei Toronto
With offices in
Argentina Austria Brazil Chile Czech Republic France Greece
Guatemala Hungary Italy Japan Poland Portugal Singapore
South Korea Switzerland Thailand Turkey Ukraine Vietnam
Oxford is a registered trademark of Oxford University Press
in the UK and certain other countries.
Published in the United States of America by
Oxford University Press
198 Madison Avenue, New York, NY 10016
© Oxford University Press 2012
All rights reserved. No part of this publication may be reproduced, stored in a
retrieval system, or transmitted, in any form or by any means, without the prior
permission in writing of Oxford University Press, or as expressly permitted by law,
by license, or under terms agreed with the appropriate reproduction rights
organization.
Inquiries concerning reproduction outside the scope of the above should be sent to
the
Rights Department, Oxford University Press, at the address above.
You must not circulate this work in any other form
and you must impose this same condition on any acquirer.
Library of Congress Cataloging-in-Publication Data
The Oxford handbook of the digital economy / edited by
Martin Peitz and Joel Waldfogel.
p. cm.
Includes bibliographical references and index.
ISBN 978–0–19–539784–0 (cloth : alk. paper) 1. Information technology—Economic aspects—Handbooks, manuals, etc. 2. Electronic commerce—Handbooks, manuals, etc. 3. Business enterprises—Technological innovations—Handbooks, manuals, etc. I. Peitz, Martin. II. Waldfogel, Joel, 1962– III. Title: Handbook of the digital economy.
HC79.I55O87 2012
384.3′3—dc23
2012004779
ISBN 978–0–19–539784–0
1 3 5 7 9 8 6 4 2
on acid-free paper
Contents
Front Matter
Consulting Editors
Contributors
Introduction
Infrastructure, Standards, and Platforms
Internet Infrastructure
Shane Greenstein
Four Paths to Compatibility
Joseph Farrell and Timothy Simcoe
Software Platforms
Andrei Hagiu
Home Videogame Platforms
Robin S. Lee
Digitization of Retail Payments
Wilko Bolt and Sujit Chakravorti
Mobile Telephony
Steffen Hoernig and Tommaso Valletti
Two-Sided B to B Platforms
Bruno Jullien
The Transformation of Selling
Online versus Offline Competition
Ethan Lieber and Chad Syverson
Comparison Sites
José-Luis Moraga-González and Matthijs R. Wildenbeest
Price Discrimination in the Digital Economy
Drew Fudenberg and J. Miguel Villas-Boas
Bundling Information Goods
Jay Pil Choi
Internet Auctions
Ben Greiner, Axel Ockenfels, and Abdolkarim Sadrieh
Reputation on the Internet
Luís Cabral
Advertising on the Internet
Simon P. Anderson
User-Generated Content
Incentive-Centered Design for User-Contributed Content
Lian Jian and Jeffrey K. MacKie-Mason
Social Networks on the Web
Sanjeev Goyal
Open Source Software
Justin P. Johnson
Threats Arising from Digitization and the Internet
Digital Piracy: Theory
Paul Belleflamme and Martin Peitz
Digital Piracy: Empirics
Joel Waldfogel
The Economics of Privacy
Laura Brandimarte and Alessandro Acquisti
Internet Security
Tyler Moore and Ross Anderson
End Matter
Index
Oxford Handbooks Online
Contributors
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Subject: Economics and Finance
Online Publication Date: Nov 2012
Contributors
Alessandro Acquisti is Associate Professor of Information Technology and Public
Policy at Heinz College at Carnegie Mellon University.
Ross Anderson is Professor in Security Engineering at the University of Cambridge
Computer Laboratory.
Simon P. Anderson is Commonwealth Professor of Economics at the University of
Virginia.
Paul Belleflamme is Professor of Economics at Université Catholique de Louvain.
Wilko Bolt is an economist at the Economics and Research Division of De
Nederlandsche Bank.
Laura Brandimarte is a PhD candidate in the Public Policy Department (Heinz
College) at Carnegie Mellon University.
Sujit “Bob” Chakravorti is the Chief Economist and Director of Quantitative
Analysis at The Clearing House.
Luis Cabral is Professor of Economics and Academic Director (New York Center), IESE
Business School, University of Navarra and W.R. Berkley Term Professor of Economics
at the Stern School of Business at New York University.
Jay Pil Choi is Scientia Professor in the School of Economics at the Australian School
of Business, University of New South Wales.
Joseph Farrell is Professor of Economics at the University of California–Berkeley.
Drew Fudenberg is the Frederick E. Abbe Professor of Economics at Harvard
University.
Sanjeev Goyal is Professor of Economics at the University of Cambridge.
Shane Greenstein is the Elinor and H. Wendell Hobbs Professor of Management and
Strategy at the Kellogg School of Management at Northwestern University.
Ben Greiner is Lecturer at the School of Economics at the University of New South
Wales.
Andrei Hagiu is Assistant Professor of Strategy at the Harvard Business School.
Steffen Hoernig is Associate Professor with “Agregação” at the Nova School of
Business and Economics in Lisbon.
Lian Jian is Assistant Professor of Communication and Journalism at the Annenberg
School, University of Southern California.
Justin Johnson is Associate Professor of Economics at the Johnson Graduate School
of Management at Cornell University.
Bruno Jullien is a Member of the Toulouse School of Economics and Research
Director at CNRS (National Center for Scientific Research).
Robin S. Lee is Assistant Professor of Economics at the Stern School of Business at
New York University.
Ethan Lieber is a PhD student in the Economics Department at the University of
Chicago.
Jeffrey K. MacKie-Mason is the Dean of the School of Information at the University
of Michigan.
Jose-Luis Moraga-Gonzalez is Professor of Microeconomics at VU University–
Amsterdam and Professor of Industrial Organization at the University of Groningen.
Tyler Moore is a postdoctoral fellow at the Center for Research on Computation and
Society at Harvard University.
Axel Ockenfels is Professor of Economics at the University of Cologne.
Martin Peitz is Professor of Economics at the University of Mannheim.
Abdolkarim Sadrieh is Professor of Economics and Management at the University of
Magdeburg.
Timothy Simcoe is Assistant Professor of Strategy and Innovation at the Boston
University School of Management.
Chad Syverson is Professor of Economics at the Booth School of Business, University
of Chicago.
Tommaso Valletti is Professor of Economics at the Business School at Imperial
College, London.
J. Miguel Villas-Boas is the J. Gary Shansby Professor of Marketing Strategy at the
Haas School of Business, University of California–Berkeley.
Joel Waldfogel is the Frederick R. Kappel Chair in Applied Economics at the Carlson
School of Management at the University of Minnesota and a Research Associate at the
National Bureau of Economic Research.
Matthijs R. Wildenbeest is Assistant Professor of Business Economics and Public
Policy at the Kelley School of Business, Indiana University.
Oxford Handbooks Online
Introduction
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Subject: Economics and Finance
Online Publication Date: Nov 2012
Introduction
Digitization—and the Internet—have transformed business and society in the past 15 years.
The world's Internet-connected population has grown from essentially zero in 1995 to 2 billion
today.1 The Internet and digitization have transformed many industries, including retailing,
media, and entertainment products. Millions of people spend hours each day accessing and
creating information online.
The firms of the digital economy not only affect the daily life of most people in industrialized
countries, but they are also highly profitable. Once nearly left for dead, Apple was in 2011 the third most valuable company in the world, with a market valuation of almost $300 billion US, far above its new-economy forebears Microsoft, IBM, and AT&T at $239 billion, $182 billion, and $174 billion, respectively. Rounding out the newer generation of digital firms are Google, valued at $192 billion, Amazon at $77 billion, Facebook at $83 billion, and eBay at $36 billion.2 While particular valuations fluctuate, it is clear that firms active in the digital economy are now global players
with market capitalizations exceeding those of long-established companies in more traditional
industries.
New digitally enabled technologies have facilitated more extensive application of many
business strategies that had formerly occupied the attention of economic theorists more than
business practitioners. The practices include auctions, price discrimination, and product
bundling. Although they have been employed for centuries, they are easier to utilize in digital
contexts. Software operating on platforms has heightened the role of platform competition and
network effects. The Internet has also provided a stage for many phenomena that would have
been difficult for economists—or others—to imagine a few decades ago: large-scale sharing of
digital material at sites such as YouTube, Facebook, and Wikipedia. Open-source software
reflects related behaviors. Along with many new opportunities, the Internet has also brought
some new threats. These include threats to businesses, such as piracy and the security of
connected devices, as well as threats to individuals, such as privacy concerns.
These developments have prompted what is, by academic standards, a rapid outpouring of
research. This volume is an attempt to describe that work to date, with the goals of both explicating the state of the literature and pointing the way toward fruitful directions for future research. The book's chapters are presented in four sections corresponding to four broad themes: (1) infrastructure, standards, and platforms; (2) the transformation of selling,
encompassing both the transformation of traditional selling and new, widespread application of
tools such as auctions; (3) user-generated content; and (4) threats in the new digital
environment. No chapter is a prerequisite for another, and readers are encouraged to read in
any order that aids digestion of the material. A guide to the chapters follows.
The first section deals with infrastructure, standards, and platform competition. In chapter 1,
Shane Greenstein provides an overview on Internet infrastructure with a particular emphasis
on Internet access and broadband development. There is an interdependence between
infrastructure development and changes in industries that rely on this infrastructure. Thus,
researchers investigating such industries need a proper understanding of Internet
infrastructures. In chapter 2, Joseph Farrell and Timothy Simcoe discuss the economics of
standardization, and describe the costs and benefits of alternative ways to achieve
compatibility.
The section then turns to issues related to platform competition. The next chapters focus on a
number of industries that heavily rely on recent developments in electronic data storage and
transmission. In chapter 3, Andrei Hagiu provides a detailed view on developments in business
strategies in the market of software platforms. He offers some rich institutional details on this
important and rapidly changing industry. In chapter 4, Robin Lee provides a related analysis
for the videogame industry. In chapter 5, Wilko Bolt and Sujit “Bob” Chakravorti survey the
research on electronic payment systems. In chapter 6, Steffen Hoernig and Tommaso Valletti
offer an overview on recent advances on the economics of mobile telephony. All these
industries are characterized by mostly indirect network effects. Here, to understand the
development of these industries, one has to understand the price and non-price strategies
chosen by platform operators. Thus, an important part of the literature relates to the recent
theory of two-sided markets. This theme will reappear in a number of other chapters in this
handbook. Finally, in chapter 7, Bruno Jullien contributes to our understanding of B2B
platforms, drawing on insights from the two-sided market literature. This theory-oriented
chapter also helps in formalizing other platform markets.
The second section deals with the transformation of selling. The Internet has transformed
selling in a variety of ways: the reduced costs of online retailing threatens offline retailers,
widespread availability of information affects competition, digital technology allows the
widespread employment of novel pricing strategies (bundling, price discrimination), and
auctions are now seeing wide use. In chapter 8, Chad Syverson and Ethan Lieber focus on the
interdependence between online and offline retailing. In chapter 9, Jose-Luis Moraga and
Matthijs Wildenbeest survey the work on comparison sites. They develop a formal model that
reproduces some of the insights of the literature.
Within the general topic of selling, the volume then turns to pricing practices. In chapter 10,
Drew Fudenberg and Miguel Villas-Boas start with the observation that due to
advances in information technology, firms have ever more detailed information about their
prospective and previous customers. In particular, when firms have information about
consumers’ past purchase decisions, they may use this information for price discrimination
purposes. The focus of the chapter is on the effects of price discrimination that is based on
more detailed customer information, both under monopoly and under competition. In chapter
11, Jay Pil Choi surveys the literature on product bundling. This survey focuses on the theory
of product bundling and the extent to which it is particularly relevant in the context of the digital
economy.
The next two chapters address issues related to auctions. In chapter 12, Ben Greiner, Axel
Ockenfels, and Abdolkarim Sadrieh provide an overview on Internet auctions. They review
theoretical, empirical, and experimental contributions that address, in particular, bidding
behavior in such auctions and the auction design in single-unit Internet auctions. In chapter
13, Luis Cabral surveys recent, mostly empirical work on reputation on the Internet, with a
particular focus on eBay's reputation system.
The selling section concludes with a contribution on advertising. In chapter 14, Simon
Anderson reports on recent developments in online advertising and offers a mostly theory-oriented survey of advertising on the Internet. In particular, he develops a logit model to
address some of the most important issues, which draws on the two-sided market literature.
The third section of the book discusses the emergent phenomenon of user-generated content
on the Internet. In chapter 15, Lian Jian and Jeff MacKie-Mason elaborate on user-generated
content, an issue that had hardly arisen prior to the digital economy. In chapter 16, Sanjeev
Goyal discusses the importance of the theory of social networks for our understanding of
certain features of the digital economy, in particular, the functioning of social networking sites
such as Facebook. In contrast with the literature that postulates network effects in the
aggregate, the literature on social networks takes the concrete structure of the network
seriously and is concerned in particular with local interactions. In chapter 17, Justin Johnson
elaborates on the economics of open source, which is, in part, user-generated content. An important question concerns how ideas, mostly in the form of software products, are generated, and the advantages and disadvantages of open source compared to traditional proprietary solutions. In this context, the openness of a product is also discussed.
The fourth section of the volume discusses threats arising from digitization and the Internet.
Chapters 18 and 19 analyze digital piracy. In chapter 18, Paul Belleflamme and Martin Peitz
survey the theoretical literature on digital piracy, whereas in chapter 19, Joel Waldfogel
surveys the empirical literature. In chapter 20, Alessandro Acquisti and Laura Brandimarte
introduce the important issue of privacy in the digital economy. In chapter 21, Ross Anderson
and Tyler Moore survey the work on Internet security.
Notes:
(1.) See Internet World Stats, at http://www.internetworldstats.com/stats.htm, last accessed
June 10, 2011.
(2.) See Ari Levy on http://www.bloomberg.com/news/2011-01-28/facebook-s-82-9-billionvaluation-tops-amazon-com-update1-.html, last accessed June 2, 2011.
Oxford Handbooks Online
Internet Infrastructure
Shane Greenstein
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Online Publication Date: Nov 2012
Subject: Economics and Finance, Economic Development
DOI: 10.1093/oxfordhb/9780195397840.013.0001
Abstract and Keywords
This article presents an overview on Internet infrastructure, highlighting Internet access and broadband
development. Internet infrastructure is a collective term for all the equipment, personnel, and organizations that
support the operation of the Internet. The Internet largely used existing capital at many installations with
comparatively minor software retrofits, repurposing capital for any new function as long as it remained consistent
with end-to-end. As Internet applications became more popular and more widely adopted, users pushed against the
bandwidth limits of dial-up Internet access. Broadband gave users a better experience than dial-up access. The
most notable feature of the governance of Internet infrastructure is how it differs from the governance found in any other communications market. Many key events in Internet infrastructure took place within the United States, but this seems less likely to remain the case as the commercial Internet grows larger and more widespread.
Keywords: Internet infrastructure, Internet access, broadband, United States, governance
1. Introduction
Internet infrastructure is a collective term for all the equipment, personnel, and organizations that support the
operation of the Internet. The commercial Internet arose after many firms and users voluntarily adopted a set of
practices for enabling internetworking, namely, transferring data between local area networks and computer
clients. The commercial Internet began to provide many revenue-generating services in the mid-1990s. As of this
writing, this network supports a wide array of economic services to more than a billion users, and it continues to
grow worldwide.
Generally speaking, four types of rather different uses share the same Internet infrastructure: browsing and e-mail,
which tend to employ low bandwidth and which can tolerate delay; video downloading, which can employ high
bandwidth and can tolerate some delay; voice-over Internet protocol (IP) and video-talk, which tend to employ high
bandwidth and whose quality declines with delay; and peer-to-peer applications, which tend to use high bandwidth
for sustained periods and can tolerate delay, but, in some applications (such as BitTorrent), can impose delay on
others.1
The precise economic characteristic of Internet infrastructure defies simple description. Like other private assets,
sometimes Internet infrastructure is an input in the production of a pecuniary good, and regular investment extends
the functionality or delays obsolescence. Like a public good, sometimes more than one user employs Internet
infrastructure without displacing another or shaping the quality of another user's experience. Even when
visible, the economic contribution of Internet infrastructure is often measured indirectly at best. Yet in contrast to
many private assets or public goods, sometimes many decision makers—instead of one—govern the creation and
deployment of Internet infrastructure. Moreover, although market-oriented prices influence the investment
decisions of some participants, something other than market prices can shape the extent of investment and the
use of such investment.
This array of applications, the mix of economic characteristics, and the economic stakes of the outcomes have
attracted considerable attention from economic analysts. This review summarizes some of the key economic
insights of the sprawling and vast literature on Internet infrastructure.
The chapter first provides a summary of the emergence of the commercial Internet, providing a review of the
common explanations for why its structure and pricing took a specific form. The chapter then describes how the
deployment of broadband access and wireless access altered many aspects of Internet infrastructure. It also
reviews the importance of platforms, another new development that has changed the conduct of firms providing
infrastructure for the commercial Internet. The chapter finishes with a review of the private governance of Internet
infrastructure and the role of key government policies.
2. The Emergence of the Commercial Internet
Which came first, the infrastructure or the commercial services? In the popular imagination the Internet emerged
overnight in the mid-1990s, with the creation of the commercial browser and services employing that browser. In
fact, the Internet developed for more than two decades prior to the emergence of the browser, starting from the
first government funding at the Defense Advanced Research Project Agency (DARPA) in the late 1960s and the
National Science Foundation (NSF) in the 1980s.2 Hence, there is no question that the infrastructure came first. The
Internet infrastructure of the mid-1990s employed many components and inventions inherited from the
government-funded efforts.
Both the commercial Internet and its government-sponsored predecessor are packet switching networks.3 While
the Internet is certainly not the first packet switching network to be deployed, the commercial Internet has obtained
a size that makes it the largest ever built.
In a packet switching network a computer at one end packages information into a series of discrete messages. Each message is of a finite size. As part of initial processing by the packet switching system, larger messages are divided into smaller packets. All of the participating computers use the same conventions for messages and packets. Messages travel successfully between computers when the sending and receiving computers standardize on the same procedures for breaking up and reassembling packets.
Sending data between two such locations relies on something called “protocols,” standardized software commands
that organize the procedures for moving data between routers, computers, and the various physical layers of the
network. Protocols also define rules for how data are formatted as they travel over the network. In 1982 one design
became the standard for the network organized by DARPA, a protocol known as TCP/IP, which stands for
transmission control protocol/internet protocol. Vint Cerf and Robert Kahn wrote the first version, replacing an
outmoded protocol that previously had run the DARPA network. Over time the technical community found
incremental ways to change the protocol to accommodate large-scale deployment of infrastructure and
applications.4
TCP/IP had the same role in the government-sponsored and commercial Internet. It defined the “headers” or labels
at the beginning and end of a packet.5 Those headers informed a computer processor how to reassemble the
packets, reproducing what had been sent. By the mid-1980s all Unix-based computing systems built that interface
into their operating systems, and in the 1990s virtually every computing system could accommodate it.6
An example of TCP/IP in action may help to illustrate its role. Consider sending messages from one computer, say,
at Columbia University in New York, to another, say, at Stanford University in California. In the 1980s such an action
involved the following steps. A user of a workstation, typically using a computer costing tens of thousands of
dollars, would compose the message and then issue a command to send it. This command would cause the data to
be broken into TCP/IP packets going to a central mail machine at Columbia and then a destination mail machine at
Stanford. In between the central mail machine and destination mail machine the file would be broken down into
pieces and completely reassembled at the final destination. During that transmission from one machine to another,
the file typically traveled through one or more routers, connecting various networks and telephone lines. At the
outbound site it was typically convenient to send through a central mailserver, though that did not need to be true
at the inbound site. At the Stanford server, the local computer would translate the data into a message readable on
another person's workstation. If something went wrong—say, a typographical error in the address—then another
set of processes would be initiated, which would send another message back to the original author that something
had failed along the way.7
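The packetization and reassembly described above can be sketched in a few lines of code. The following Python sketch is purely illustrative and is not the actual protocol: real TCP/IP headers carry many more fields (addresses, ports, sequence numbers, checksums), but the basic idea, labeling each packet so the receiving computer can reorder the pieces and reproduce what was sent, is the same.

    # Illustrative sketch of packetization and reassembly (not real TCP/IP).
    # Each header carries only a message id, a sequence number, and a total count;
    # real protocol headers carry far more (addresses, ports, checksums, and so on).
    import random

    def packetize(message: bytes, msg_id: int, payload_size: int = 512):
        """Split a message into header-plus-payload packets of bounded size."""
        chunks = [message[i:i + payload_size]
                  for i in range(0, len(message), payload_size)]
        return [{"msg_id": msg_id, "seq": seq, "total": len(chunks), "payload": chunk}
                for seq, chunk in enumerate(chunks)]

    def reassemble(packets):
        """Reorder packets by sequence number and rebuild the original message."""
        ordered = sorted(packets, key=lambda p: p["seq"])
        assert len(ordered) == ordered[0]["total"], "missing packets"
        return b"".join(p["payload"] for p in ordered)

    original = b"An example message sent from one campus to another." * 40
    packets = packetize(original, msg_id=1)
    random.shuffle(packets)          # packets may arrive out of order
    assert reassemble(packets) == original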
These technical characteristics define one of the striking economic traits of Internet infrastructure: it requires
considerable coordination. First, designers of hardware and software must design equipment to work with other
equipment, and upgrades must meet a similar requirement. Second, efficient daily operations of the Internet require
a wide array of equipment to seamlessly operate with one another. Indeed, the widespread adoption of TCP/IP
partly explains how so many participants coordinate their activity, as the protocol acts as a focal point around
which many participants organize their activities at very low transaction costs. Another key factor, especially during this early period, helped coordinate activity: a principle called end-to-end.8 This principle
emerged in the early 1980s to describe a network where the switches and routers retained general functionality,
moving data between computers, but did not perform any processing. This was summarized in the phrase, “The
intelligence resided at the edges of the network.” Any application could work with any other at any edge, so long
as the routers moved the data between locations.9
End-to-end had substantial economic consequences for the deployment of the commercial Internet. At the time of
its invention, it was a radical engineering concept. It differed from the principles governing the telephone network,
where the switching equipment performed the essential processing activities. In traditional telephony, the end of
the network (the telephone handset) contained little functionality.10 For a number of reasons, this departure from
precedent was supported by many outsiders to the established industry, especially researchers in computing.
Hence, the Internet encountered a mix of benign indifference and active resistance from many established firms in
the communications market, especially in the telephone industry, and many did not invest in the technology, which
later led many to be unprepared as suppliers to the commercial Internet in the mid-1990s.11
End-to-end also had large economic consequences for users. First, the Internet largely employed existing capital at
many installations with comparatively minor software retrofits, repurposing capital for any new function as long as it
remained consistent with end-to-end. Repurposing of existing capital had pragmatic economic advantages, such
as lowering adoption costs. It also permitted Internet applications and infrastructure to benefit from the same
technical changes shaping other parts of computing.12 Accommodating heterogeneous installations supported
another—and, in the long run, especially important—economic benefit: It permitted users with distinct conditions to
participate on the same communications network and use its applications, such as email.
3. The Structure of the Commercial Internet
As long as the Internet remained a government-managed enterprise, the NSF-sponsored Internet could carry only
traffic from the research community, not for commercial purposes.13 In the late 1980s the NSF decided to privatize
the operations of a piece of the Internet it managed (while the military did not privatize its piece). One key
motivation for initiating this action was achievement of economies of scale, namely, the reduction in costs that
administrators anticipated would emerge if researchers shared infrastructure with private users, and if competitive
processes drove private firms to innovate.14 The transition reached resolution by the mid-1990s. Many of the
events that followed determined the structure of supply of Internet infrastructure for the next decade and a
half, particularly in the United States.15
The first group of backbone providers in the United States (i.e., MCI, Sprint, UUNET, BBN) had been the largest
carriers of data in the NSF network. In 1995 and 1996, any regional Internet service provider (ISP) could exchange
traffic with them. At that time the backbone of the US Internet resembled a mesh, with every large firm both
interconnecting with every other and exchanging traffic with smaller firms.16 Some of these firms owned their own
fiber (e.g., MCI) and some of them ran their backbones on fiber rented from others (e.g., UUNet).
After 1997 a different structure began to take shape. It was partly a mesh and partly hierarchical, using “tiers” to
describe a hierarchy of suppliers.17 Tier 1 providers were national providers of backbone services and charged a
fee to smaller firms to interconnect. The small firms were typically ISPs that ranged in size and scale from wholesale
regional firms down to the local ISP handling a small number of dial-in customers. Tier 1 firms did most of what
became known as “transit” data services, passing data from one ISP to another ISP, or passing from a content firm
to a user. In general, money flowed from customers to ISPs, who treated their interconnection fees with backbone
firms as a cost of doing business.
Tier 1 firms adopted a practice known as “peering,” and it appeared to reinforce the hierarchical structure. Peering
involved the removal of all monetary transfers at a point where two tier 1 providers exchanged traffic. Peering
acknowledged the fruitlessness of exchanging money for bilateral data traffic flows of nearly equal magnitude at a
peering point. Hence, it lowered transaction costs for the parties involved. However, because their location and
features were endogenous, and large firms that denied peering to smaller firms would demand payment instead,
many factors shaped negotiations. As a result, the practice became controversial. An open and still unresolved
policy-relevant economic question is whether peering merely reflects a more efficient transaction for large-scale
providers or reflects the market power of large suppliers, providing additional competitive advantages, which
smaller firms or non–tier 1 firms cannot imitate. The economics of peering is quite challenging to characterize
generally, so this question has received a range of analysis.18
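A stylized calculation helps illustrate why billing can seem fruitless when flows are nearly symmetric. The sketch below is purely illustrative: the per-gigabyte price and traffic volumes are invented, and real interconnection agreements are usually metered differently (for example, on peak megabits per second).

    # Hypothetical comparison of metered interconnection versus settlement-free peering
    # between two large backbones exchanging nearly symmetric traffic.

    price_per_gb = 0.05      # invented charge per gigabyte delivered
    a_to_b_gb = 10_200       # traffic network A hands to network B each month (GB)
    b_to_a_gb = 9_800        # traffic network B hands to network A each month (GB)

    invoice_to_a = a_to_b_gb * price_per_gb   # B bills A for carrying A's traffic
    invoice_to_b = b_to_a_gb * price_per_gb   # A bills B for carrying B's traffic
    net_transfer = abs(invoice_to_a - invoice_to_b)

    print(f"Gross invoices: ${invoice_to_a:,.0f} and ${invoice_to_b:,.0f}")
    print(f"Net transfer if both sides bill each other: ${net_transfer:,.0f}")
    # With nearly equal flows the net transfer is small relative to the gross invoices,
    # so both parties can skip metering and billing altogether and simply peer.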
Another new feature of Internet infrastructure also began to emerge at this time, third-party caching services. For
example, Akamai, a caching company, would pay ISPs to locate its servers within key points of an ISP's network.
Content providers and other hosting companies would then pay the caching companies to place copies of their
content on such services in locations geographically close to users, aspiring to reduce delays for users. Users
would be directed to the servers instead of the home site of the content provider and, thus, receive faster
response to queries. A related type of service, a content delivery network (CDN), provided similar services for
electronic retailers. In general, these became known as “overlays,” and since these were not part of the original
design of the noncommercial Internet, numerous questions emerged about how overlays altered the incentives to
preserve end-to-end principles.19
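The redirection behind caching and CDN overlays can be illustrated with a toy routing rule: answer each request from the replica estimated to be closest to the user. The sketch below is only illustrative; real CDNs select servers using measured latency, routing data, and server load rather than straight-line distance, and the cache locations and coordinates here are invented.

    import math

    # Toy illustration of CDN request routing: direct each user to the nearest replica.
    # Locations and coordinates are invented for the example.
    CACHES = {
        "cache-nyc": (40.7, -74.0),
        "cache-chi": (41.9, -87.6),
        "cache-sfo": (37.8, -122.4),
    }

    def nearest_cache(user_lat: float, user_lon: float) -> str:
        """Return the name of the cache whose coordinates are closest to the user."""
        return min(CACHES,
                   key=lambda name: math.hypot(CACHES[name][0] - user_lat,
                                               CACHES[name][1] - user_lon))

    # A user near Palo Alto is served from the West Coast replica, shortening the
    # path to the content and reducing response time.
    print(nearest_cache(37.4, -122.1))   # -> cache-sfo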
These changes also raised an open question about whether a mesh still determined most economic outcomes. For
much Internet service in urban areas the answer appeared to be yes. ISPs had options from multiple
backbone providers and multiple deliverers of transit IP services, and many ISPs multihomed to get faster services
from a variety of backbone providers. ISPs also had multiple options among cache and CDN services. Evidence
suggests that prices for long-distance data transmission in the United States continued to fall after the backbone
privatized, reflecting excess capacity and lower installation costs.20
For Internet service outside of urban areas the answer appeared to be no. ISPs did not have many options for
“middle mile” transit services, and users did not have many options for access services. The high costs of supply
made it difficult to change these conditions.21
Perhaps no factor altered the structure of supply of Internet infrastructure after commercialization more than the
World Wide Web. The web emerged just as privatization began, and it is linked with a particularly important invention, the commercial browser. As the commercial Internet grew in the mid to late 1990s, traffic affiliated with the web overtook electronic mail, a sign that web applications had become the most popular on the Internet. The growth in traffic had not been anticipated at the time the NSF made plans for privatizing the backbone,
and the subsequent growth in traffic fueled a global investment boom in Internet infrastructure.
Tim Berners-Lee built key parts of the World Wide Web.22 In 1991 he made three inventions available on shareware sites for free downloading: html (hypertext markup language), the URL (universal resource locator), and http (hypertext transfer protocol), a hypertext markup language, labeling system, and transfer protocol that together made the transfer of textual and nontextual files easier. By 1994, after the plans for commercialization were set and implemented, a new
browser design emerged from the University of Illinois. It was called Mosaic, and it was widely adopted in the
academic Internet. Mosaic became the model that inspired the founding of Netscape, the first successful
commercial browser, which also motivated Microsoft to develop Internet Explorer, which touched off the browser
wars.23
As of this writing most observers expect data traffic to continue to grow at rates in the neighborhood of 40 percent
to 50 percent a year.24 This is due to the emergence of another set of applications. Data supporting peer-to-peer
and video applications have been growing rapidly since the millennium, each year using larger fractions of
available capacity.25
4. Pricing Internet Access
The value chain for Internet infrastructure begins with the pricing for Internet access. Households and business
establishments pay for access to the Internet.26 ISPs provide that access and use the revenue to pay for inputs,
namely, other Internet infrastructure.
In the first half of the 1990s, most commercial ISPs tried the same pricing norms governing bulletin boards.
For bulletin boards the pricing structure of the majority of services involved a subscription charge (on a monthly or
yearly basis) and an hourly fee for usage. For many applications, users could get online for “bursts” of time, which
would reduce the size of usage fees.
The emergence of faster and cheaper modems in the mid-1990s and large-scale modem banks with lower per-port
costs opened the possibility for a different pricing norm, one that did not minimize the time users spent on the
telephone communicating with a server. The emergence of low-cost routines for accessing a massive number of
phone lines also contributed to a new norm, because it enabled many ISPs to set up modem banks at a scale only
rarely seen during the bulletin-board era.
As the Internet commercialized, two broad viewpoints emerged about pricing. One viewpoint paid close attention to
user behavior. Users found it challenging to monitor time online, often due to multiple users within one household.
The ISPs with sympathy for these user complaints priced their services as unlimited usage for a fixed monthly price.
These plans were commonly referred to as flat-rate or unlimited plans. Another viewpoint regarded user
complaints as transitory. Supporters of this view pointed to cellular telephones and bulletin boards as examples
where users grew accustomed to pricing-per-minute. The most vocal supporter of this view was one up-and-coming bulletin-board firm, America Online (now known simply as AOL), which had seemed to grow with such
usage pricing.
As it turned out, flat rate emerged as the dominant pricing mode for wireline access. There were many reasons for
this. For one, the US telephone universal service policy had long subsidized local landline household calls by
charging greater prices for business and long-distance calls than for local calls, which were priced at a flat rate per
month in almost every state, with the definition of local left to a state regulator. Typically local was defined over a
radius of ten to fifteen miles, sometimes less in dense urban areas, such as Manhattan. Thus, using local rates
reduced household user expenses, thereby making the service more attractive. That enabled an ISP to offer
service to nontechnical users, betting the users would not stay online for long—in effect, anywhere from twenty to
thirty hours of time a month at most. Because such light users did not all dial-in on the same day or at the same
time of day, the equipment investment did not need to handle all calls at once. With such light users, an ISP could
serve a local area with modem bank capacity at anywhere from one-third to one-quarter the size of the total local
service base. A large enough user community could thus share equipment, defraying equipment costs for an ISP
offering flat rate. Experiments showed that the economies of scale for defraying equipment costs in this way could
support approximately 1000 users in a local point of presence. For many dial-up ISPs, this was not a binding
constraint.27
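The shared-equipment arithmetic can be made concrete with a back-of-the-envelope sketch. The hours-per-month figure follows the text's twenty-to-thirty-hour light user; the evening peak window and concentration factor are illustrative assumptions, not figures from the chapter.

    # Back-of-the-envelope modem-bank sizing for a dial-up ISP serving light users.
    # Usage per month follows the text; the peak-window assumptions are invented.

    subscribers = 1000            # users served by one local point of presence
    hours_per_month = 25          # a "light" user: twenty to thirty hours online
    days_per_month = 30
    peak_window_hours = 4         # assume usage concentrates in an evening window
    share_in_peak = 0.5           # assume half of each user's time falls in that window

    peak_user_hours_per_day = subscribers * hours_per_month * share_in_peak / days_per_month
    simultaneous_sessions = peak_user_hours_per_day / peak_window_hours
    ports_needed = simultaneous_sessions * 2.5    # generous headroom for bursts

    print(f"~{simultaneous_sessions:.0f} simultaneous sessions at peak")   # ~104
    print(f"~{ports_needed:.0f} modem ports with 2.5x headroom")           # ~260

Under these assumptions a point of presence needs on the order of 250 to 300 ports for 1,000 light subscribers, roughly the one-quarter to one-third ratio described above, which is what made flat-rate pricing workable.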
Flat rate succeeded for another reason. Other firms entered into the production of web services, complementing
what ISPs provided. Flat rate appealed to users who could then choose among a wide array of media and
services.28
Throughout 1996, 1997, and 1998, ISPs experimented with hourly limits and penalties for exceeding these
caps, yet most users resisted them. The experiments were motivated by many users who stayed online for far
more than twenty to thirty hours a month, thereby ruining the key economics that allowed ISPs to share equipment
across many users.29 Most such limits were not particularly binding—involving monthly limits ranging from sixty to
one hundred hours. Some ISPs tried offering steep discounts for steep limits, such as $10 discounts for thirty hours
a month. Yet few buyers took them, persisting with the slightly more expensive unlimited contracts, typically priced
at $20 per month. In short, despite all these experiments, flat rate dominated transactions during the dial-up era of
the commercial Internet.30
5. The Broadband Upgrade
As Internet applications became more popular and more widely adopted, users pushed against the bandwidth limits
of dial-up Internet access. That motivated some well-placed firms to deploy and offer broadband Internet access. The investment in broadband initiated a second wave of investment in Internet infrastructure after the turn of the millennium. That wave coincided with further capital deepening in business computing and in many facets of the operations to support it.
The two most common forms were cable modem service and digital subscriber line (DSL) service. Cable modem service involved a gradual upgrade to cable plants in many locales, depending on the generation of the cable system.31 Broadband over telephone lines involved upgrades to telephone switches and lines to make it feasible to deliver DSL. Both of these choices typically supported higher bandwidth to the household than from the household; in the telephone case this configuration was called asymmetric digital subscriber line (ADSL).
Broadband clearly gave users a better experience than dial-up access.32 Broadband provides households with
faster Internet service and thus access to better online applications. Broadband also may allow users to avoid an
additional phone line for supporting dial-up. In addition, broadband services are also “always on,” and users
perceive that as a more convenient service. It is also generally faster in use. Maximum rates of 14.4K (kilobits per second) and 28.8K were predominant in the mid-1990s for dial-up modems. The typical bandwidth in the late 1990s was 43K to 51K, with a maximum of 56K. DSL and cable achieved much higher maximum bandwidths, typically somewhere in the neighborhood of 750K to 3M (megabits per second), depending on
the user choices and vendor configuration. Even higher bandwidth became available to some households later.
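The practical difference is easiest to see as transfer time for a file of fixed size. The sketch below uses an arbitrary 5-megabyte file and the line rates quoted above, read as kilobits and megabits per second; protocol overhead and congestion are ignored.

    # Rough transfer-time comparison at the line rates quoted above.
    # The 5 MB file size is an arbitrary example; overhead and congestion are ignored.

    FILE_MB = 5
    FILE_BITS = FILE_MB * 8 * 1_000_000   # megabytes to bits (decimal convention)

    rates_kbps = {
        "14.4K dial-up": 14.4,
        "56K dial-up": 56,
        "750K DSL/cable": 750,
        "3M DSL/cable": 3000,
    }

    for name, kbps in rates_kbps.items():
        minutes = FILE_BITS / (kbps * 1000) / 60
        print(f"{name:>15}: {minutes:5.1f} minutes")
    # 14.4K: ~46 minutes; 56K: ~12 minutes; 750K: ~0.9 minutes; 3M: ~0.2 minutes.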
Figure 1.1 Percentage of Households with Computers and Internet Connections, Selected Years, 1997–
2009.
Source: NTIA (2010).
This story is consistent with Figure 1.1, which summarizes US government surveys of broadband use at
households. The first survey questions about broadband use appear in 2000 and show a growth in
adoption, reaching close to 20 percent of households in 2003, when these surveys were discontinued for some
time.33 The survey resumed in 2007 and the anticipated trajectory continued, with 50.8 percent of households
having broadband in October 2007 and 63.5 percent in October 2009. In the earliest years supply-side issues were
the main determinants of Internet availability and, hence, adoption. Cable and telecom operators needed to retrofit
existing plants, which constrained availability in many places. In those years, the spread of broadband service was
much slower and less evenly distributed than that of dial-up service. Highly populated areas were more profitable
due to economies of scale and lower last-mile expenses. As building has removed these constraints, demand-related factors—such as price, bandwidth, and reliability—have played a more significant role in determining the
margins between who adopts and who does not.34
Suppliers experimented to find appropriate structures for providing broadband access.35 Countries differed
significantly in the extent to which these different delivery channels played a role. Some cable firms built out their
facilities to deliver these services in the late 1990s, and many—especially telephone companies—waited until the
early to mid-2000s. In some rich countries there was growing use of third and fourth delivery channels: fiber to the home and access via mobile modes.36
A similar wave of investment occurred in many developed countries over the first decade of the new millennium.
Figure 1.2 shows the subscribers per 100 inhabitants in many countries in 2009.37 Although these numbers must
be interpreted with caution, a few facts stand out. A few dozen countries in the Organisation for Economic Cooperation and Development (OECD) experienced substantial adoption of broadband, and many did not. The
variance is not surprising. GDP per capita and broadband per capita have a simple correlation of 0.67.38
Figure 1.2 OECD Broadband Subscribers per 100 Inhabitants by Technology, June 2009.
Source: OECD.
Figure 1.3 shows the growth of subscribers per 100 inhabitants in the G7 countries (Canada, the United States, the United Kingdom, Germany, France, Italy, and Japan), as well as the entire OECD. Though countries differ in the
level of broadband use, the similarities among them are apparent. Adoption of broadband grew in all rich and
developed countries.
Figure 1.3 Broadband Penetration, G7 Countries.
Source: OECD Broadband Portal, http://www.oecd.org/sti/ict/broadband, Table 1i.
To give a sense of worldwide patterns, Table 1.1 presents broadband and dial-up adoption for seven countries—
the United States, Canada, United Kingdom, Spain, China, Mexico, and Brazil.39 These seven represent typical
experiences in the high-income and middle-income countries of the world. The broadband data in Table 1.1 come
from Point-Topic, a private consultancy.40 One fact is immediately obvious. The scale of adoption in the United
States and China far outweighs the scale of adoption in any other country. That occurs for two rather obvious
economic reasons. The United States and China have much larger populations than the United Kingdom, Spain, and
Canada. Although Mexico and Brazil also have large populations, those countries had much lower rates of
adoption. In short, the general level of economic development is a major determinant of adoption.
Table 1.1 Broadband Subscribers from Point Topic (in thousands)

Nation            2003    2004    2005    2006    2007    2008    2009    CAGR
Brazil             634   1,442   2,671   4,278   5,691   7,509   9,480   47.2%
Canada           3,706   4,829   5,809   6,982   8,001   8,860   9,528   14.4%
China                –  11,385  20,367  30,033  41,778  54,322  68,964   35.0%
Mexico             234     429   1,060   1,945   3,106   4,774   7,836   65.1%
Spain            1,401   2,524   3,444   5,469   7,322   8,296   9,023   30.5%
United Kingdom     960   3,734   7,203  10,983  13,968  16,282  17,641   51.6%
United States   16,042  28,770  37,576  47,489  58,791  67,536  77,334   25.2%

Source: Greenstein and McDevitt (2011)
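The CAGR column of Table 1.1 can be recomputed from the series themselves. A minimal sketch follows; note that the published figures appear to divide by the number of observations in each series (seven for most countries, six for China, whose series starts in 2004) rather than by the number of year-to-year intervals, so the sketch adopts that convention as an assumption.

    # Recompute the CAGR column of Table 1.1 from the annual series.
    # Assumption: CAGR = (last/first)**(1/n) - 1 with n equal to the number of
    # observations, which reproduces the published figures.

    subscribers_thousands = {
        "Brazil":         [634, 1442, 2671, 4278, 5691, 7509, 9480],
        "Canada":         [3706, 4829, 5809, 6982, 8001, 8860, 9528],
        "China":          [11385, 20367, 30033, 41778, 54322, 68964],   # 2004-2009
        "Mexico":         [234, 429, 1060, 1945, 3106, 4774, 7836],
        "Spain":          [1401, 2524, 3444, 5469, 7322, 8296, 9023],
        "United Kingdom": [960, 3734, 7203, 10983, 13968, 16282, 17641],
        "United States":  [16042, 28770, 37576, 47489, 58791, 67536, 77334],
    }

    def cagr(series):
        n = len(series)
        return (series[-1] / series[0]) ** (1 / n) - 1

    for nation, series in subscribers_thousands.items():
        print(f"{nation:>15}: {cagr(series):.1%}")
    # Matches the table: Brazil 47.2%, Canada 14.4%, China 35.0%, Mexico 65.1%,
    # Spain 30.5%, United Kingdom 51.6%, United States 25.2%.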
The popularity of wireline broadband access has reopened questions about pricing. Following precedent, most
broadband providers initially offered unlimited plans. High volume applications—such as video downloading and
peer-to-peer applications—placed pressures on capacity, however. This motivated providers to reconsider their
flat-rate pricing plans and impose capacity limitations. Hence, a decade and a half after the blossoming of the
commercial Internet there was considerable speculation about whether video applications would generate a new
norm for pricing access. The answer partly depended on the regulatory regime governing Internet access and
remains open as of this writing.41
6. Expanding Modes of Access
In the decade after the millennium another and new set of mobile broadband services began to gain market traction
with businesses and households. The first were providers known as wireless ISPs, or WISPs for short. They
provided access via a variety of technologies, such as satellite, high-speed WiFi (a wireless local area network technology that uses high-frequency radio signals to transmit and receive data), WiMax (worldwide interoperability for microwave access), and other terrestrial fixed-point wireless delivery modes. These providers primarily served
low-density locations where the costs of wireline access were prohibitive.42
Another wireless mode became popular: smart phones. Though smart phones had been available in a variety of models for many years, with the Blackberry being among the most popular, it is commonly acknowledged that the category began to gain in adoption after the introduction of the Apple iPhone in 2007. As of this writing,
reports suggest the Apple iPhone, Google Android, and new Blackberry designs dominate this product category for
the time being. Yet competitive events remain in flux. Competitive responses organized by Palm, Microsoft, and
Nokia have been attempted, and those firms and others will continue to attempt more. In short, irrespective of
which firms dominate, smart phones are an important new access point.
The economics of smart-phone use remain in flux as well. It is unclear whether the majority of users treat their
smart phones as substitutes for their home broadband use. Smart phones provide additional mobility, and that might
be a valuable trait by itself. If smart phones are simply additional services due to mobility, then the economic value
of smart phone use should be interpreted one way. If the additional services are partial or complete substitutes,
then the economic value of smart phone use should be interpreted another.
Another open question concerns the boundaries between home and business use. With wireline broadband this is
less of a question because the location of the Internet access largely identifies its buyer (home or
business). Smart phones, however, sell both to home and business, thus obliterating what had been a bright line
between these kinds of customers and altering the economic analysis of the adoption decision and its impact. This
also alters the potential revenue streams for Internet access, as well as the geographic distribution of use, which
could lead to changes in the funding for more Internet infrastructure, as well as the geographic features of
investment.
In addition, because of leapfrogging of mobile over fixed broadband in emerging economies, mobile broadband
may be the first broadband experience for many people. So not only is it unclear whether mobile broadband
substitutes or complements fixed broadband, but the extent of substitutability could vary substantially by country
according to each country's stage of infrastructure development.
The changing modes of access opened multiple questions about changing norms for pricing-data services.
Wireless data services over cellular networks generally did not use flat-rate pricing and often came with explicit
charges for exceeding capacity limits, for example. As of this writing, the norm for these plans was, and continues
to be, an open question.
7. Strategic Behavior from Suppliers
When it first deployed, the Internet was called a “network of networks.” Although the phrase once had meaning as
a description of the underlying configuration of infrastructure, it is misleading today. Leading firms and their
business partners view the commercial Internet through the same lens they view the rest of computing. To them,
the Internet is a market in which to apply their platform strategies. In short, the commercial Internet should
be called a “network of platforms.”
The list of important platforms in Internet infrastructure is long, and many involve familiar firms, such as Google,
Apple, Cisco, Microsoft, and Intel. As a result, the platform strategies of private firms shape the evolution of Internet
infrastructure.43 This was one of the biggest changes wrought by introducing private firms into the supply of
Internet infrastructure.
Commercial firms regard platforms as reconfigurable clusters and bundles of technical standards for supporting
increasing functionality.44 From a user perspective, platforms usually look like “standard bundles” of components,
namely, a typical arrangement of components for achieving functionality.45
Platforms matter because platform leaders compete with others. That usually leads to lower prices, competition in
functionality, and accelerated rollout of new services. Many platform leaders also develop new functionality and
attach it to an existing platform.46 Generally, that brings new functionality to users. A platform leader also might
seek to fund businesses outside its area of expertise, if doing so increases demand for the core platform
product.47 Hence, in the commercial Internet, the strategic priorities of its largest providers tend to shape the
innovative characteristics of Internet infrastructure.
Today many observers believe that Google—which did not even exist when the Internet first commercialized in the
mid-1990s—has the most effective platform on the Internet. Hence, its behavior has enormous economic
consequences. For example, sometimes it makes code accessible to programmers for mash-ups—for example,
building services that attract developers and users with no direct way to generate revenue.48 Sometimes its
investments have direct effects on other Internet infrastructure firms. For example, Google operates an enormous
global Internet backbone, as well as CDN and caching services, for its own operations, making it one of the largest
data-transit providers in the world today. The firm also uses its platform to shape the actions of others. Many other
firms expend considerable resources optimizing their web pages to appear high on Google's search results, and
Google encourages this in various ways. For example, it lets potential advertisers, who will bid in Google's auction,
know which words are the most popular.
Networking equipment provider Cisco is another prominent platform provider for Internet infrastructure, having
grown to become a large provider of equipment for business establishments. For many years, Cisco made most of
its profit from selling hubs and routers, so the platform strategy was rather straightforward. Cisco aspired to
develop closely related businesses, offering users a nearly integrated solution to many networking problems. At
the same time, Cisco kept out of service markets and server applications, leaving that to integrators, consultants,
and software vendors. That way, Cisco did not compete with its biggest business partners. More recently, however,
Cisco branched into consumer markets (with its purchase of Linksys). The firm also has moved into some server markets (competing with HP) and some software/service areas related to videoconferencing and telepresence (by purchasing WebEx, for example). Cisco no longer draws the boundary where it used to, and it is unclear how wide
a scope the firm wants its platform to cover.
Microsoft is perhaps the next best known platform provider whose business shapes Internet infrastructure. In the
early 1990s, Microsoft offered TCP/IP compatibility in Windows as a means of enhancing its networking software, as
well as to support functionality in some of its applications, such as Exchange. In the mid-1990s, Microsoft offered a
browser, partly as a gateway toward developing a broader array of web services, and partly for defensive
purposes, to establish its proprietary standards as dominant.49 Although Microsoft continues to support these
commercial positions and profit from them, the firm has not had as much success in other aspects of its commercial
Internet ventures. MSN, search, mobile OS, and related activities have not yielded sustained enviable success
(yet). Only the firm's investments in Xbox Live have generated a significant amount of Internet traffic, and it
continues to push the boundaries of large-scale multiplayer Internet applications.
Another PC firm, Intel, has an Internet platform strategy. Intel's most important Internet activity came from
sponsoring a Wi-Fi standard for laptops under the Centrino brand in 2003.50 To be clear, this did not involve
redesigning the Intel microprocessor, the component for which Intel is best known. It did, however, involve
redesigning the motherboard for desktop PCs and notebooks by adding new parts. This redesign came with one
obvious benefit: It eliminated the need for an external card for the notebook, usually supplied by a firm other than
Intel and installed by users (or OEMs—original equipment manufacturers) in an expansion slot. Intel also hoped that
its endorsement would increase demand for wireless capabilities within notebooks using Intel microprocessors by,
among other things, reducing their weight and size while offering users simplicity and technical assurances in a
standardized function. Intel hoped for additional benefits for users, such as more reliability, fewer set-up difficulties,
and less frequent incompatibility in new settings. Intel has helped fund conformance-testing organizations,
infrastructure development, and a whole range of efforts in wireless technology. More recently, it has invested
heavily in designing and supporting other advanced wireless standards, such as WiMax.
As with many other aspects of commercialization, the importance of platforms is both cause for celebration and a
source of concern. It is positive when platform strategies help firms coordinate new services for users. Why is the
emergence of platform providers a concern? In short, in this market structure the private incentives of large
dominant firms determine the priorities for investment in Internet infrastructure. Under some circumstances
dominant platform firms have incentives to deny interconnection to others, to block the expansion of others, and, in
an extreme case, to smother the expansion of new functionality by others. When Internet alarmists worry about
conduct of proprietary platforms, they most fear deliberate introduction of incompatibilities between platforms, and
other conduct to deliberately violate end-to-end principles.51 Microsoft, AOL, Intel, Comcast, and WorldCom have
all shown tendencies toward such behavior in specific episodes. (p. 17) Open-source advocates who claim that
they benefit the Internet usually mean that they are preventing such defensive activity by dominant platform leaders.
More recently, a range of behavior from Apple, Facebook, Twitter, American Airlines, and the Wall Street Journal
have raised fears about the “splintering” of the Internet.52 Splintering arises when users lose the ability to
seamlessly move from one application to another or vendors lose a common set of platforms on which to develop
their applications. Steve Jobs’ decision not to support Flash on the iPhone and iPad is one such recent example. So
is Twitter's resistance to having its services searched without payment from the search engines, and Facebook has
taken a similar stance. American Airlines refused to let Orbitz display its prices unless Orbitz modified its website to
accommodate new services American Airlines wanted to sell to frequent fliers; Orbitz refused. As a result, the
two companies no longer cooperate. For some time the Wall Street Journal has refused to let users search
extensively in its archives without a subscription, and its management openly discusses aspirations to gain a fee
from search engines, much as Facebook and Twitter did.
What economic incentives lie behind these concerns? In a nutshell, the answer begins as follows: no law compels any
firm to interconnect with any other. Every firm always has the option to opt out of the Internet, and/or do something
slightly less dramatic, such as opt out of commonly used standards, and/or try to get business partners to use
proprietary standards. At any given moment, that means a firm in a strong competitive position will have incentives
to raise switching costs to its installed base of developers and users, and/or deter migration of users and
developers to competing platforms. A variety of strategic options might contribute to such goals, such as designing
proprietary standards or blocking others from using a rival firm's standard designs.
Concerns about splintering also arise because suppliers must cooperate in order to deliver services. Because one
party's cost is another party's revenue, firms have incentives to sacrifice aspects of cooperation in the attempt to
gain rents. Such behavior is not necessarily in users’ interests.
Consider the negotiations between Cogent and Sprint, for example, which broke down after a peering dispute.53
Cogent refused to pay Sprint after Sprint insisted Cogent had not met its obligations under a peering agreement.
After a long stand-off, Sprint's management decided to shut down its side of the peering. That action had
consequences for users on both networks who did not multihome, that is, did not use more than one backbone firm.
One set of exclusive Sprint users could not reach another set of exclusive Cogent users.54 To make a long story
short, users of both carriers were angry, and Sprint's management gave in after a few days. Soon after, the two
firms came to a long-term agreement whose details were not disclosed publicly. Notice how inherently awkward the
negotiations were: Cogent necessarily interacted or exchanged traffic with the very firm with which it competes,
Sprint.55
Another set of cases illustrates how the interests of one participant may or may not intersect with the interests of all
participants. For example, consider Comcast's unilateral decision to throttle peer-to-peer (P2P) applications on
its (p. 18) lines with resets.56 This case contains two economic lessons. On the one hand, it partially favors giving
discretion to Comcast's management. Management could internalize the externality one user imposes on others—
managing traffic for many users’ general benefit. That is, P2P applications, like BitTorrent, can impose large
negative externalities on other users, particularly in cable architectures during peak-load time periods. Such
externalities can degrade the quality of service to the majority of users without some sort of limitation or restriction.
On the other hand, Comcast's behavior also shapes the prospects of other application providers, including future
entrepreneurs, many of whom are not yet present. It would be quite difficult for Comcast and future entrants to reach a
bargain because some of them do not even exist yet. Eventually the FCC intervened with Comcast, issuing an
order to cease blocking, which led to a court case over its authority to issue such an order. As of this writing, the
full ramifications of this court case have not played themselves out.
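The congestion externality at the heart of this episode is easy to illustrate with a back-of-the-envelope calculation. The sketch below uses hypothetical numbers and a standard max-min fair-sharing rule (not Comcast's actual traffic-management algorithm, which is not described in these terms here) to show how a few heavy peer-to-peer uploaders on a shared cable node can crowd out many light users unless some limitation is imposed.

```python
# Illustrative sketch with made-up numbers: congestion on a shared cable node.
def max_min_fair(capacity_mbps, demands_mbps):
    """Allocate shared capacity across users' demands using max-min fairness."""
    remaining = capacity_mbps
    order = sorted(range(len(demands_mbps)), key=lambda i: demands_mbps[i])
    alloc = [0.0] * len(demands_mbps)
    for rank, i in enumerate(order):
        share = remaining / (len(demands_mbps) - rank)  # equal split of what is left
        alloc[i] = min(demands_mbps[i], share)
        remaining -= alloc[i]
    return alloc

# A node with 10 Mbps of shared upstream capacity, 20 light users wanting
# 0.2 Mbps each, and 5 P2P seeders willing to use as much as they can get.
demands = [0.2] * 20 + [10.0] * 5
alloc = max_min_fair(10.0, demands)
print("light user: %.2f Mbps, P2P user: %.2f Mbps" % (alloc[0], alloc[-1]))
# Without any management, a split proportional to demand would leave each
# light user with only 10 * 0.2 / 54 = 0.04 Mbps -- the externality at issue.
```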
Another case involving Comcast also illustrates the open-ended nature of the cooperation between firms. In
November 2010, Comcast, the largest provider of broadband Internet access in the United States, entered into a
peering dispute with Level 3, one of the backbone firms with which it interconnected. Level 3 had made an
arrangement with Netflix, a video provider, and this had resulted in Level 3 providing more data to Comcast than
Comcast provided to Level 3. Comcast demanded that Level 3 pay for giving more data to Comcast than Comcast
gave in return, to which Level 3 agreed.57 This agreement was significant because it reversed the usual flow of
payments: instead of an ISP paying a backbone provider for transit services, the backbone provider paid a large ISP to reach end users. As
of this writing, it is an open question how common such agreements will become, and whether they will alter the
general flow of dollars among Internet infrastructure firms.
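Peering policies of this kind typically turn on whether the traffic exchanged between two networks is "roughly balanced." The sketch below illustrates the arithmetic with a hypothetical 2:1 threshold and invented traffic volumes; the actual terms of the Comcast and Level 3 arrangement were not disclosed.

```python
# Illustrative sketch of a traffic-ratio test of the kind found in many
# settlement-free peering policies. The 2:1 threshold and the volumes below
# are hypothetical, not the parties' actual contract terms.

def settlement_free(sent_gbps, received_gbps, max_ratio=2.0):
    """Peering stays settlement-free if traffic is roughly balanced."""
    ratio = sent_gbps / received_gbps
    return max(ratio, 1.0 / ratio) <= max_ratio

# Before the video arrangement: roughly balanced traffic.
print(settlement_free(sent_gbps=90, received_gbps=80))    # True
# After: far more data flows into the access network than comes back,
# the kind of imbalance that can trigger a demand for payment.
print(settlement_free(sent_gbps=500, received_gbps=80))   # False
```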
8. Governance for Internet Infrastructure
A number of organizations play important roles in supporting the design, upgrading, and operations of Internet
infrastructure. The most notable feature of the governance of Internet infrastructure is how much it differs from the
governance found in any other communications market.
One notable feature of this structure was the absence of much government directive or mandate. It is incorrect to
say that the government was uninvolved: After all, the NSF and Department of Defense both had played a crucial
role in starting and sponsoring organizations that managed and improved the operations of the Internet.58 Rather,
the commercial Internet embodied the accumulation of multiple improvements suggested through a process of
consensus in committees, and that consensus depended in large part on private action, what economists call (p.
19) “private orderings.”59 Unlike any other communication network, governments did not play a substantial role in
these private orderings.
The organization that governs the upgrades to TCP/IP is the Internet Engineering Task Force (IETF). It was
established prior to the Internet's privatization, and continued as a nonprofit organization after its
commercialization. Today it hosts meetings that lead to designs that shape the operations of every piece of
equipment using TCP/IP standards.60 Many of these decisions ensured that all complying components would
interoperate. Today decisions at the IETF have enormous consequences for the proprietary interests of firms.
Standards committees had always played some role in the computer market, and they played a similar role in the
shaping of Internet infrastructure. The Institute of Electrical and Electronics Engineers (IEEE), for example, made
designs that shaped the LAN market, modem, and wireless data communications markets.61
Aside from the absence of government mandates, these groups also were notable for the absence of dominant
firms. They were not beholden to the managerial auspices of AT&T or IBM, or any other large firm, for example.
Though all those firms sent representatives who had a voice in shaping outcomes, these institutions were
characterized by divided technical leadership.
That does not imply that all these organizations conducted their business in a similar manner. On the contrary,
these forums differed substantially in their conduct.62 The World Wide Web Consortium (W3C) offers an illuminating
comparison. Berners-Lee forecast the need for an organization to assemble and standardize pieces of code into a
broad system of norms for operating in the hyper-text world. He founded the World Wide Web Consortium for this
purpose. In 1994 he established the offices for the W3C in Cambridge, Massachusetts, just in time to support an
explosion of web-based services that took advantage of existing Internet infrastructure.
Berners-Lee stated that he had wanted a standardization process that worked more rapidly than the IETF but
otherwise shared many of its features, such as full documentation and unrestricted use of protocols. In contrast to
the IETF, the W3C would not be a bottom-up organization with independent initiatives, nor would it have
unrestricted participation. Berners-Lee would act in a capacity to initiate and coordinate activities. To afford some
of these, his consortium would charge companies for participating in efforts and for the right to keep up-to-date on
developments.63
The governance structures for the IETF and the W3C also can be compared to what they are not, namely, the next
closest alternative for global networking—the Open Systems Interconnection model, a.k.a. the OSI seven-layer model.
The OSI was a formal standard design for interconnecting networks that arose from an international standards
body, reflecting the representation of multiple countries and participants. The processes were quite formal. The
network engineering community in the United States preferred its bottom-up approach to the OSI's top-down
approach and, when given the opportunity, invested according to those preferences.64
The lack of government involvement could also be seen in other aspects of the Internet in the United States. For
example, the Federal Communications (p. 20) Commission (FCC) refrained from mandating most Internet
equipment design decisions. Just as the FCC had not mandated Ethernet design standards, so it let the spectrum
become available for experiments by multiple groups who competed for wireless Ethernet standards, which
eventually became Wi-Fi. Similarly, the FCC did not mandate a standard for modems other than to impose
requirements that limited interference. It also did not mandate an interconnection regulatory regime for Internet
carriers in the 1990s.65
The US government's most visible involvement in governance has come with its decisions for the Internet
Corporation for Assigned Numbers and Names (ICANN), an organization for governing the allocation of domain
names. The NSF originally took responsibility for domain names away from the academic community prior to
privatizing the Internet, giving it to one private firm. The Department of Commerce established a nonprofit
organization to provide oversight of this firm, with the understanding that after a decade ICANN would eventually
become a part of the United Nations.66 This latter transfer never took place, and, as of this writing, ICANN remains a
US-based nonprofit corporation under a charter from the Commerce Department.
One other notable innovative feature of Internet infrastructure is its reliance on the behavioral norms and outcomes
of open-source projects. This had substantial economic consequences, establishing behavior norms for
information transparency that had never before governed the design of equipment for a major communication
market. Key aspects of Internet infrastructure embedded designs that any firm or user
could access without restriction, and to which almost any industry participant could make contributions.
One well-known open-source project was Linux, a basis for computer operating systems. It was begun by Linus
Torvalds in the early 1990s as a derivative of, or "fix" to, Unix. It was freely distributed, with alternating releases of a
“beta” and “final” version. Starting around 1994 to 1995, about the same time as the commercialization of the
Internet, Linux began to become quite popular. What had started as a small project caught the attention of many
Unix users, who started contributing back to the effort. Many users began converting from proprietary versions of
Unix (often sold by the hardware manufacturers) and began basing their operating systems on Linux, which was
not proprietary. This movement gained so much momentum that Linux-based systems became the most common
server software, especially for Internet servers.67
Apache was another early project founded to support and create “fixes” for the HTTP web server originally written
by programmers at the National Center for Supercomputing Applications (NCSA) at the University of Illinois. By
2006, more than 65 percent of websites in the world were powered by the Apache HTTP web server.68 Apache
differed from many other open-source organizations in that contributors “earned” the right to access the code. To
be a contributor one had to be working on at least one of Apache's projects. By 2006, the organization had an
active and large contributor base.
(p. 21) Perhaps the most well-known open source format was the least technical. It originated from something
called a wiki. Developed in 1995 by Ward Cunningham, a software engineer from Portland, Oregon, wikis can either
be used in a closed work group or used by everyone on the Internet. They originally were formed to replicate or
make a variation on existing products or services, with the purpose of fixing bugs within the various systems.
Accordingly, wikis were first developed and intended for software development, but they grew beyond that first use
and came to be applied to a multitude of purposes. In short, wikis became an essential piece of software infrastructure
for a wide range of collaborative Internet applications.
A particularly popular application of wikis, Wikipedia, garnered worldwide attention. In the case of Wikipedia, the
format was applied to the development of textual and nontextual content displayed on the Web. It is an online-only
encyclopedia. The content is user-created and edited. As its homepage proudly states, it is “The Free
Encyclopedia That Anyone Can Edit.” The site has always been free of charge and never accepted advertising.69
Wikipedia beat out Microsoft's Encarta for the honor of the Internet's top research site in 2005, a position that it has
held ever since.70
An experimental form of copyright license, the Creative Commons license, spread extraordinarily fast. This license,
introduced only in 2001, is used by over 30 million websites today.71 It has begun to play a prominent role in online
experimentation and everyday activity. Creative Commons licenses help organizations accumulate information in a
wide array of new business formats. Flickr is one successful example, having recently passed the milestone of four
billion photos on its site. The Creative Commons license also is employed by numerous Web 2.0 initiatives and new
media, which support online advertising.
After so much experience with open source it is no surprise that the major participants in Internet infrastructure no
longer leave these institutions alone. The standardization organizations find their committees filled with many
interested participants, some with explicit commercial motives and some not. These institutions show signs of this
stress, chiefly a slowing of their decision making, if they reach decisions at all. Perhaps that should also be
cause for celebration, since it is an inevitable symptom of commercial success and the large commercial stakes for
suppliers.72
9. Broadband Policy
Many governments today, especially outside the United States, are considering making large subsidies for
broadband investments. Some governments, such as those of South Korea and Australia, have already done so,
making next-generation broadband widely available. Many years of debate in the United States led to the
emergence of a National Broadband Plan, released in March 2010.73 Related debates also led to a large European
framework for the governance of investments in broadband.74
(p. 22) Some of the key issues can be illustrated by events in the United States. At the outset of the commercial
Internet, policy favored allowing firms to invest as they please. During the latter part of the 1990s, policy did not
restrict the ability of firms to respond to exuberant and impatient demand for new Internet services. After a time, the
infrastructure to support those new services became far larger than the demand for services.75 After the dot-com
boom came to a bust, the United States found itself with excessive capacity in backbone and many other
infrastructure facilities to support Internet services.
In the decade after, policy continued to favor privately financed investment. That resulted in a gradual building of
broadband. Why did private supply favor gradualism? In short, aside from perceptions of overcapacity, few
executives at infrastructure firms would ever have deliberately invested resources in an opportunity that was
unlikely to generate revenue until much later, especially ten to twenty years later. Corporate boards would not
have approved of it, and neither would stockholders. One of the few firms to attempt such a strategy was Verizon,
which unveiled a program to build fiber to the home in the latter part of the first decade after the millennium. Due to
low take-up, Verizon did not fully build these services in all its territory.76
Most arguments for building next-generation Internet broadband ahead of demand faced large political obstacles.
Consider one justification, economic experimentation: better broadband clearly helps experimentation in
applications by making users better customers for online advertising and electronic retailing. Although the monetary costs
are easy to tally, the benefits are not. Relatedly, the costs are focused, but the gains are diffuse, thus making it
difficult to show that broadband caused the associated gain, even if, broadly speaking, everyone recognizes that
broadband raised firms’ productivity and enhanced users’ experience. Accordingly, financing broadband would
involve a general tax on Yahoo and Amazon and Google and YouTube and other national electronic retailers and
application providers who benefit from better broadband. Needless to say, considerable political challenges
interfere with the emergence of such schemes. Some countries, such as Korea, have managed to come to such a
political agreement, but these examples are the exception, not the rule.
Another set of policies considers subsidizing rural broadband, that is, subsidizing the costs of building wireline
broadband supply in high-cost areas.77 Since private supply already covers the least costly areas to supply, only
a small fraction of potential households benefit from such subsidies, and the costs of subsidizing buildouts are
especially high. Such subsidies face numerous challenges. The US National Broadband Plan provides an excellent
summary of the issues. Many of the justifications for these subsidies are noneconomic in nature—aimed at
increasing civic engagement among rural populations, increasing attainment of educational goals among children,
or increasing the likelihood of obtaining specific health benefits. Accordingly, the decision to provide such
subsidies is often a political decision rather than purely an economic one.
Another open question concerns the governance of deployment of access networks. In Europe the governments
have chosen a structure that essentially (p. 23) separates ownership of transmission facilities from ownership of
content, and mandates interconnection for many rivals at key points.78 In the United States there was a preference
for private provision of backbone and access networks in the first decade of the millennium, and a light-handed
degree of regulatory intervention, so providers were not required to offer interconnection to others.
No simple statement could characterize the changing norms for regulating Internet infrastructure. For example, as
of this writing, at the federal level, there are initiatives under way to adopt formal policies for ensuring the openness
of Internet access.79 At the local level, there are a range of initiatives by municipalities to provide local access in
competition with private suppliers.80 There is also effort to limit municipal intervention in the provision of access or
deter state limitations on local initiatives.81
10. Summary
No single administrative agency could possibly have built and managed the commercial network that emerged after
the privatization of the Internet. The shape, speed, growth, and use of the commercial Internet after 1995
exceeded the ability of any forecaster inside or outside government circles. The value chain for Internet services
underwent many changes after the Internet was privatized. More investment from private firms, and more entry
from a range of overlays and new applications, altered nearly every aspect of the structure of the value chain.
This evolution occurred with only a light hand of direction from government actors, and with astonishing speed.
Many key events in Internet infrastructure took place within the United States in the first decade and a half of the
commercial Internet, but this appears to be less likely as the commercial Internet grows larger and more
widespread. While the United States continues to be the source of the largest number of users of Internet services,
and the single greatest origin and destination for data traffic, the US position in the global Internet value chain will
not—indeed, cannot—remain dominant. That should have enormous consequences for the evolution of the
structure of global Internet infrastructure because many countries insist on building their infrastructure according
to principles that differ from those that governed the first and second waves of investment in the United States. The
boundaries between public and private infrastructure should change as a result, as should the characteristics of
the governance and pricing of Internet infrastructure.
It is no surprise, therefore, that many fruitful avenues for further economic research remain open. For example,
what frameworks appropriately measure the rate of return on investment in digital infrastructure by public and
private organizations? Through what mechanisms does advanced Internet infrastructure produce (p. 24) economic
growth, and to which industries in which locations do most of the positive and negative effects flow? What factors
shape the effectiveness of different governance structures for open structures, such as those used by the IETF?
What is the quantitative value of these novel governance structures?
For the time being there appears to be no end in sight to investment in Internet
infrastructure. Indeed, as of this writing, many questions remain open about the value of different aspects of IT in
the long run, and firms continue to explore approaches to creating value. Virtually all participants in these markets
expect continual change, as well as its twin, the absence of economic tranquility.
References
Abbate, J., 1999. Inventing the Internet, MIT Press: Cambridge, Mass.
Aizcorbe, A., K. Flamm, and A. Khursid, 2007. “The Role of Semiconductor Inputs in IT Hardware Price Decline:
Computers vs. Communications,” in (eds.) E. R. Berndt and C. M. Hulten, Hard-to-Measure Goods and Services:
Essays in Honor of Zvi Griliches, University of Chicago Press, pp. 351–382.
Anderson, C., and M. Wolff, 2010. “The Web Is Dead. Long Live the Internet,” Wired, September.
http://www.wired.com/magazine/2010/08/ff_webrip/.
Arora, A., and F. Bokhari, 2007. “Open versus Closed Firms and the Dynamics of Industry Evolution,” Journal of
Industrial Economics, 55(3), 499–527.
Augereau, A., S. Greenstein, and M. Rysman, 2006. “Coordination versus Differentiation in a Standards War: 56K
Modems,” Rand Journal of Economics, 34 (4), 889–911.
Baumol, W., and twenty-six economists, 2006. Economists’ Statement on U.S. Broadband Policy (March 2006). AEI-Brookings Joint Center Working Paper No. 06-06-01. Available at SSRN: http://ssrn.com/abstract=892009.
Berners-Lee, T., and M. Fischetti, 1999. Weaving the Web, The Original Design and Ultimate Destiny of the World
Wide Web, Harper Collins, New York.
Besen, S., P. Milgrom, B. Mitchell, and P. Sringanesh, 2001. “Advances in Routing Technologies and Internet Peering
Agreements,” American Economic Review, May, pp. 292–296.
Blumenthal, M. S., and D. D. Clark, 2001. “Rethinking the Design of the Internet: The End-to-End Arguments vs. The
Brave New World.” In (eds.) B. Compaine and S. Greenstein, Communications Policy in Transition: The Internet
and Beyond. MIT Press: Cambridge, Mass., pp. 91–139.
Bresnahan, T., 1999. “The Changing Structure of Innovation in Computing” in (ed.) J. A. Eisenach and T. M. Lenard,
Competition, Convergence and the Microsoft Monopoly: Antitrust in the Digital Marketplace, Kluwer Academic
Publishers, Boston, pp. 159–208. (p. 29)
Bresnahan, T., and Pai-Ling Yin, 2007. “Standard Setting in Markets: The Browser Wars,” in (eds.) S. Greenstein
and V. Stango, Standards and Public Policy, Cambridge University Press; Cambridge, UK. pp. 18–59.
Bresnahan, T., and S. Greenstein, 1999. “Technological Competition and the Structure of the Computer Industry,”
Journal of Industrial Economics, March, pp. 1–40.
Bresnahan, T., S. Greenstein, and R. Henderson, 2011. “Schumpeterian Competition and Diseconomies of Scope:
Illustrations from the Histories of IBM and Microsoft.” Forthcoming in (eds.) J. Lerner and S. Stern, The Rate and
Direction of Inventive Activity, 50th Anniversary, National Bureau of Economic Research.
http://www.nber.org/books/lern11-1, accessed March, 2012.
Burgelman, R., 2007. “Intel Centrino in 2007: A New Platform Strategy for Growth.” Stanford University Graduate
School of Business, SM-156.
Chiao, B., J. Tirole, and J. Lerner, 2007. “The Rules of Standard Setting Organizations,” Rand Journal of Economics,
34(8), 905–930.
Clark, D., W. Lehr, S. Bauer, P. Faratin, R. Sami, and J. Wroclawski, 2006. “Overlays and the Future of the Internet,”
Communications and Strategies, 63, 3rd quarter, pp. 1–21.
Crandall, R., 2005. “Broadband Communications,” in (eds.) M. Cave, S. Majumdar, and Vogelsang, Handbook of
Telecommunications Economics, pp. 156–187. Amsterdam, The Netherlands: Elsevier.
Cusumano, M., and D. Yoffie, 2000. Competing on Internet Time: Lessons from Netscape and Its Battle with
Microsoft. New York: Free Press.
Dalle, J.-M., P. A. David, R. A. Ghosh, and F. Wolak, 2004. “Free & Open Source Software Developers and ‘the
Economy of Regard’: Participation and Code-Signing in the Modules of the Linux Kernel,” Working paper, SIEPR,
Stanford University, Open Source Software Project
http://siepr.stanford.edu/programs/OpenSoftware_David/NSFOSF_Publications.html
Dedrick, J., and J. West, 2001. “Open Source Standardization: The Rise of Linux in the Network Era,” Knowledge,
Technology and Policy, 14 (2), 88–112.
Doms, M., and C. Forman, 2005. “Prices for Local Area Network Equipment,” Information Economics and Policy,
17(3), 365–388.
Downes, T., and S. Greenstein, 2002. “Universal Access and Local Internet Markets in the U.S.,” Research Policy,
31, 1035–1052.
Downes, T., and S. Greenstein, 2007. “Understanding Why Universal Service Obligations May Be Unnecessary:
The Private Development of Local Internet Access Markets.” Journal of Urban Economics, 62, 2–26.
Evans, D., A. Hagiu, and R. Schmalensee (2006). Invisible Engines: How Software Platforms Drive Innovation and
Transform Industries, MIT Press; Cambridge, Mass.
Federal Communications Commission, 2010a. National Broadband Plan, Connecting America,
http://www.broadband.gov/.
Federal Communications Commission, 2010b. “In the Matter of Preserving the Open Internet Broadband Industry
Practices,” GN Docket No. 09–191, WC Docket No. 07–52, December 23, 2010. http://www.fcc.gov/.
Forman, C., and A. Goldfarb, 2006. “Diffusion of Information and Communications Technology to Business,” in (ed.)
T. Hendershott, Economics and Information Systems, Volume 1, Elsevier, pp. 1–43.
Forman, C., A. Goldfarb, and S. Greenstein, 2005. “How did Location Affect Adoption of the Internet by Commercial
Establishments? Urban Density versus Global Village,” Journal of Urban Economics, 58(3), 389–420. (p. 30)
Fosfuri, A., M. Giarratana, and A. Luzzi, 2005. “Firm Assets and Investments in Open Source Software Products.”
Druid Working Paper No. 05–10. Copenhagen Business School.
Friedan, R., 2001. “The Potential for Scrutiny of Internet Peering Policies in Multilateral Forums,” in (eds.) B.
Compaine and S. Greenstein, Communications Policy in Transition, The Internet and Beyond, MIT Press;
Cambridge, Mass., 159–194.
Gawer, A., 2009, Platforms, Innovation and Competition, Northampton, Mass.: Edward Elgar.
Gawer, A., and M. Cusumano, 2002. Platform Leadership: How Intel, Microsoft and Cisco Drive Innovation. Boston,
Mass.: Harvard Business School Press.
Gawer, A., and R. Henderson, 2007. “Platform Owner Entry and Innovation in Complementary Markets: Evidence
from Intel.” Journal of Economics and Management Strategy, Volume 16 (1), 1–34.
Gilles, J., and R. Cailliau, 2000, How the Web Was Born, Oxford, UK: Oxford University Press.
Goldfarb, A., 2004. “Concentration in Advertising-Supported Online Markets: An Empirical Approach.” Economics of
Innovation and New Technology, 13(6), 581–594.
Goldstein, F., 2005. The Great Telecom Meltdown. Boston: Artech House.
Greenstein, S., 2006. “Wikipedia in the Spotlight.” Kellogg School of Management, case 5–306–507,
http://www.kellogg.northwestern.edu/faculty/kellogg_case_collection.aspx.
Greenstein, S., 2007a. “Economic Experiments and Neutrality in Internet Access Markets,” in (eds.) A. Jaffe, J.
Lerner and S. Stern, Innovation, Policy and the Economy, Volume 8. Cambridge, Mass.: MIT Press, pp. 59–109.
Greenstein, S., 2007b. “The Evolution of Market Structure for Internet Access in the United States,” in (eds.) W.
Aspray and P. Ceruzzi, The Commercialization of the Internet and its Impact on American Business. Cambridge,
Mass.: MIT Press, pp. 47–104.
Greenstein, S., 2009a. “Open Platform Development and the Commercial Internet.” In (ed.) A. Gawer, Platforms,
Innovation and Competition, Northampton, Mass.: Edward Elgar, pp. 219–250.
Greenstein, S., 2009b. “Glimmers and Signs of Innovative Health in the Commercial Internet,” Journal of
Telecommunication and High Technology Law, pp. 25–78.
Greenstein, S., 2010. “The Emergence of the Internet: Collective Invention and Wild Ducks.” Industrial and
Corporate Change. 19(5), 1521–1562.
Greenstein, S., 2011. “Nurturing the Accumulation of Innovations: Lessons from the Internet,” in (eds.) Rebecca
Henderson and Richard Newell, Accelerating Innovation in Energy: Insights from Multiple Sectors, University of
Chicago Press, pp. 189–224.
Greenstein, S., and R. McDevitt. 2009. “The Broadband Bonus: Accounting for Broadband Internet's Impact on U.S.
GDP.” NBER Working paper 14758. http://www.nber.org/papers/w14758.
Greenstein, S., and R. McDevitt, 2011. “Broadband Internet's Impact on Seven Countries,” in (ed.) Randy Weiss,
ICT and Performance: Towards Comprehensive Measurement and Analysis. Madrid: Fundación Telefónica, pp. 35–
52.
Greenstein, S., and J. Prince, 2007. “The Diffusion of the Internet and the Geography of the Digital Divide,” in (eds.)
R. Mansell, C. Avgerou, D. Quah, and R. Silverstone, Oxford Handbook on ICTs, Oxford University Press, pp. 168–
195. (p. 31)
Haigh, T., 2007. “Building the Web's Missing Links: Portals and Search Engines” in (eds.) W. Aspray and P. Ceruzzi,
The Internet and American Business, MIT Press.
Kahin, B., and B. McConnell, 1997. “Towards a Public Metanetwork; Interconnection, Leveraging and Privatization of
Government-Funded Networks in the United States,” in (eds.) E. Noam and A. Nishuilleabhain, Private Networks
Public Objectives, Elsevier, Amsterdam, the Netherlands. pp. 307–321.
Kahn, R., 1995. “The Role of Government in the Evolution of the Internet,” in (ed.) National Academy of
Engineering, Revolution in the U.S. Information Infrastructure. Washington, D.C.: National Academy Press, 13–24.
Kende, M., 2000. “The Digital Handshake: Connecting Internet Backbones,” Working Paper No. 32, Federal
Communications Commission, Office of Planning and Policy, Washington, D.C.
Kesan, J. P., and R. C. Shah, 2001. “Fool Us Once, Shame on You—Fool Us Twice, Shame on Us: What We Can
Learn from the Privatizations of the Internet Backbone Network and the Domain Name System,” Washington
University Law Quarterly, 79, 89–220.
Laffont, J.-J., J. S. Marcus, P. Rey, and J. Tirole, 2001. “Internet Peering,” American Economic Review, Papers and
Proceedings, 91, 287–292.
Laffont, J.-J., J. S. Marcus, P. Rey, and J. Tirole, 2003. “Internet Interconnection and the Off-net Pricing Principle,”
Rand Journal of Economics, 34(2), 370–390.
Leiner, B., V. Cerf, D. Clark, R. Kahn, L. Kleinrock, D. Lynch, J. Postel, L. Roberts, and S. Wolff, 2003. A Brief History
of the Internet, Version 3.32, Last revised, 10 December, 2003.
http://www.isoc.org/internet/history/brief.shtml, downloaded August, 2009.
Lerner, J., and J. Tirole, 2002. “Some Simple Economics of Open Source Software,” Journal of Industrial Economics,
50 (2), 197–234.
Lessig, L., 1999, Code and Other Laws of Cyber Space, New York: Basic Books.
Mackie-Mason, J., and J. Netz, 2007., “Manipulating Interface Standards as an Anticompetitive Strategy,” in (eds.)
Shane Greenstein and Victor Stango, Standards and Public Policy, Cambridge Press; Cambridge, Mass., pp. 231–
259.
Marcus, J. S., 2008. “IP-Based NGNs and Interconnection: The Debate in Europe,” Communications & Strategies,
No. 72, p. 17.
Marcus, J. S., and D. Elixmann, 2008. “Regulatory Approaches to Next Generation Networks (NGNs): An
International Comparison.” Communications and Strategies, 69 (1st quarter), 1–28.
Mockus, A., R. T. Fielding, and J. D. Herbsleb, 2005. “Two Case Studies of Open Source Software Development:
Apache and Mozilla,” in (ed.) J. Feller, Perspectives on Free and Open Source Software. Cambridge, MA: MIT Press;
Cambridge, pp 163–210.
Mowery, D. C., and T. S. Simcoe, 2002a. “The Origins and Evolution of the Internet,” in R. Nelson, B. Steil, and D.
Victor (eds.), Technological Innovation and Economic Performance. Princeton, N.J.: Princeton University Press,
229–264.
Mowery, D. C. and T. S. Simcoe, 2002b. “Is the Internet a U.S. Invention? An Economic and Technological History of
Computer Networking.” Research Policy, 31(8–9), 1369–1387.
Mueller, M.L., 2002. Ruling the Root: Internet Governance and the Taming of Cyberspace, Cambridge, Mass.: MIT
Press. (p. 32)
NTIA, 2010. “Exploring the Digital Nation: Home Broadband Internet Adoption in the United States,”
http://www.ntia.doc.gov/reports.html.
Nuechterlein, J. E., and P. J. Weiser, 2005. Digital Crossroads: American Telecommunications Policy in the
Internet Age, Cambridge, Mass.: MIT Press.
Odlyzko, A., 2010. “Bubbles, Gullibility, and Other Challenges for Economics, Psychology, Sociology, and
Information Sciences.” First Monday 15, no. 9.
http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/3142/2603
Ou, G., 2008. “A Policy Maker's Guide to Network Management. The Information Technology and Innovation
Foundation,” http://www.itif.org/index.php?id=205.
Oxman, J., 1999. The FCC and the Unregulation of the Internet. Working paper 31, Federal Communications
Commission, Office of Planning and Policy, Washington, D.C.
Partridge, C., 2008. “The Technical Development of Internet e-Mail,” IEEE Annals of the History of the Computing
30, pp. 3–29.
Quarterman, J.S., 1989. Matrix Computer Networks and Conferences, Bedford, Mass.: Digital Press.
Rosston, G., 2009. “The Rise and Fall of Third Party High Speed Access.” Information Economics and Policy 21, pp.
21–33.
Rosston, G., S. J. Savage, and D. Waldman, 2010. “Household Demand for Broadband Internet Service.” SIEPR Paper
09–008. http://siepr.stanford.edu/publicationsprofile/2109.
Russell, A.L., 2006. “Rough Consensus and Running Code and the Internet-OSI Standards War.” IEEE Annals of the
History of Computing, 28, pp. 48–61.
Saltzer, J.H., D.P. Reed, and D.D. Clark, 1984. “End-to-End Arguments in System Design,” ACM Transactions on
Computer Systems 2, pp. 277–288. An earlier version appeared in the Second International Conference on
Distributed Computing Systems (April, 1981), pp. 509–512.
Savage, S. J., and D. Waldman, 2004, “United States Demand for Internet Access,” Review of Network Economics
3, pp. 228–247.
Seamans, R., 2010. “Fighting City Hall: Entry Deterrence and New Technology Upgrades in Local Cable TV
Markets.” Working paper, NYU Stern School of Business, http://pages.stern.nyu.edu/~rseamans/index.htm.
Simcoe, T., 2007. “Delay and De jure Standardization: Exploring the Slow Down in Internet Standards
Development,” in (eds.) Shane Greenstein and Victor Stango, Standards and Public Policy, Cambridge, Mass.:
Cambridge Press, pp. 260–297.
Simcoe, T., 2010. “Standard Setting Committees: Consensus Governance for Shared Technology Platforms,”
Working Paper, Boston University, http://people.bu.edu/tsimcoe/index.html.
Stranger, G., and S. Greenstein, 2007. “Pricing in the Shadow of Firm Turnover: ISPs in the 1990s.” International
Journal of Industrial Organization, 26, pp. 625–642.
Strover, S., 2001. “Rural Internet Connectivity.” Telecommunications Policy, 25, pp. 331–347.
Von Burg, U., 2001. The Triumph of Ethernet: Technological Communities and the Battle for the LAN Standard,
Palo Alto, CA: Stanford University Press.
Von Hippel, E., 2005. “Open Source Software Projects as User Innovation Networks,” in (ed.) Joseph Feller, Brian
Fitzgerald, Scott A. Hissam, Karim Lakhani, Perspectives on Free and Open Software, Cambridge, Mass.: MIT Press;
pp. 267–278.
Von Schewick, B., 2010. Internet Architecture and Innovation, Cambridge, Mass.: MIT Press. (p. 33)
Waldrop, M., 2001. The Dream Machine: J.C.R. Licklider and the Revolution that Made Computing Personal, New
York: Penguin.
Wallsten, S., 2009. “Understanding International Broadband Comparison,” Technology Policy Institute,
http://www.techpolicyinstitute.org/publications/topic/2.html
West, J., and S. Gallagher, 2006. Patterns of Innovation in Open Source Software, in (ed.) Hank Chesbrough, Wim
Haverbeke, Joel West, Open Innovation: Researching a New Paradigm, Oxford University Press: Oxford, pp. 82–
108.
Zittrain, J., 2009. “The Future of the Internet and How to Stop It, Creative Commons Attribution—Noncommercial
Share Alike 3.0 license,” www.jz.org.
Notes:
(1.) This is explained in considerable detail in Ou (2008).
(2.) There are many fine books about these developments, including Abbate (1999), Leiner et al. (2003), and
Waldrop (2001).
(3.) Packet switching had been discussed among communications theorists since the early 1960s, as well as by
some commercial firms who tried to implement simple versions in their frontier systems. As has been noted by
others, the ideas behind packet switching had many fathers: Paul Baran, J. C. R. Licklider, Douglas Engelbart, and Len
Kleinrock. There are many accounts of this. See, e.g., Quarterman (1989), Abbate (1999), Leiner et al. (2003), or
Waldrop (2001).
(4.) These advances have been documented and analyzed by many writers, including, e.g., Quarterman (1989),
Abbate (1999), Leiner et al. (2003), and Waldrop (2001).
(5.) An extensive explanation of TCP/IP can be found in many publications. Summarized simply, TCP determined a
set of procedures for moving data across a network and what to do when problems arose. If there were errors or
specific congestion issues, TCP contained procedures for retransmitting the data. While serving the same function
as a postal envelope and address, IP also shaped the format of the message inside. It specified the address for the
packet, its origin and destination, a few details about how the message format worked, and, in conjunction with
routers, the likely path for the packet toward its destination. See, e.g., Leiner et al. (2003).
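For readers who want to see this division of labor concretely, the following minimal sketch (in Python, with example.com used purely as a placeholder host) shows an application naming an IP endpoint and port while the operating system's TCP/IP stack handles packetization, addressing, and retransmission beneath the socket interface.

```python
# Minimal sketch: the application names an endpoint (host and port); the
# TCP/IP stack below this API breaks the request into packets, addresses and
# routes them, and retransmits anything lost so the reply arrives in order.
import socket

with socket.create_connection(("example.com", 80), timeout=5) as conn:  # TCP over IP
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = conn.recv(4096)  # delivered in order; retransmission is invisible here
print(reply.decode("latin-1").splitlines()[0])
```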
(6.) In part, this was due to a DOD requirement that all Unix systems do so, but it also arose, in part, because most
Unix users in research environments found this feature valuable.
(7.) Moreover, a set of fail-safes had been put in place to make sure that one error message did not trigger
another. That is, the system avoided the nightmare of one error message generating yet another error message,
which generated another and then another, thus flooding the system with never-ending messages.
(8.) The paper that defined this phrase is commonly cited as Saltzer et al. (1984).
(9.) As stated by Blumenthal and Clark (2001) in a retrospective look: “When a general-purpose system (for
example, a network or an operating system) is built and specific applications are then built using this system (for
example, email or the World Wide Web over the Internet), there is a question of how these specific applications
and their required supporting services should be designed. The end-to-end arguments suggest that specific
application-level functions usually cannot, and preferably should not, be built into the lower levels of the system—
the core of the network.”
(10.) End-to-end also differed from most large computing systems at the time, such as mainframes, which put the
essential operations in the central processing unit. When such computers became situated in a networked
environment, there was little else for the terminals to do. They became known as dumb terminals. Contrast with
end-to-end, which, colloquially speaking, located intelligence at the edges of the system, namely, in the clients.
(11.) The economics behind this “surprise” is discussed in some detail in Greenstein (2011).
(12.) See, e.g., Aizcorbe et al. (2007), Doms and Forman (2005), or the discussion in Forman and Goldfarb (2006).
(13.) See the discussions in, e.g., Kahn (1995), Kahin and McConnel (1997), and Kesan and Shah (2001), or
Greenstein (2010).
(14.) On many of the challenges during the transition, see Abbate (1999) or Kesan and Shah (2001).
(15.) For longer discussions about the origins and economic consequences, see, e.g., Mowery and Simcoe (2002a,
b), and Greenstein (2011).
(16.) The term “mesh” is due to Besen et al. (2001).
(17.) See Friedan (2001).
(18.) For insights into the incentives to conduct traffic and come to peering agreements, see Besen et al. (2001)
and Laffont et al. (2001, 2003).
(19.) For more on overlays, see Clark et al. (2006).
(20.) See the evidence in Rosston (2009).
(21.) This topic has received attention ever since the commercial Internet first began to blossom. See, e.g., Strover
(2001) or Downes and Greenstein (2002). A summary can be found in Greenstein and Prince (2007). Also see the
discussion in, e.g., the Federal Communications Commission (2010a).
(22.) See, e.g., Berners-Lee and Fischetti (1999), and Gilles and Cailliau (2000).
(23.) For more on this story, see Cusumano and Yoffie (2000).
(24.) Andrew Odlyzko maintains an accessible summary of studies and forecasts of data traffic at
http://www.dtc.umn.edu/mints/.
(25.) Anderson and Wolff (2010) present a very accessible version of this argument.
(26.) Longer explanations for these events can be found in Greenstein (2007a, b).
(27.) For a longer discussion, see, e.g., Downes and Greenstein (2002, 2007).
(28.) For a discussion, see, e.g., Goldfarb (2004), Haigh (2007).
(29.) High usage could happen for a variety of reasons. For example, some technical users simply enjoyed being
online for large lengths of time, surfing the growing Internet and Web. Some users began to operate businesses
from their homes, remaining online throughout the entire workday. Some users simply forgot to log off, leaving their
computers running and tying up the telephone line supporting the connection to the PC. And some users grew more
experienced, and found a vast array of activities more attractive over time.
(30.) One survey of pricing contracts in May 1996 found that nearly 75 percent of the ISPs offering 28K service (the
maximum dial-up speed at the time) offered a limited plan in addition to their unlimited plan. That dropped to nearly
50 percent by August. By March of 1997 it was 33 percent, 25 percent by January of 1998, and less than 15
percent by January of 1999. For a summary see Stranger and Greenstein (2007).
(31.) During the 1990s most cable companies sold access to the line directly to users but made arrangements with
other firms, such as Roadrunner or @home, to handle traffic, routing, management, and other facets of the user
experience. Some of these arrangements changed after 2001, either due to managerial preferences, as when
@home lost its contract, or due to regulatory mandates to give users a choice over another ISP, as occurred after
the AOL/Time Warner merger. See Rosston (2009).
(32.) Download speed may not reach the advertised maxima. In cable networks, for example, congestion issues
were possible during peak hours. In DSL networks, the quality of service could decline significantly for users far
away from the central switch. The results are difficult to measure with precision.
(33.) The descriptive results were published in reports authored by staff at the NTIA. See NTIA (2010).
(34.) In addition to the surveys by Pew, also see, e.g., Savage and Waldman (2004), Rosston et al. (2010), and the
summary of Greenstein and Prince (2007). For surveys of business adoption and its variance over geography, see
Forman and Goldfarb (2006) and Forman et al. (2005).
(35.) See, e.g., Crandall (2005).
(36.) In many areas, fiber to the home was prohibitively expensive for almost all users except businesses, and
even then, it was mostly used by businesses in dense urban areas, where the fiber was cheaper to lay. Fiber to the
home has recently become cheaper and may become a viable option sometime in the future. See Crandall (2005).
(37.) OECD Broadband Portal, http://www.oecd.org/sti/ict/broadband, Table 1d.
(38.) OECD Broadband Portal, http://www.oecd.org/sti/ict/broadband, Table 1k. For a critique of the US standings in
these rankings and insights into how to correct misunderstandings, see Wallsten (2009). Perhaps the biggest issue
is the denominator, which is per capita. However, a household tends to subscribe to only one line. Since
US average household size is much larger than average household size in other countries, the per capita figure gives the
false impression that US residences have less access to broadband than is actually the case.
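A small worked example (with invented numbers) makes the denominator problem concrete: two countries with identical household adoption rates report different subscriptions per 100 inhabitants simply because their households differ in size.

```python
# Hypothetical numbers illustrating the per-capita denominator problem:
# identical household adoption, different household sizes, different rankings.

def subs_per_100_people(household_adoption, avg_household_size):
    """Broadband subscriptions per 100 inhabitants, one line per subscribing household."""
    return 100 * household_adoption / avg_household_size

print(subs_per_100_people(0.65, 2.6))  # larger households  -> 25.0 per 100
print(subs_per_100_people(0.65, 2.1))  # smaller households -> about 31.0 per 100
```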
(39.) These come from Greenstein and McDevitt (2011).
(40.) For sources, see Greenstein and McDevitt (2009, 2011).
(41.) See e.g., Federal Communications Commission (2010b), or Marcus (2008).
(42.) See e.g., http://www.wispa.org/. Also see http://wispassoc.com/.
(43.) For interpretations of platform incentives, see e.g., Gawer (2009) or Evans, Hagiu, and Schmalensee (2006).
(44.) This is distinct from an engineering notion of a platform. The designers of the Internet deliberately built what
they regarded as a computing platform. The inventors employed what they regarded as a balanced and
standardized bundle of components to regularly deliver services. This balance reflected a long-standing and
familiar principle in computer science. The inventors and DARPA administrators anticipated a benefit from this
design: others would build applications, though these inventors did not presume to know what those applications
would do specifically.
(45.) See, e.g., Bresnahan and Greenstein (1999), or Dedrick and West (2001).
(46.) See, e.g., Bresnahan (1999).
(47.) See, e.g., Gawer and Cusumano (2002) or Gawer and Henderson (2007).
(48.) Sometimes Google retains many proprietary features, particularly in its search engine, which also supports a
lucrative ad-placement business. Google takes action to prevent anyone from imitating it. For example, the
caching, indexing, and underlying engineering tweaking activities remain hidden from public view.
(49.) See e.g., Cusumano and Yoffie (2000), Bresnahan and Yin (2007), or Bresnahan et al. (2011).
(50.) For an account of this decision, see Burgelman (2007).
(51.) For a book with this theme, see, e.g., Zittrain (2009), Lessig (1999), or Von Schewick (2010).
(52.) Anderson and Wolff (2010) or Zittrain (2009).
(53.) Once again, this case is explained in detail in Greenstein (2009b).
(54.) Numerous computer scientists and networking experts have pointed out that both Sprint and Cogent could
have adjusted their routing tables in advance to prevent users from being cut off. Hence, there is a sense in which
both parties bear responsibility for imposing costs on their users.
(55.) It appears that Sprint's capitulation to its user base is, however, evidence that Sprint's management does not
have the ability to ignore its users for very long.
(56.) This case is explained in detail in Greenstein (2009b).
(57.) For summary, see Adam Rothschild, December 2, 2010, http://www.voxel.net/blog/2010/12/peering-disputescomcast-level-3-and-you.
(58.) This is especially true of the Internet Architecture Board and IETF, before it moved under the auspices of the
Internet Society in 1992, where it remains today. See, e.g., Abbate (1999) or Russell (2006).
(59.) See Abbate (1999) for a history of the design of these protocols. See Partridge (2008) for a history of the
processes that led to the development of email, for example.
(60.) Simcoe (2007, 2010) provides an overview of the operations at IETF and its changes as it grew.
(61.) For further discussion see Farrell and Simcoe, chapter 2 of this volume.
(62.) For example, see, e.g., Chiao et al. (2007).
(63.) These contrasts are further discussed in Greenstein (2009a).
(64.) See, e.g., Russell (2006).
(65.) The latter forbearance was deliberate. On the lack of interference in the design of the Ethernet, see von Burg
(2001). On the design of 56K modems, see Augereau, Greenstein, and Rysman (2006). On the lack of regulation for
network interconnection, see the full discussions in, e.g., Oxman (1999) or Kende (2000) or the account in
Nuechterlein and Weiser (2005). More recent experience has departed from these trends, particularly in principles
for regulating last-mile infrastructure. A summary of these departures is in Greenstein (2007b).
(66.) For a longer explanation of these origins, see, e.g., Kesan and Shah (2001) or Mueller (2002).
(67.) There is considerable writing about the growth of the production of Linux software and from a variety of
perspectives. See, e.g., Dalle, David, Ghosh, and Wolak (2004), Lerner and Tirole (2002), Von Hippel (2005), West
and Gallagher (2006), or Arora and Bokhari (2007). For an account and analysis of how many firms got on the
Linux bandwagon, see, e.g., Dedrick and West (2001) or Fosfuri, Giarratana and Luzzi (2005). For further
discussion see chapter 17 by Johnson, this volume.
(68.) For more on the history and operation of Apache, see, e.g., Mockus, Fielding, and Herbsleb (2005).
(69.) For more information, see Greenstein (2006).
(70.) For further discussion, see the chapter 3 by Hagiu, this volume.
(71.) The figures come from the website maintained by the Creative Commons, http://creativecommons.org/.
(72.) For one interesting account of the changing ratio of “suits to beards” at the IETF, see Simcoe (2007, 2010).
For an account of the manipulation of hearings at the IEEE, see Mackie-Mason and Netz (2007).
(73.) Federal Communications Commission (2010a).
(74.) See Marcus (2008) and Marcus and Elixmann (2008).
(75.) The overinvestment in Internet infrastructure in the late 1990s had many causes. These are analyzed by,
among others, Goldstein (2005), Greenstein (2007b), and Odlyzko (2010).
(76.) Statistics in the National Broadband Plan, FCC (2010a), seem to indicate that 3 percent of US homes subscribe
to this service as of the end of 2009.
(77.) The economics behind the high cost of providing broadband in low-density locations is explained in detail in
Strover (2001), Crandall (2005), and Greenstein and Prince (2007).
(78.) See Marcus (2008) and Marcus and Elixmann (2008).
(79.) For a summary, see Goldstein (2005), Nuechterlein and Weiser (2005), Greenstein (2010), and Federal
Communications Commission (2010b).
(80.) See, e.g., Seamans (2010).
(81.) See, e.g., Baumol et al. (2006).
Shane Greenstein
Prof Shane Greenstein is Kellogg Chair in Information Technology at the Kellogg School of Management, Northwestern University,
Evanston, USA
Four Paths to Compatibility
Oxford Handbooks Online
Four Paths to Compatibility
Joseph Farrell and Timothy Simcoe
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Online Publication Date: Nov
2012
Subject: Economics and Finance, Economic Development
DOI: 10.1093/oxfordhb/9780195397840.013.0002
Abstract and Keywords
This article describes the economics of standardization, and the costs and benefits of alternative ways to achieve
compatibility. Four paths to compatibility, namely standards wars, negotiations, dictators, and converters, are
explained. These four paths to compatibility have different costs and benefits. Standard setting organizations are a
heterogeneous set of institutions connected by their use of the consensus process. Government involvement may
be appropriate when private control of an interface would result in substantial market power. Converters are attractive
because they preserve flexibility for implementers. Compatibility standards can emerge through market
competition, negotiated consensus, converters, or the actions of a dominant firm. Developing a better
understanding of how a particular path is selected is a crucial first step toward measuring the cost-benefit
tradeoffs across paths, and adjudicating debates over the efficiency of the selection process.
Keywords: standards wars, negotiations, dictators, converters, compatibility standards, standardization, standard setting organizations
1. Introduction
Compatibility standards are design rules that promote product interoperability, such as the thread size for
mechanical nuts and bolts or the communication protocols shared by all Internet devices. Products that adhere to
standards should work together well, which produces a range of benefits: users may share information, or “mix
and match” components; the cost of market entry declines; and there is a division of labor, thus enabling
specialization in component production and innovation. This chapter describes four paths to compatibility—
standards wars, negotiations, dictators, and converters—and explores how and when they are used, as
alternatives or in combination.
While product interoperability may pose engineering challenges, we focus on issues of economic incentive that
arise when its costs and benefits are not evenly distributed. For example, firms that control a technology platform
may resist compatibility with other systems or standards that could reduce switching costs for their installed base.
Specialized component producers may fight against standards that threaten to “commoditize” their products. Even
when support for compatibility is widespread, rival firms may advocate competing designs that confer private
benefits because of intellectual property rights, lead-time advantages, or proprietary complements.
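A stylized way to see this mix of common and conflicting interests is the familiar "battle of the sexes" coordination game. The sketch below uses invented payoffs: both firms prefer any common standard to incompatibility, but each earns a premium when its own design is adopted, so there are two pure-strategy equilibria and the firms disagree about which is better.

```python
# Illustrative sketch with made-up payoffs: standards choice as a coordination
# game. Coordinating on either design beats incompatibility, but each firm
# prefers the equilibrium built around its own design.

ACTIONS = ["A's design", "B's design"]
PAYOFFS = {  # (firm A action, firm B action) -> (payoff to A, payoff to B)
    ("A's design", "A's design"): (3, 2),
    ("B's design", "B's design"): (2, 3),
    ("A's design", "B's design"): (0, 0),   # incompatibility: shared benefits lost
    ("B's design", "A's design"): (0, 0),
}

def pure_nash(payoffs):
    """Return action profiles where neither firm gains by deviating unilaterally."""
    eq = []
    for a in ACTIONS:
        for b in ACTIONS:
            ua, ub = payoffs[(a, b)]
            best_a = all(ua >= payoffs[(a2, b)][0] for a2 in ACTIONS)
            best_b = all(ub >= payoffs[(a, b2)][1] for b2 in ACTIONS)
            if best_a and best_b:
                eq.append((a, b))
    return eq

print(pure_nash(PAYOFFS))  # both "everyone adopts A" and "everyone adopts B" are equilibria
```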
Given this mix of common and conflicting interests, we focus on four natural ways to coordinate design decisions
(in the sense of achieving compatibility). The first (p. 35) is decentralized choice, which can yield coordinated
outcomes when network effects are strong, even if the resulting process is messy. Negotiations are a second
coordination mechanism. In particular, firms often participate in voluntary standard setting organizations (SSOs),
which seek a broad consensus on aspects of product design before endorsing a particular technology. A third
route to compatibility is to follow the lead of an influential decision maker, such as a large customer or platform
leader. Finally, participants may abandon efforts to coordinate on a single standard and instead patch together
partial compatibility through converters or multihoming.1
These four paths to compatibility have different costs and benefits, which can be measured in time and resources,
the likelihood of successful coordination for compatibility, and the ex post impact on competition and innovation.
Whether these complex welfare trade-offs are well internalized depends on how (and by whom) the path to
compatibility is chosen. A full treatment of the compatibility problem would specify the selection process and
quantify the relative performance of each path. In practice, although theory clarifies the potential trade-offs, we
have limited empirical evidence on the comparative costs and benefits of each path, or the precise nature of the
selection process.
Sometimes the choice of a particular path to compatibility is a more-or-less conscious decision. For example, firms
can decide whether to join the deliberations of an SSO or follow the lead of a dominant player. A dominant player
can decide whether to commit to a standard and expect (perhaps hope) to be followed, or defer to a consensus
negotiation. As these examples suggest, it can be a complex question who, if anyone, “chooses” the mechanism used to coordinate, if indeed any mechanism is chosen. Some market forces push toward efficiency, but efficiency is not guaranteed. For example, a
platform leader has a general incentive to dictate efficient interface standards or to allow an efficient evolution
process, but that incentive may coexist with, and perhaps be overwhelmed by, incentives to stifle ex post
competition. Likewise, competition among SSOs may or may not lead them toward better policies, and standards
wars may or may not tip toward the superior platform.
Sometimes firms will start down one path to compatibility and then veer onto another. For instance, a decentralized
standards war may be resolved by resort to an SSO or through the intervention of a dominant firm. Slow
negotiations within an SSO can be accelerated by evidence that the market is tipping, and platform sponsors may
promote complementary innovation by using an SSO to open parts of their platform. Although theory suggests that
certain “hybrid paths” can work well, we know rather little about how different coordination mechanisms
complement or interfere with one another.
This chapter begins by explaining something familiar to many readers: how the choice of interoperability standards
resembles a coordination game in which players have a mix of common and conflicting incentives. In particular, it
explains how compatibility produces broadly shared benefits, and discusses several reasons that firms may
receive private benefits from coordinating on a preferred technology. Section 3 describes costs and benefits of our
four paths to compatibility. Section 4 examines the selection process and the role of “hybrid” paths to
compatibility. Section 5 concludes.
(p. 36) 2. Costs and Benefits of Compatibility
When all influential players favor compatibility, creating or upgrading standards involves a coordination problem.
When there is but one technology, or when participants share common goals and notions of quality, the problem is primarily one of communication and can be solved by holding a meeting, or by appointing a focal adopter whom all agree to follow. But if there are several technologies to choose from and participants disagree about their relative merit, the problem turns from a pure coordination game into a battle of the sexes, where players may try to “win” by
arguing for, or committing to, their preferred outcome.
Figure 2.1 illustrates the basic dilemma in a symmetric two-player game. As long as C > B, the benefits of compatibility outweigh the payoffs from uncoordinated adoption of each player's preferred technology, and the game has two pure-strategy Nash equilibria: both adopt A, and both adopt B. Each player gains some additional
private benefit (equal to D) in the equilibrium that selects their preferred technology. When these private benefits
are small (D ≈ 0), many coordination mechanisms would work well. But as D grows large, players will push hard for
their preferred equilibrium. Whether these efforts to promote a particular outcome are socially productive depends
on a variety of factors about the players’ available actions and the details of the equilibrium selection process.
Later, we assume that D > 0, and compare the costs and benefits of four broad methods for choosing an
equilibrium. But first, this section explains why the payoffs in Figure 2.1 can be a sensible way to model the choice
of compatibility standards, particularly in the information and communications technology (ICT) sector.
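For concreteness, one symmetric payoff matrix consistent with this description is sketched below; the exact entries of Figure 2.1 may differ. The row player prefers technology A and the column player prefers technology B, and each cell lists the payoffs (row, column):

\[
\begin{array}{c|cc}
 & A & B \\ \hline
A & (C+D,\; C) & (B,\; B) \\
B & (B,\; B) & (C,\; C+D)
\end{array}
\]

With C > B and D > 0, both (A, A) and (B, B) are strict pure-strategy Nash equilibria, and each player collects the extra payoff D only in the equilibrium that selects its preferred technology.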
Figure 2.1 Compatibility Choice as a Coordination Game.
The benefits of compatibility (C-B in Figure 2.1) come in two flavors: horizontal and vertical. Horizontal
compatibility is the ability to share complements across multiple platforms, and we call a platform horizontally open
if its installed base of complements can be easily accessed from rival systems. Many parts of the Internet are in this
sense horizontally open. For example, web pages can be displayed on (p. 37) competing web browsers, and rival
instant messenger programs allow users to chat. Video game consoles and proprietary operating systems, such as
Microsoft Windows, by contrast, are horizontally closed: absent further action (such as “porting”), an application
written for one is not usable on others.
These distinctions can be nuanced. For example, what if a platform's set of complements is readily available to
users of rival platforms, but at an additional charge, as with many banks’ ATM networks? Similarly, Microsoft may
have choices (such as the degree of support offered to cross-platform tools like Java) that affect, but do not fully
determine, the speed and extent to which complements for Windows become available on other platforms.
Benefits of horizontal compatibility include the ability to communicate with a larger installed base (direct network
effects) and positive feedback between the size of an installed base and the supply of complementary goods
(indirect network effects). Katz and Shapiro (1985) analyzed oligopoly with firm-specific demand-side increasing
returns. A more recent literature on many-sided platforms (e.g. Rochet and Tirole 2003; Parker and Van Alstyne
2005; Weyl 2010) extends the analysis of indirect network effects by allowing externalities and access prices to
vary across different user groups.2
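In reduced form (the notation here is illustrative rather than drawn from these papers), a user's gross benefit from platform k with a homogeneous direct network effect can be written

\[
u_k = a_k + v\, n_k, \qquad v > 0,
\]

where a_k is the platform's stand-alone quality and n_k its installed base; indirect network effects arise when the supply of complements m_k is itself increasing in n_k, so that u_k = a_k + w\, m_k(n_k) with w > 0.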
Vertical compatibility is the ability of those other than the platform sponsor to supply complements for the system.
We call a platform vertically open if independent firms can supply complements without obtaining a platform
leader's permission.3 For example, the Hush-a-Phone case (238 F.2d 266, 1956), and the FCC's later Carterfone
decision (13 F.C.C.2d 420) opened the U.S. telephone network to independently supplied attachments such as
faxes, modems, and answering machines. Many computing platforms, including Microsoft Windows, use vertical
openness to attract independent software developers. Like horizontal openness, vertical openness can be a matter
of degree rather than a sharp distinction. For instance, a platform leader may offer technically liberal access
policies but charge access fees.
Vertical compatibility produces several types of benefits. There are benefits from increased variety when vertical
compatibility allows users to “mix and match” components (Matutes and Regibeau 1988). Vertical openness also
can reduce the cost of entry, strengthening competition in complementary markets. Finally, vertical compatibility
leads to a “modular” system architecture and division of innovative labor. Isolating a module that is likely to
experience a sustained trajectory of improvements allows other components to take advantage of performance
gains while protecting them from the cost of redesign. And when the locus of demand or the value of
complementary innovations is highly uncertain, modularity and vertical openness facilitate simultaneous design
experiments (Bresnahan and Greenstein 1999; Baldwin and Clark 2000).
The benefits of horizontal or vertical compatibility are often broadly shared but need not be symmetric: there can
also be private benefits of having a preferred technology become the industry standard. Such private benefits,
labeled D in Figure 2.1, often lead to conflict and coordination difficulties in the search for compatibility.
(p. 38) One important source of conflict is the presence of an installed base. Upgrading an installed base can be
costly, and firms typically favor standards that preserve their investments in existing designs. Moreover, platform
leaders with a large installed base will favor designs that preserve or increase switching costs, while prospective
entrants push to reduce them. For example, in its US antitrust case, Microsoft was found liable for using illegal tactics
to prevent Windows users, developers, and OEMs from migrating to the independent standards embodied in the
Netscape browser and Java programming language.
Design leads are another source of conflict. Short ICT product life cycles leave firms a limited window of
opportunity to capitalize on the demand unleashed by a new standard, and first-mover advantages can be
important. Thus, firms may try to block or delay a new standard if rivals have a significant lead at implementation.
DeLacey et al. (2006) describe such efforts in the context of Wi-Fi standards development.
In some cases, there is conflict over the location of module boundaries, or what engineers call the “protocol
stack.” Since compatibility often promotes entry and competition, firms typically prefer to standardize components
that complement their proprietary technology, but leave room for differentiation in areas where they have a
technical edge. For example, Henderson (2003) describes how the networking technology start-up Ember allegedly
joined several SSOs to prevent new standards from impinging on its core technology.
Conflicts can also emerge when firms own intellectual property rights in a proposed standard, which they hope to
license to implementers or use as leverage in future negotiations. Lerner, Tirole, and Strojwas (2003) show that
nearly all “modern” patent pools are linked to compatibility standards, and Simcoe (2007) documents a rapid
increase in intellectual property disclosures in the formal standard-setting process. While data on licensing are
scant, Simcoe, Graham, and Feldman (2009) show that patents disclosed in the formal standards process have an
unusually high litigation rate.
Finally, conflicting interests can amplify technological uncertainty. In particular, when technical performance is
hard to measure, firms and users will grow more skeptical of statements from self-interested participants about the
quality of their favored design.
3. Four Paths to Compatibility
Given this mix of conflict, common interest, and incomplete information, choosing compatibility standards can be a
messy process. This section considers the performance of four paths to compatibility—standards wars, SSOs,
dictators, and converters—in terms of the probability of achieving compatibility, the expected time and resource
costs, and the implications for ex post competition and innovation. (p. 39) We find that economic theory helps
articulate some of the complex trade-offs among these paths, but there is little systematic evidence.
3.1. Standards Wars
Standards wars can be sponsored or unsponsored; for brevity we focus here on the sponsored variety, in which
proponents of alternative technologies seek to preempt one another in the marketplace, each hoping that
decentralized adoption will lead to their own solution becoming a de facto standard through positive feedback and
increasing returns. Standards wars have broken out over video formats, modem protocols, Internet browsers, and
transmission standards for electricity and cellular radio. These wars can be intense when horizontally incompatible
platforms compete for a market with strong network effects, which they expect to tip toward a single winner who will
likely acquire market power. Much has been written about the tactics and outcomes in such wars, but we do not
attempt to discuss them comprehensively here, only to remind the reader of some of the dynamics.4
Standards wars often involve a race to acquire early adopters and efforts to manipulate user expectations, as
described in Besen and Farrell (1994) or Shapiro and Varian (1998). Preemption is one strategy for building an
early lead in an adoption race. Another strategy is to aggressively court early adopters with marketing, promotions,
and pricing. Firms may also work to influence users’ expectations regarding the likely winner, since these beliefs
may be self-fulfilling.5
Firms that fall behind in a race for early adopters or expectations may use backward compatibility or bundling to
catch up. Backward compatibility jump-starts the supply of complements for a new platform. For instance, many
video game platforms can play games written for older consoles sold by the same firm. Bundling promotes the
adoption of new standards by linking them to existing technology upgrades. For example, Sony bundled a Blu-ray
disc player with the Playstation game console to promote that video format over HD-DVD, and Bresnahan and Yin
(2007) argue that Microsoft took advantage of the Windows upgrade cycle to overtake Netscape in the browser
wars.
Given the range of tactics used in a standards war, does decentralized technology adoption provide an attractive
route to coordination? One social cost is that it will often lead to the emergence of a horizontally closed platform.
While one might question the direction of causality (perhaps intractable conflicts over horizontal interoperability
lead to standards wars), alternative paths to coordination may produce more ex post competition and reduce the
risk of stranded investments.
The economic logic of standards wars seems consistent with concerns that markets may “tip” prematurely toward
an inferior solution. While many cite the QWERTY keyboard layout (the standard in English-language keyboards) as
an example (e.g. David 1990), Liebowitz and Margolis (1990) dispute the empirical evidence and suggest that
markets will typically coordinate on the best available technology, as long as the benefits of changing platforms
outweigh any switching (p. 40) costs. It is difficult to find natural experiments that might resolve this debate, for
example by randomly assigning an early lead in settings where there are clear differences in platform quality. But
even if standards wars typically “get it right” in terms of selecting for quality, coordination problems may affect the
timing of standards adoption.6
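The tipping logic can be illustrated with a simple sequential-adoption simulation in the spirit of Arthur (1989). The sketch below is purely illustrative: the functional form, parameter values, and variable names are assumptions for exposition, not taken from any of the studies cited above.

```python
import random

def simulate_adoption(n_adopters=2000, quality_a=1.00, quality_b=1.05,
                      network_weight=0.02, seed=None):
    """Sequential adoption with increasing returns (illustrative only).

    Each arriving adopter picks the technology with the higher payoff, where the
    payoff is stand-alone quality plus a term that increases with that technology's
    installed base, plus an idiosyncratic taste shock. Technology B has the higher
    stand-alone quality, but an early run of adopters choosing A can make A's
    installed-base lead decisive, locking the market in on the inferior standard.
    """
    rng = random.Random(seed)
    installed = {"A": 0, "B": 0}
    for _ in range(n_adopters):
        payoff_a = quality_a + network_weight * installed["A"] + rng.gauss(0, 0.5)
        payoff_b = quality_b + network_weight * installed["B"] + rng.gauss(0, 0.5)
        installed["A" if payoff_a >= payoff_b else "B"] += 1
    return installed

# Across many simulated markets, a nontrivial share tips to the lower-quality technology A.
runs = [simulate_adoption(seed=s) for s in range(500)]
share_locked_on_a = sum(r["A"] > r["B"] for r in runs) / len(runs)
print(f"Share of runs locked in on the inferior technology A: {share_locked_on_a:.2f}")
```

Raising network_weight relative to the quality gap makes lock-in on the inferior technology more likely, which is the sense in which strong network effects can allow early, essentially random adoption decisions to trump quality.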
Optimists argue that standards wars are speedy, since participants have strong incentives to race for early
adopters, and that fierce ex ante competition offsets any social cost of ex post incompatibility. But competition for
early adopters does not always take the form of creating and transferring surplus, and its benefits must be weighed
against the costs of stranded investments in a losing platform. Moreover, the uncertainty created by a standards
war may cause forward-looking users, who fear choosing the losing platform, to delay commitments until the battle
is resolved. For example, Dranove and Gandal (2003) find that preannouncement of the DivX format temporarily
slowed the adoption of the digital video disc (DVD). Augereau, Greenstein, and Rysman (2006) suggest that the
standards war in 56K modems also delayed consumer adoption.
When it is costly to fight a standards war, participants may seek an escape route, such as some type of truce.7 For
example, the 56K modem standards war ended in adoption of a compromise protocol incorporating elements of
both technologies. The battle between code-division multiple access (CDMA) and time division multiple access
(TDMA) cellular phone technology ended in a duopoly stalemate, with each standard capturing a significant share
of the global market.
Ironically, these escape routes and stalemates illustrate a final strength of decentralized adoption as a path to
compatibility: it can reveal that network effects are weak, or that technologies initially perceived as competing
standards can ultimately coexist by serving different applications. For example, Bluetooth (IEEE 802.15) was
conceived as a home-networking standard, but ceded that market to Wi-Fi (IEEE 802.11) and is now widely used in
short-range low-power devices, such as wireless headsets, keyboards, and remote controls. Similarly, the plethora
of digital image formats (JPEG, GIF, TIFF, PNG, BMP, etc.) reflect trade-offs between image quality and compression,
as well as compatibility with specific devices. Since “war” is a poor metaphor for the process of matching
differentiated technology to niche markets, Updegrove (2007) has proposed the alternative label of “standards
swarms” for settings where network effects are weak relative to the demand for variety.
3.2. Standard Setting Organizations
One alternative to standards wars is for interested parties to try to coordinate through negotiation. This process is
often called formal or de jure standard setting, and typically occurs within consensus standard setting
organizations.
There are hundreds of SSOs, and many of these nonprofit institutions develop standards for safety and
performance measurement, as well as product (p. 41) compatibility.8 We use a broad definition of SSO that
includes globally recognized “big I” standard setters, such as the International Telecommunication Union (ITU) and the International Organization for Standardization (ISO); private consortia that manage a particular platform, such as
the Internet Engineering Task Force (IETF) and World Wide Web Consortium (W3C); and smaller consortia that
focus on a particular technology, such as the USB Forum or the Blu-ray Disc Association.9 This definition could
even be stretched to include collaborative product-development groups, such as open-source software
communities. While the largest SSOs have hundreds of subcommittees and maintain thousands of specifications,
small consortia can resemble joint ventures, wherein a select group of firms develop and cross-license a single
protocol under a so-called promoter-adopter agreement.
Standards practitioners typically distinguish between consortia and “accredited” standards developing
organizations (SDOs). SDOs sometimes receive preferential treatment in trade, government purchasing, and
perhaps antitrust in return for adhering to best practices established by a national standards agency, such as the
American National Standards Institute (ANSI).10 Table 2.1 hints at the size and scope of formal standard-setting in
the United States by counting entries in the 2006 ANSI catalog of American National Standards and listing the 20
largest ANSI-accredited SDOs.11
SSOs use a consensus process to reach decisions. Though definitions vary, consensus typically implies support
from a substantial majority of participants. For example, most accredited SDOs require a super-majority vote and a
formal response to any “good faith” objections before approving a new standard. Since SSOs typically lack
enforcement power, this screening process may serve as a signal of members’ intentions to adopt a standard, or
an effort to sway the market's beliefs. Rysman and Simcoe (2008) provide some empirical evidence that SSOs’
nonbinding endorsements can promote technology diffusion by studying citation rates for US patents disclosed in
the standard-setting process, and showing that an SSO endorsement leads to a measurable increase in forward
citations.
Beyond using a loosely defined consensus process and relying on persuasion and network effects to enforce their
standards, SSOs’ internal rules and organization vary widely. Some are open to any interested participant, while
others charge high fees and limit membership to a select group of firms. Some SSOs have a completely transparent
process, whereas others reveal little information. Some SSOs require members to grant a royalty-free license to
any intellectual property contained in a standard, whereas others are closely aligned with royalty-bearing patent
pools. There has been little empirical research on the internal organization of SSOs, but Lemley (2002) and Chiao,
Lerner, and Tirole (2007) examine variation in SSOs’ intellectual property rights policies.
Given SSOs’ heterogeneity, what can we say about the costs and benefits of the consensus process as a path to
coordination? Since SSOs encourage explicit comparisons and often have an engineering culture that emphasizes
the role of technical quality, there is some reason to expect higher-quality standards than would emerge from a
standards war or an uninformed choice among competing (p. 42) technologies. This prediction appears in the
stochastic bargaining model of Simcoe (2012), as well as the war-of-attrition model of Farrell and Simcoe (2012),
where SSOs provide a quality-screening mechanism.
Table 2.1 Major ANSI Accredited SSOs

Acronym        Standards   ICT   Full Name
INCITS         10,503      Y     International Committee for Information Technology Standards
ASTM           8,339       N     American Society for Testing and Materials
IEEE           7,873       Y     Institute of Electrical and Electronics Engineers
UL             7,469       N     Underwriters Laboratories
ASME           7,026       N     American Society of Mechanical Engineers
ANSI/TIA       4,760       Y     Telecommunications Industry Association
ANSI/T1        3,876       Y     ANSI Telecommunications Subcommittee
ANSI/ASHRAE    3,070       N     American Society of Heating, Refrigerating and Air-Conditioning Engineers
AWS            2,517       N     American Welding Society
ANSI/NFPA      2,365       N     National Fire Protection Association
ANSI/EIA       2,011       Y     Electronic Industries Association
ANSI/SCTE      1,803       Y     Society of Cable Telecommunications Engineers
ANSI/AWWA      1,759       N     American Water Works Association
ANSI/AAMI      1,621       Y     American Association of Medical Imaging
ANSI/NSF       1,612       N     National Sanitation Foundation
ANSI/ANS       1,225       N     American Nuclear Society
ANSI/API       1,225       N     American Petroleum Institute
ANSI/X9        940         N     Financial Industry Standards
ANSI/IPC       891         Y     Association Connecting Electronics Industries
ANSI/ISA       872         Y     International Society of Automation
Total ICT      30,786      43%

Notes: List of largest ANSI accredited Standards Developing Organizations based on a count of documents listed in the 2006 ANSI catalog of American National Standards. The “Standards” column shows the actual document count. The “ICT” column indicates the authors' judgment as to whether that organization's primary focus is creating compatibility standards.
But technical evaluation and screening for quality can impose lengthy delays, especially when the consensus
process gives participants the power to block proposed solutions. A survey by the National Research Council
(1990) found that standards practitioners viewed delays as a major problem, and Cargill (2001) suggests that the
opportunity costs of delayed standardization explain a broad shift from accredited SDOs toward less formal
consortia. Farrell and Saloner (1988) develop a formal model to compare outcomes in a standards war (grab-the-dollar game) to an SSO (war of attrition). Their theory predicts a basic trade-off: the formal consensus (p. 43)
process leads to coordination more often, while the standards war selects a winner more quickly.
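The flavor of this trade-off can be conveyed with a stylized Monte Carlo comparison. The sketch below is not the Farrell and Saloner model; the per-period probabilities are arbitrary assumptions, chosen so that holding out in a committee is cheap (concessions arrive slowly) while delay in the market invites preemption (commitments arrive quickly).

```python
import random

def standards_war(grab_prob=0.5, max_periods=100, rng=random):
    """Stylized grab-the-dollar game: each period, each firm may commit to its own design.
    Exactly one commitment yields coordination on that design; simultaneous commitments
    yield splintering (incompatible standards)."""
    for t in range(1, max_periods + 1):
        a, b = rng.random() < grab_prob, rng.random() < grab_prob
        if a and b:
            return ("splintered", t)
        if a or b:
            return ("coordinated", t)
    return ("unresolved", max_periods)

def committee(concede_prob=0.2, max_periods=100, rng=random):
    """Stylized war of attrition: each period, each firm may concede to the other's design.
    Any concession yields coordination, but holding out is cheap, so concessions arrive slowly."""
    for t in range(1, max_periods + 1):
        if rng.random() < concede_prob or rng.random() < concede_prob:
            return ("coordinated", t)
    return ("unresolved", max_periods)

def summarize(mechanism, n=10000):
    results = [mechanism() for _ in range(n)]
    coordinated = [t for outcome, t in results if outcome == "coordinated"]
    return {"coordination rate": len(coordinated) / n,
            "mean periods to coordinate": sum(coordinated) / max(len(coordinated), 1)}

print("standards war:", summarize(standards_war))
print("committee:    ", summarize(committee))
```

With these (arbitrary) parameters the committee virtually always reaches a common standard but takes longer on average, while the standards war resolves quickly yet splinters in a substantial fraction of runs, which is the basic trade-off described above.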
SSOs have sought ways to limit deadlocks and lengthy delays. First, some grant a particular party the power to
break deadlocks, though such unilateral decisions could be viewed as a distinct route to compatibility (see below).
A second approach is to start early in the life of a technology, before firms commit to alternative designs.
Illustrating the impact of commitment on delays, Simcoe (2012) shows how delays at the IETF increased as the
Internet matured into a commercial platform. But early standardization also has downsides; in particular, private
actors have little incentive to contribute technology if they see no commercial opportunity, so anticipatory
standards rely heavily on participation from public sector institutions such as academia or government labs.
A third way to resolve deadlocks is to agree on partial or incomplete standards. Such standards often include
“vendor-specific options” to facilitate product differentiation. And SSO participants sometimes agree to a
“framework” that does not achieve full compatibility but standardizes those parts of an interface where
compromise can be reached (thus lowering the ex post cost of achieving compatibility through converters).12
Fourth, SSOs may work faster if competing interests are placed in separate forums, such as independent working
groups within a large SSO or even independent consortia. Lerner and Tirole (2006) model forum shopping when
there is free entry into certification, and show that technology sponsors will choose the friendliest possible SSO,
subject to the constraint that certification sways user beliefs enough to induce adoption. This “competing forums”
approach works well if there is demand for variety and converters are cheap. But if network effects are strong,
forum shopping may produce escalating commitments in advance of a standards war. For example, the Blu-ray and
HD-DVD camps each established an independent implementers’ forum to promote their own video format.
Beyond providing a forum for negotiation and certification activities, SSOs are often a locus of collaborative
research and development. Thus, one might ask whether selecting this path to coordination has significant
implications for innovation. Some forms of innovation within SSOs raise a public goods problem: incentives are
weak if all firms have free access to improvements, especially in highly competitive industries. Weiss and Toyofuku
(1996) gather evidence of free riding in 10BaseT standards development. Cabral and Salant (2008) study a model
where standardization leads to free riding in research and development (R&D), and find that firms may favor
incompatibility if it helps them sustain a high rate of innovation. Eisenmann (2008) suggests that SSOs often
struggle with “architectural” innovations that span many component technologies, since it is difficult to coordinate
the decisions of specialized firms with narrow interests in the outcomes of a particular working group or technical
committee.
However, such problems need not prevent all innovation within SSOs. Firms often contribute proprietary technology
to open platforms, thus indicating that the benefits of standardizing a preferred technology outweigh the temptation
to (p. 44) free-ride in those cases. Where SSOs encourage horizontal openness, that should encourage
innovation in complementary markets by expanding the addressable market or installed base. And while standards
can reduce the scope for horizontal differentiation in the market for a focal component, increased competition may
stimulate the search for extensions and other “vertical” quality enhancements, as emphasized in Bresnahan's
(2002) analysis of divided technical leadership in the personal computer industry and the quality-ladder model of
Acemoglu et al. (2010).
Finally, in horizontally closed platforms, SSOs may encourage innovation by enabling commitments to vertical
openness.13 In particular, when a platform leader controls some bottleneck resource, small entrants may fear
higher access prices or other policy changes that would capture a share of their innovation rents. Platform leaders
might solve this hold-up problem by using SSOs to commit to ex post competition (see generally Farrell and Gallini,
1988). For instance, Xerox researchers used an SSO to give away the Ethernet protocol, and Microsoft adopted the same strategy with ActiveX (Sirbu and Hughes, 1986; Shapiro and Varian, 1998, 254).
One important way SSOs address potential hold-up problems is by requiring firms to disclose essential patents and
to license them on reasonable and nondiscriminatory (RAND) terms. These policies seek to prevent patent holders
from demanding royalties that reflect coordination problems and the sunk costs of implementation, as opposed to
the way that well-informed ex ante negotiation would reflect benefits of their technology over the next best
solution.14 Although the “reasonable” royalty requirement can be hard to enforce, and SSOs cannot protect
implementers from nonparticipating firms, these intellectual property policies are nevertheless an important method
for platform leaders to commit to vertical openness.
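In reduced form (the notation is ours and purely illustrative), once implementers have sunk costs s in a design, an essential-patent holder could demand an ex post royalty of up to the technology's incremental value over the next-best alternative plus those sunk switching costs, whereas well-informed ex ante bargaining would cap the royalty at the incremental value alone:

\[
r^{\text{ex post}} \;\lesssim\; \Delta v + s, \qquad r^{\text{ex ante}} \;\lesssim\; \Delta v,
\]

so RAND commitments can be read as an attempt to keep negotiated royalties closer to the ex ante benchmark.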
In summary, standard setting organizations are a heterogeneous set of institutions linked by their use of the
consensus process. This process emphasizes technical performance, and may select for high-quality standards,
but can also produce lengthy delays when participants disagree. SSOs also provide a forum for collaborative
innovation, and a way for platform leaders to commit to vertical openness in order to promote market entry and
complementary innovation.
3.3. Imposing a Standard
A third path to coordination is for someone with sufficient clout to simply impose a standard. This dominant player
might be a platform leader, a large customer or complementer, or a government agency. A potential advantage of
coordination by fiat is speed. In particular, dictators can avoid or resolve deadlocks that emerge in both standards
wars and SSOs. Systemwide architectural transitions may also be easier when a de facto platform leader
internalizes the benefits of a “big push” and is therefore willing to bear much of the cost. However, since dictators
are not always benevolent or capable of spotting the best technology, ex post competition, innovation incentives,
and technical quality will often depend on who is in charge.
(p. 45) Platform leaders often dictate standards for vertical interoperability. For example, AT&T historically set the
rules for connecting to the US telephone network, and IBM has long defined the interfaces used in the market for
plug-compatible mainframes. More recently, Apple has maintained tight control over new applications for its
iPhone/iPad platform. In principle, platform leaders should have an incentive to use their control over key interfaces
so as to organize the supply of complements efficiently. However, fears that incumbent monopolists will block entry
or hold up complementary innovators often lead to calls for policy makers to intervene in support of vertically open
interfaces.15
Farrell and Weiser (2003) summarize arguments for and against mandatory vertical openness, and introduce the
term internalizing complementary externalities (ICE) to summarize the laissez faire position that a platform leader
has socially efficient incentives. When ICE holds, a platform leader chooses vertical openness to maximize surplus;
open interfaces promote entry and competition in complementary markets, while closed interfaces encourage
coordination and systemic innovation. However, Farrell and Weiser note multiple exceptions to the ICE principle,
making it difficult to discern the efficiency of a platform leader's vertical policies in practice. For example, a platform
sponsor may inefficiently limit access if it faces regulated prices in its primary market; if control over complements
is a key tool for price discrimination; if it has a large installed base; or if a supply of independent complements
would strengthen a competing platform.16
In addition to platform leaders, large customers or complementers can act as de facto standard setters. For
instance, WalMart played an important role in the standardization of radio frequency identification (RFID) chips by
committing to a particular specification. Similarly, movie studios played a significant role in resolving the standards
war between Blu-ray and HD-DVD. And in some cases, the pivotal “customer” is actually a user group, as when
CableLabs—a consortium of cable operators—developed the data over cable service interface specification
(DOCSIS) protocol for cable modems.
The interests of large complementers and direct customers are often at least loosely aligned with those of end-users, to the extent that their own competitive positions are not threatened. Thus consumers may well benefit from
choices made by those powerful players. However, even well-informed powerful players may find it useful to gather
information within an SSO before making a decision. Farrell and Simcoe (2012) model this hybrid process, and find
that it often outperforms both uninformed immediate random choice and an SSO-based screening process that
lacks a dominant third party.
Government is a third potential dictator of standards. In some cases, the government exerts influence as a large
customer. For example, the US Office of Management and Budget Circular A-119 encourages government agencies
to use voluntary consensus standards. And in response to requests from the European Union, Microsoft submitted
its Office Open XML file formats to ISO. More controversially, governments may use regulatory authority to promote
a standard. For example, the US Federal Communication Commission coordinated a switch from (p. 46) analog
(NTSC; named for the National Television System Committee) to digital (ATSC; named for the Advanced Television
System Committee) television broadcasting. Sometimes support for standards is even legislated, as in the 2009
stimulus package, which contains incentives for physicians to adopt standardized electronic medical records (but
does not take a position on any specific technology).
In general, government involvement can be relatively uncontroversial when there are large gains from coordination
and little scope for innovation or uncertainty about the relative merits of different solutions. For instance, it is useful
to have standards for daylight-saving time and driving on the right side of the road. Government involvement may
also be appropriate when private control of an interface would lead to extreme market power primarily because of
severe coordination problems as opposed to differences in quality. But government intervention in highly technical
standard-setting processes can pose problems, including lack of expertise, regulatory capture, and lock-in on the
government-supported standard.
3.4. Converters and Multihoming
Converters, adapters, translators, and multihoming are ways to reduce the degree or cost of incompatibility. For
example, computers use a wide variety of file formats to store audio and video, but most software can read several
types of files. Econometricians use translation programs, such as Stat Transfer, to share data with users of different
statistical software. Even the Internet's core networking protocols arguably function as a cross-platform converter:
as long as all machines and networks run TCP/IP, it is possible to connect many different platforms and applications
over a wide variety of physical network configurations.
One benefit of using converters to achieve compatibility is that no single party incurs the full costs of switching.
Rather, everyone can choose their preferred system but can also tap into another platform's supply of
complements, albeit at a cost and perhaps with some degradation. Since translators need not work in both
directions, development costs are typically incurred by the party who benefits, or by a third party who expects to
profit by charging those who benefit. And converters can avert the long deadlocks that may occur in a standards
war or an open-ended negotiation, since there is no need to agree in advance on a common standard: each
platform simply publishes its own interface specifications and lets the other side build a converter (assuming
unanimous support for the converter-based solution).
Sometimes users or complementers may join several platforms; such multihoming can resemble a converter
solution. For example, most retailers accept several payment card systems, so consumers can pick one and for
the most part not risk being unable to transact. Corts and Lederman (2009) show that video game developers
increasingly multihome, and argue that multiplatform content explains declining concentration in the console
market over time. And instead of seeking a common standard for all computer cables, most machines offer a
variety of sockets (p. 47) to accommodate different connectors such as USB, SCSI, HDMI, and Ethernet.
Multihoming preserves platform variety and may align the costs and benefits of horizontal compatibility. However,
dedicated converters or coordination on a single platform become more efficient as the costs of platform adoption
increase.
Of course, multihoming or converters cannot eliminate conflicting interests, and can open new possibilities for
strategic behavior. For example, firms may seek an advantage by providing converters to access a rival's
complements while attempting to isolate their own network. Atari tried this strategy by developing a converter to
allow its users to play games written for the rival Nintendo platform. However, Nintendo was able to block Atari's
efforts by asserting intellectual property based on an encryption chip embedded in each new game (Shapiro and
Varian, 1998).
Firms may use one-way converters to create market power on either side of a vertical interface. MacKie-Mason and
Netz (2007) suggest that Intel pursued this strategy by including a “host controller” in the USB 2.0 specification,
and allowing peripheral devices to speak with the host-controller, but delaying the release of information about the
link between the host-controller and Intel's chipsets and motherboards.
Converters can also favor a particular platform by degrading, rather than fully blocking, interoperability. Many
computer users will be familiar with the frustrations of document portability, even though most word processors and
spreadsheets contain converters that read, and sometimes write, in the file formats used by rival software.
Finally, converters may work poorly for technical reasons. This may be particularly salient for vertical interfaces,
since allowing designs to proliferate undercuts the benefits associated with modularity and specialization across
components. For example, most operating systems do not provide a fully specified interface for third-party
hardware (e.g. printers or keyboards), and the “device driver” software that acts as a translator is widely believed
to be the most common cause of system failures (Ganapathi et al., 2006).
In summary, converters are attractive because they preserve flexibility for implementers. However, in a standards
war, firms may work to block converters, as Atari did. Firms may also gain competitive advantage by using
converters to manipulate a vertical interface. And even when there is little conflict, dedicated compatibility
standards may dominate converters for heavily used interfaces, where performance and scalability are important.
4. Choosing a Path
What determines which path to compatibility is followed, or attempted, in a particular case? When will that choice
be efficient? While data on the origins of compatibility standards are scant, this section offers some remarks on the
selection process.17
(p. 48) Choosing a path to compatibility can itself be a coordination problem, creating an element of circularity in
analysis of this choice. We try to sidestep this logical dilemma by grouping platforms into two categories: those with
a dominant platform leader, and shared platforms that default to either collective governance (SSOs) or splintering
and standards wars. Eisenmann (2008) suggests that this distinction between shared and proprietary platforms
emerges early in the technology life cycle, based on firms’ strategic decisions about horizontal openness. In
particular, he predicts that platform leaders will predominate in “winner-take-all” markets where network effects are
large relative to the demand for variety, multihoming is costly, and the fixed costs of creating a new platform are
substantial.
This life-cycle perspective of platform governance is consistent with the intriguing (though unsystematic)
observation that many technologies settle on a particular path to compatibility, even a specific agency, and adhere
to it over time. For example, the ITU has managed international interoperability of telecommunications networks
since 1865, and JEDEC has been the dominant SSO for creating open standards for semiconductor interoperability
(particularly in memory chips) since 1968. Likewise, for products such as operating systems and video game
consoles, proprietary platform leadership has been the dominant mode of coordination across several generations
of technology.
Nevertheless, there are several well-known cases of dominant firms losing de facto control over a platform. The
most famous example is IBM and the personal computer architecture. Other examples include the demise of
minicomputer incumbents like Digital Equipment; Google replacing AltaVista as the dominant search engine;
Microsoft's well-documented struggles to adapt to the Internet; and the ongoing displacement of the Symbian
cellular phone operating system by alternatives from Research in Motion (Blackberry), Apple (iPhone), and Google
(Android). Bresnahan (2001) suggests that a key condition for such “epochal” shifts in platform leadership is
disruptive technical change at adjacent layers of the larger system, since it is hard to displace a platform leader
through direct horizontal competition when network effects are strong. Although this observation certainly accords
with the facts of well-known cases, it is not very amenable to formal testing given the infrequent nature of such
major shifts.
4.1. Selection and Efficiency
When there is a clear platform leader, the ICE principle suggests that the leader will have an incentive to choose an
efficient coordination process regarding vertical compatibility. For example, the platform leader might delegate
standard-setting activities to an SSO when fears of hold-up impede complementary innovation, but impose a
standard when SSO negotiations deadlock.
Unfortunately, the ICE principle is subject to many caveats, bringing back questions about whether a platform
leader chooses a particular path for its efficiency or for other reasons such as its impact on ex post competition.
For instance, Gawer and (p. 49) Henderson (2007) use Intel's decision to disseminate USB as an example of ICE,
while MacKie-Mason and Netz (2007) argue that Intel manipulated USB 2.0 to gain a competitive advantage. Where
one study emphasizes the initial decision to give up control over a technology, the other emphasizes the use of
one-way converters to exclude competitors and gain lead-time advantages in complementary markets. These
competing USB narratives highlight the difficulty of determining a platform leader's motives.
Without a platform leader, it is less clear why the private costs and benefits of choosing an efficient path to
compatibility would be aligned. If firms are ex ante symmetric and commit to a path before learning the merits of
competing solutions, they would have an ex ante incentive to choose the efficient mechanism. But standard setting
is typically voluntary, and firms do not commit to abide by consensus decisions. Thus, when asymmetries are
present, or emerge over time, firms may deviate to a path that favors their individual interests. While these
deviations from collective governance may lead to the “forking” of standards, they do not necessarily block the
SSO path, and in some cases the remaining participants can still achieve converter-based compatibility.
Some observers suggest that this chaotic situation can deliver the virtues of both decentralized adoption and
collective choice. For example, Greenstein (2010) argues that a proliferation of SSOs combined with widespread
independent technical experimentation is a sign of “healthy standards competition” on the commercial Internet.
This optimistic view emphasizes the virtues of “standards swarms.” When network effects are weak (as the
absence of a platform leader might sometimes suggest), and substantial market or technological uncertainty exists,
decentralized choice can identify promising standards for a particular niche, with SSOs emerging to facilitate
coordination as needed. Unfortunately, there is no guarantee that mixing decentralized adoption with SSOs
captures the benefits and avoids the costs of either path in isolation. In particular, either path may lead to a
stalemate, and when decentralized adoption is the outside option there is always a danger of stranded investments
or selecting the wrong system.
A second optimistic argument holds that new ways to govern shared technology platforms will arise in response to
market pressures and technological opportunities. For example, Cargill (2001) and Murphy and Yates (2009) claim
that accredited SDOs lost market share to small consortia during the 1980s and 1990s because the SDOs’
ponderous decision-making procedures were ill-matched to rapid ICT product lifecycles (see also Besen and Farrell
1991). Smaller and less formal organizations might work faster by relaxing the definition of consensus, taking
advantage of new technologies for collaboration, and allowing competing factions to work in isolation from one
another. Berners-Lee and Fischetti (1999) cite delays at the IETF as a primary motive for creating the World Wide
Web Consortium (W3C), and Figure 2.2 shows that consortia are indeed on the rise.18
Figure 2.2 The Growth of Consortia.
Notes: Figure shows the cumulative number of new consortia founded during each five-year period, based on the authors' analysis of the list of ICT consortia maintained by Andrew Updegrove, and published at www.consortiuminfo.org.
While this evolutionary hypothesis is intriguing, it is not obvious that organizational experimentation and
competition will evolve an efficient path to compatibility. Simcoe (2012) shows that consortia still experience
coordination delays when participants have conflicting interests over commercially significant technology. (p. 50)
And the proliferation of SSOs also increases the potential for forum shopping, as emphasized by Lerner and Tirole (2006).
We view competition between SSOs as a promising topic for further research.19 At present, it remains unclear
whether the current proliferation of organizational models for SSOs is the outcome of, or part of, an evolutionary
process, or simply confusion regarding how best to organize a complex multilateral negotiation.
4.2. Hybrid Paths
Although markets, committees, converters, and dictators offer distinct paths to compatibility, they can sometimes
be combined. For example, standards wars may be resolved through negotiations at an SSO, or the intervention of
a dominant firm; and slow SSO negotiations may be accelerated by an agreement to use converters, or by
evidence that the market is tipping toward a particular solution.
Farrell and Saloner (1988) model a hybrid coordination process that combines markets and committees. In their
model, the hybrid path combines the virtues of standards wars and SSOs without realizing all of the costs. In
particular, the market works faster than an SSO, while the committee reduces the chance of inefficient splintering.
The IETF's informal motto of “rough consensus and running code” reflects a similar logic. By emphasizing “running
code,” the IETF signals that firms should not wait for every issue to be resolved within a committee, and that some
level of experimentation is desirable. However, it remains important to achieve at least “rough consensus” before
implementation.
Hybrid paths can also combine different SSOs. Independent firms and small consortia often work to pre-establish a standard before submitting it to an accredited SDO for certification. For example, Sun Microsystems
used ISO's (p. 51) publicly accessible specification (PAS) process to certify the Java programming language and
ODF document format (Cargill 1997). Similarly, Microsoft used ISO's fast-track procedures to standardize its Office Open XML document formats. As described above, platform leaders may value SDO certification if it provides a
credible signal of vertical openness that attracts complementary innovators. However, critics claim that fast-track
procedures can undermine the “due process” and “balance of interest” requirements that distinguish SDOs from
consortia, leading users or complementers to adopt proprietary technology out of a false sense of security.
A second hybrid path to compatibility occurs when participants in a standards war use converters to fashion an
escape route. For example, the 56K modem standards war was resolved by adopting a formal standard that
contained elements of competing systems. Converters can also reduce the scope of conflicting interests within an
SSO, especially when participants adopt a “framework” that falls short of full compatibility.
An alternative escape route (and third hybrid path) relies on a dictator to break deadlocks within an SSO. Farrell
and Simcoe (2012) analyze a model of consensus standard setting as a war of attrition, in which a poorly informed
but neutral third party can break deadlocks by imposing a standard. They find that this hybrid process will often
(but not always) outperform an uninterrupted screening process, or an immediate uninformed choice. In practice,
there are many examples of a dominant player intervening to accelerate a consensus process, such as the case
of DOCSIS (cable modems) or electronic health records, both mentioned above.
Thus, informally, there are some reasons to hope that a standards system with many paths to compatibility will
perform well. Platform leaders often have an incentive to choose the efficient path, and a greater variety of “pure”
paths means more options to choose from. SSOs may evolve in response to market pressures and technological
opportunities. And both theory and practical observation suggest that many paths to compatibility can be
combined in complementary ways.
At this point in our understanding, however, any optimism should be very cautious. Various exceptions to the ICE
principle show that platform leaders may weigh efficiency against ex post competition when choosing a path to
compatibility. It is not clear when competition among SSOs will lead to more efficient institutions, as opposed to
increased forum shopping and technology splintering. And while hybrid paths can work well, they highlight the
complex welfare trade-offs among the probability of coordination, the costs of negotiation, and the implications for
ex post competition and innovation.
5. Conclusions
Compatibility standards can emerge through market competition, negotiated consensus, converters, or the actions
of a dominant firm. These four paths to compatibility have different costs and benefits, which depend on whether a
(p. 52) particular standard promotes vertical or horizontal interoperability, the presence of an installed base or
proprietary complements, firms’ sunk investments in alternative designs, and the distribution of intellectual property
rights.
When choosing a path to compatibility, there are trade-offs between the probability of coordination, expected costs
in time and resources, and the implications for ex post competition and innovation. There is an argument that a
platform leader will internalize these costs and benefits and choose the socially efficient path to compatibility. But
that argument has many exceptions. Others argue that decentralized experimentation with different technologies,
loosely coordinated by a combination of markets and SSOs, will typically produce good outcomes. However, it is
hard to predict how far competition among SSOs leads them toward optimal policies, or how reliably standards wars
select the superior platform.
Amid these complex questions, there is certainly scope for beneficial government involvement, whether as a large
end-user, a regulator, or a third party with technical expertise. But direct government intervention in highly
technical standard-setting processes can pose problems, including lack of expertise, regulatory capture, and lock-in on government-supported standards.
Viewing the economic literature on compatibility standards in terms of our four broad paths also suggests several
research opportunities. First, there are very few data on the relative market share of these alternative paths. Thus,
it is unclear whether economists have focused on the most important or most common modes of organizing the
search for compatibility, or merely the routes they find most interesting. Our impression is that standards wars and
platform leaders have received more academic attention than have SSOs and converters. Possibly this is because
the former paths are replete with opportunities for interestingly strategic, yet familiarly market-based, competitive
strategies, while the latter options lead to less tractable or more foreign questions of social choice and bargaining.
A second topic for research is the selection of a path to compatibility, particularly in the early stages of a
technology life cycle. Many studies assume either a standards war or a platform leader (who might delegate the
choice of standards for vertical compatibility to an SSO). But we know little about how the rules for collective
governance of a shared platform emerge or evolve over time. And there is not much research on forum shopping
by technology sponsors, or the nature and effects of competition among SSOs. Developing a better understanding
of how a particular path is chosen represents a crucial first step toward quantifying the cost-benefit tradeoffs
across paths (unless the assignment is random), and adjudicating debates over the efficiency of the selection
process.
Finally, there is an opportunity to examine interactions among the four paths to compatibility. Despite some first
steps toward modeling “hybrid” paths, there is no general theory and very little empirical evidence on who
chooses the mechanism(s) and how, or on whether the four paths tend to complement or interfere with one
another.
References
Acemoglu, D., Gancia, G., Zilibotti, F., 2010. Competing Engines of Growth: Innovation and Standardization. NBER
Working Paper 15958.
Arthur, W.B., 1989. Competing Technologies, Increasing Returns, and Lock-In by Historical Events. Economic Journal 99(394), pp. 116–131.
Augereau, A., Greenstein, S., Rysman, M., 2006. Coordination versus Differentiation in a Standards War: 56K
modems. RAND Journal of Economics 34(4), pp. 889–911. (p. 55)
Baldwin, C.Y., Clark, K.B., 2000. Design Rules. Vol. 1: The Power of Modularity. MIT Press.
Bekkers, R., Duysters, G., Verspagen, B., 2002. Intellectual Property Rights, Strategic Technology Agreements and
Market Structure: The Case of GSM. Research Policy 31, pp. 1141–1161.
Berners-Lee, T., Fischetti, M., 1999. Weaving the Web: The Original Design and Ultimate Destiny of the World Wide
Web by Its Inventor. San Francisco: HarperSanFrancisco.
Besen, S. M., Farrell, J., 1991. The Role of the ITU in Standardization: Pre-Eminence, Impotence, or Rubber Stamp?
Telecommunications Policy 15(4), pp. 311–321.
Besen, S. M., Farrell, J., 1994. Choosing How to Compete—Strategies and Tactics in Standardization. Journal of
Economic Perspectives 8(2), pp. 117–131.
Besen, S.M., Saloner, G., 1989. The Economics of Telecommunications Standards. In: R. Crandall, K. Flamm (Eds.),
Changing the Rules: Technological Change, International Competition, and Regulation in Telecommunications.
Washington DC, Brookings, pp. 177–220.
Biddle, B., White, A., Woods, S., 2010. How Many Standards in a Laptop? (And Other Empirical Questions). Available
at SSRN: http://ssrn.com/abstract=1619440.
Boudreau, K., 2010. Open Platform Strategies and Innovation: Granting Access versus Devolving Control.
Management Science 56(10), pp. 1849–1872.
Bresnahan, T., 2002. The Economics of the Microsoft Case. Stanford Law and Economics Olin Working Paper No.
232. Available at SSRN: http://ssrn.com/abstract=304701.
Bresnahan, T., Greenstein, S., 1999. Technological Competition and the Structure of the Computer Industry. Journal
of Industrial Economics 47(1), pp. 1–40.
Bresnahan, T., Yin, P.-L., 2007. Standard Setting in Markets: The Browser War. In: S. Greenstein, Stango, V. (Eds.),
Standards and Public Policy, Cambridge University Press.
Cabral, L., Kretschmer, T., 2007. Standards Battles and Public Policy. In: S. Greenstein, Stango, V. (Eds.), Standards
and Public Policy, Cambridge University Press.
Cabral, L., Salant, D., 2008. Evolving Technologies and Standards Regulation. Working Paper. Available at SSRN:
http://ssrn.com/abstract=1120862.
Cargill, C., 2001. Evolutionary Pressures in Standardization: Considerations on ANSI's National Standards Strategy.
Testimony Before the U.S. House of Representatives Science Committee.
(http://www.opengroup.org/press/cargill_13sep00.htm).
Cargill, C., 2002. Intellectual Property Rights and Standards Setting Organizations: An Overview of Failed Evolution.
Available at: www.ftc.gov/opp/intellect/020418cargill.pdf.
Cargill, C. F., 1989. Information Technology Standardization: Theory, Process, and Organizations. Bedford, Mass,
Digital Press.
Cargill, C. F., 1997. Open Systems Standardization: A Business Approach. Upper Saddle River, N.J., Prentice Hall
PTR.
Chiao, B., Lerner, J., Tirole, J., 2007. The Rules of Standard Setting Organizations: An Empirical Analysis. RAND
Journal of Economics 38(4), pp. 905–930.
Corts, K., Lederman, M., 2009. Software Exclusivity and the Scope of Indirect Network Effects in the U.S. Home
Video Game Market. International Journal of Industrial Organization 27(2), pp. 121–136. (p. 56)
David, P. A., Greenstein, S., 1990. The Economics of Compatibility Standards: An Introduction to Recent Research.
Economics of Innovation and New Technology 1(1), pp. 3–42.
David, P., 1985. Clio and the Economics of QWERTY. American Economic Review 75(2), pp. 332–337.
DeLacey, B., Herman, K., Kiron, D., Lerner, J., 2006. Strategic Behavior in Standard-Setting Organizations. Harvard
NOM Working Paper No. 903214.
Dranove, D., Gandal, N., 2003. The DVD vs. DIVX Standard War: Empirical Evidence of Network Effects and
Preannouncement Effects. The Journal of Economics and Management Strategy 12(3), pp. 363–386.
Dranove, D., Jin, G., 2010. Quality Disclosure and Certification: Theory and Practice. Journal of Economic Literature
48(4), pp. 935–963.
Eisenmann, T. 2008. Managing Proprietary and Shared Platforms. California Management Review 50(4).
Eisenmann, T., Barley, L., 2006. Atheros Communications. Harvard Business School, Case 806–093.
Eisenmann, T., Parker, G., Van Alstyne, M., 2009. Opening Platforms: When, How and Why? In: A. Gawer (Ed.),
Platforms, Markets and Innovation. Cheltenham, UK and Northampton, Mass.: Edward Elgar Publishing.
European Commission 2010. Guidelines on the Applicability of Article 101 of the Treaty on the Functioning of the
European Union to Horizontal Co-operation Agreements. Available at:
http://ec.europa.eu/competition/consultations/2010_horizontals/guidelines_en.pdf
Farrell, J., 2007. Should Competition Policy Favor Compatibility? In: S. Greenstein, Stango, V. (Eds.), Standards and
Public Policy, Cambridge: Cambridge University Press, pp. 372–388.
Farrell, J., Gallini, N., 1988. Second-Sourcing as a Commitment: Monopoly Incentives to Attract Competition. The
Quarterly Journal of Economics 103(4), pp. 673–694.
Farrell, J., Hayes, J., Shapiro, C., Sullivan, T., 2007. Standard Setting, Patents and Hold-Up. Antitrust Law Journal 74,
pp. 603–670.
Farrell, J., Klemperer, P., 2007. Coordination and Lock-in: Competition with Switching Costs and Network Effects. In:
M. Armstrong, Porter, R.H. (Eds.), Handbook of Industrial Organization (Volume 3), Elsevier, pp. 1967–2056.
Farrell, J., Saloner, G., 1988. Coordination through Committees and Markets. Rand Journal of Economics 19(2), pp.
235–252.
Farrell, J., Simcoe, T., 2012. Choosing the Rules for Consensus Standardization. The RAND Journal of Economics,
forthcoming. Available at: http://people.bu.edu/tsimcoe/documents/published/ConsensusRules.pdf
Farrell, J., Shapiro, C., 1992. Standard Setting in High-Definition Television. Brookings Papers on Economic Activity,
pp. 1–93.
Farrell, J., Weiser, P., 2003. Modularity, Vertical Integration, and Open Access Policies: Toward a Convergence of
Antitrust and Regulation in the Internet Age. Harvard Journal of Law and Technology 17(1), pp. 85–135.
Furman, J., Stern, S., 2006. Climbing Atop the Shoulders of Giants: The Impact of Institutions on Cumulative
Research. NBER Working Paper No. 12523.
Gallagher, S., West, J., 2009. Reconceptualizing and Expanding the Positive Feedback Network Effects Model: A
Case Study, Journal of Engineering and Technology Management 26(3), pp. 131–147. (p. 57)
Gawer A., Henderson, R., 2007. Platform Owner Entry and Innovation in Complementary Markets: Evidence from
Intel, Journal of Economics & Management Strategy 16(1), pp. 1–34.
Ganapathi, A., Ganapathi, V., Patterson, D., 2006. Windows XP Kernel Crash Analysis. Proceedings of the 20th
Large Installation System Administration Conference, pp. 149–159.
Greenstein, S., 2010. Glimmers and Signs of Innovative Health in the Commercial Internet. Journal of
Telecommunication and High Technology Law 8(1), pp. 25–78.
Henderson, R., 2003. Ember Corp.: Developing the Next Ubiquitous Networking Standard. Harvard Business School,
Case 9–703–448.
Henderson, R., Clark, K., 1990. Architectural Innovation: The Reconfiguration of Existing Product Technologies and
the Failure of Established Firms. Administrative Science Quarterly 35(1), pp. 9–30.
Intellectual Property Owners Association, 2009. Standards Primer: An Overview of Standards Setting Bodies and
Patent-Related Issues that Arise in the Context of Standards Setting Activities. Section 16. Available at:
standardslaw.org/seminar/class-2/excerpts-from-ipo-standards-primer/.
Kaplan, J., 1986. Startup: A Silicon Valley Adventure. New York: Penguin.
Katz, M. L., Shapiro, C., 1985. Network Externalities, Competition and Compatibility. American Economic Review 75,
pp. 424–440.
Liebowitz, S. J., Margolis, S., 1990. The Fable of the Keys, Journal of Law and Economics 33(1), pp. 1–25.
Lemley, M., 2002. Intellectual Property Rights and Standard Setting Organizations. California Law Review 90, pp.
1889–1981.
Lerner, J., Tirole, J., 2006. A Model of Forum Shopping. American Economic Review 96(4), pp. 1091–1113.
Lerner, J., Tirole, J., Strojwas, M., 2003. Cooperative Marketing Agreements Between Competitors: Evidence from
Patent Pools. NBER Working Papers 9680.
Majoras, D. 2005. Recognizing the Pro-competitive Potential of Royalty Discussions in Standard-Setting. Stanford
University Standardization and the Law Conference. September 23, 2005. Available at:
http://www.ftc.gov/speeches/majoras/050923stanford.pdf.
Mackie-Mason, J., Netz, J., 2007. Manipulating Interface Standards as an Anticompetitive Strategy. In: S. Greenstein,
Stango, V. (Eds.), Standards and Public Policy, Cambridge University Press.
Matutes, C., Regibeau, P., 1988. “Mix and Match”: Product Compatibility without Network Externalities. RAND Journal
of Economics 19(2), pp. 221–234.
Murphy C., Yates, J., 2009. The International Organization for Standardization (ISO). New York: Routledge.
Murray, F., Stern, S., 2007. Do Formal Intellectual Property Rights Hinder the Free Flow of Scientific Knowledge? An
Empirical Test of the Anti-Commons Hypothesis. Journal of Economic Behavior and Organization 63(4), pp. 648–
687.
Parker, G., Van Alstyne, M., 2005. Two-Sided Network Effects: A Theory of Information Product Design. Management
Science 51(10), pp. 1494–1504.
Rochet, J.-C., Tirole, J., 2003. Platform Competition in Two-Sided Markets. Journal of the European Economic
Association 1(4), pp. 990–1029.
Russell, A. L., 2006. Rough Consensus and Running Code and the Internet-OSI Standards War. IEEE Annals of the
History of Computing 28(3), pp. 48–61. (p. 58)
Rysman, M., 2009. The Economics of Two-Sided Markets. Journal of Economic Perspectives 23(3), pp. 125–143.
Rysman, M., Simcoe, T., 2008. Patents and the Performance of Voluntary Standard Setting Organizations,
Management Science 54(11), pp. 1920–1934.
Shapiro, C., 2001. Navigating the Patent Thicket: Cross Licenses, Patent Pools, and Standard Setting. In: A. Jaffe, J.
Lerner, S. Stern (Eds.), Innovation Policy and the Economy (Volume 1), MIT Press, pp. 119–150.
Shapiro, C., Varian, H.R., 1998. Information Rules: A Strategic Guide to the Network Economy. Boston, Mass.,
Harvard Business School Press.
Simcoe, T., 2007. Explaining the Increase in Intellectual Property Disclosure. In: Standards Edge: The Golden Mean.
Bolin Group.
Simcoe, T., 2012. Standard Setting Committees: Consensus Governance for Shared Technology Platforms.
American Economic Review 102(1), 305–336.
Simcoe, T., Graham, S. J., Feldman, M., 2009. Competing on Standards? Entrepreneurship, Intellectual Property and
Platform Technologies. Journal of Economics and Management Strategy 18(3), pp. 775–816.
Spence, M., 1975. Monopoly, Quality, and Regulation. Bell Journal of Economics 6(2), pp. 417–429.
Varney, C. A., 2010. Promoting Innovation Through Patent and Antitrust Law and Policy. Remarks prepared for joint
USPTO FTC Workshop on the Intersection of Patent Policy and Antitrust Policy. May 26, 2010. Available at:
http://www.justice.gov/atr/public/speeches/260101.htm
Weiss, M. B., Sirbu, M., 1990. Technological Choice in Voluntary Standards Committees: An Empirical Analysis.
Economics of Innovation and New Technology 1(1), pp. 111–134.
Weiss, M., Toyofuku, R., 1996. Free-ridership in the Standards-setting Process: The Case of 10BaseT,
StandardView 4(4), pp. 205–212.
West, J., 2007. The Economic Realities of Open Standards: Black, White and Many Shades of Gray. In: S.
Greenstein, Stango, V. (Eds.), Standards and Public Policy, Cambridge University Press, pp. 87–122.
Weyl, G., 2010. A Price Theory of Multi-Sided Platforms. American Economic Review 100(4), pp. 1642–1672.
Notes:
(1.) A user is said to “multihome” when it adopts several incompatible systems and can thus work with others on
any of those systems.
(2.) See David and Greenstein (1990) or Shapiro and Varian (1998) for a review of the early literature on network
effects, and Rysman (2009) for a review of the nascent literature on two-sided markets.
(3.) Distinguishing between horizontal and vertical compatibility may help illuminate the often murky concept of
open standards. Cargill (1997) suggests that the term “open” has become “an icon to conveniently represent all
that is good about computing,” so when conflicts emerge, all sides claim support of open standards. End-users
typically define “open” in horizontal terms, since they seek a commitment to future competition at the platform
level. Platform leaders typically emphasize vertical compatibility, which grants access to (but not control over)
proprietary technology. Meanwhile, standards mavens call a technology open if it has been endorsed by an
accredited SSO, and open-source advocates focus on free access to the underlying code.
(4.) Shapiro and Varian (1998) provide many other examples, and West (2007) contains a lengthy list of standards
wars. Farrell and Klemperer (2007) review the economic theory.
(5.) Farrell and Saloner (1988) model sequential technology adoption with network effects and show how outcomes
may depend on users’ initial beliefs. The book StartUp (Kaplan, 1986, Ch. 9) provides an entertaining account of
the battle for expectations, and the strategic use of backward compatibility, in pen-based computer operating
systems.
(6.) Cabral and Kretschmer (2007) even suggest that when tipping toward an inferior technology would be very
costly, optimal government policy may be to prolong a standards war so participants can gather more information.
(7.) Interestingly, many of the well-known standards wars that do result in a “fight to the death” involve media
formats.
(8.) A list of roughly 550 SSOs is available at www.consortiuminfo.org. Cargill (2002) and the Intellectual Property
Owners Association (2009) suggest classification schemes.
(9.) These acronyms stand for International Telecommunication Union (ITU), International Organization for
Standardization (ISO), Internet Engineering Task Force (IETF), and World Wide Web Consortium (W3C). For ICT
standards, ISO and the International Electrotechnical Commission (IEC) collaborate through a group called the Joint
Technical Committee (JTC 1). Murphy and Yates (2009) describe the relationship between these national standards
bodies and the global standards system administered by ISO.
(10.) Annex 1 of the World Trade Organization's Technical Barriers to Trade Agreement makes explicit reference to
the ISO/IEC Guidelines for SDO procedure. In the United States, OMB Circular A-119 gives preference to SDOs in
federal government purchasing, and the Standards Development Organization Advancement Act of 2004 (HR 1086) grants certain antitrust
exemptions. Speeches by Majoras (2005) and Varney (2010), and also the European Commission (2010) antitrust
guidelines on horizontal co-operation provide some assurance to SSOs regarding open discussion of royalty rates.
(11.) Compatibility standards make up roughly 43 percent of the total stock of American National Standards, with
much of the remainder related to performance measurement and safety. Thus the ICT sector's share of standards
production exceeds its share of GDP, and even its share of patenting. One explanation is that information technology is, by
design, uniquely modular, so the ratio of standards to products is high. For example, Biddle et al. (2010) estimated
that a typical laptop implements between 250 and 500 different compatibility standards.
(12.) The decision to adopt a framework or incorporate vendor-specific options into a standard may be observable,
and hence amenable to empirical research, since the work-process and published output of many important SSOs
(e.g. 3GPP and the IETF) are publicly accessible.
(13.) Furman and Stern (2006) and Murray and Stern (2007) study the impact of “vertical” open access policies on
innovation outside of network industries.
(14.) Farrell et al. (2007) describe an extensive legal and economic literature on SSO IPR policies.
(15.) Calls for mandatory vertical openness often produce fierce debates. For example, prior to the 1956 consent
decree, IBM resisted publishing the technical specifications that would allow vendors to offer “plug compatible”
mainframes and peripherals. More recent are the debates over “net neutrality” and ISPs’ freedom to charge prices
based on the quality-of-service provided to different web sites or Internet applications.
(16.) Weyl (2010) offers an alternative price-theoretic analysis of a monopoly that controls access to both sides of
a two-sided platform. In his model, the monopoly tariffs exhibit two types of deviation from perfectly competitive
pricing: a “classical market power distortion” (which resembles a Lerner markup rule) and a “Spence (1975)
distortion” whereby the monopolist internalizes the network benefits to the marginal, as opposed to the average,
platform adopter.
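As a rough textbook-style schematic (my own rendering, not the tariff formulas from Weyl (2010) itself), the two distortions can be written as follows, with p the price, c the marginal cost, ε the price elasticity of demand, q the network size or quality chosen by the platform, and v(θ, q) the gross benefit to a type-θ adopter:

\[
\frac{p - c}{p} = \frac{1}{\varepsilon}
\quad \text{(classical market power distortion, the Lerner markup)},
\]
\[
\underbrace{c'(q) = \frac{\partial v(\theta_{\text{marginal}}, q)}{\partial q}}_{\text{monopolist's choice}}
\quad \text{versus} \quad
\underbrace{c'(q) = \mathbb{E}_{\theta}\!\left[\frac{\partial v(\theta, q)}{\partial q}\right]}_{\text{efficient choice}}
\quad \text{(Spence distortion)}.
\]

The second line captures the idea that the monopolist weighs the marginal adopter's valuation of the network benefit, whereas efficiency calls for weighing the average adopter's valuation.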
(17.) One exception to the paucity of data is a paper by Biddle et al. (2010) that identifies 250 compatibility
standards used in a typical laptop. The authors worked with Intel to estimate that 20 percent of these standards
come from individual companies, 44 percent from consortia and 36 percent from accredited SDOs.
(18.) Observing the slowdown at many consortia, Cargill suggests that they too will be supplanted, perhaps by the
open-source software development model, or other bottom-up efforts to establish de facto standards.
(19.) A parallel literature on voluntary certification programs, reviewed in Dranove and Jin (2010), may offer
insights on competition between SSOs that can also be applied to compatibility standards. For instance, they cite
several recent studies that examine the proliferation of competing “eco-labeling” initiatives (e.g. Energy Star
versus LEED for construction or Sustainable Forest Initiative versus Forest Stewardship Council for lumber).
Joseph Farrell
Joseph Farrell is Professor of Economics at the University of California, Berkeley.
Timothy Simcoe
Timothy Simcoe is Assistant Professor of Strategy and Innovation at the Boston University School of Management.
Software Platforms
Oxford Handbooks Online
Software Platforms
Andrei Hagiu
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Online Publication Date: Nov 2012
Subject: Economics and Finance, Economic Development
DOI: 10.1093/oxfordhb/9780195397840.013.0003
Abstract and Keywords
This article presents a detailed view of developments in business strategies in the market for software platforms. The fundamental characteristic of multisided platforms is the presence of indirect network effects among the multiple "sides." The broader implication of the Brightcove and Salesforce examples is that web-based software platforms (SPs) challenge the notion that software platforms are inherently multisided. The article then describes platform governance. The emergence of cloud-based platforms and virtualization is increasingly challenging the operating system's traditional status as the key multisided software platform (MSSP) in computer-based industries. Multisidedness essentially depends on whether the provider of the SP owns a related product or service whose value is increased by the applications built upon the SP. Governance rules appear to have emerged as an important strategic instrument for SPs.
Keywords: software platforms, multisided platforms, Brightcove, Salesforce, platform governance, cloud-based platforms, virtualization
1. Introduction
Since the 2006 publication of Evans, Hagiu, and Schmalensee (2006) (henceforth IE for Invisible Engines), software platforms have continued to
increase their reach and importance in the information technology sector. Numerous companies, large and small, have developed software platforms
in an attempt to stake out preeminent market positions at the center of large ecosystems of applications and users. Apple has expanded from
computers and digital music players into smart phones, and its iPhone has become the leading hardware-software platform in the mobile market.
Facebook and LinkedIn have turned their respective social networks into software platforms for third-party application developers.1 Google has
launched several new software platforms: the Android operating system for smartphones; App Engine and Chrome operating system (based on the
Chrome browser) for web-based applications; OpenSocial application programming interfaces (APIs) for online social networks. Salesforce, the leading
provider of cloud-based customer relationship management (CRM) software, has turned its product into a software platform for external developers of
business applications (not necessarily related to CRM). Even Lexmark, a printer manufacturer, has caught software platform fever: it recently opened
up APIs and launched an app store for third-party developers to build applications running on top of its printers.2
This chapter provides a brief overview of what I believe to be some of the key recent developments in the world of software platforms. The
technological and economic drivers of value creation by software platforms are unchanged since IE (p. 60) (Moore's Law, the ever-expanding variety
of computer-based devices, economies of scale associated with writing reusable blocks of software code, etc.), but some of their manifestations are
novel, which should supply interesting topics for economics and management research. There is now a significantly wider variety of software platform
types, mostly due to the rise of web (or cloud)-based computing and of a technology called virtualization. As a result, there are intriguing competitive
dynamics between these new software platform species and the incumbents (mostly operating systems). Software platform business models have also
evolved in interesting ways along several dimensions: the extent of vertical integration, pricing structures, and the strictness of the governance rules
placed on their respective ecosystems.
The chapter is organized as follows. Section 2 discusses the distinction between one-sided and multisided software platforms: while the most powerful
software platforms are indeed multisided, it is quite possible for many software platforms to thrive even in a one-sided state. Section 3 analyzes some
of the key changes in software business models that have emerged since the publication of IE, focusing on integration scope, governance, and pricing
structures. Section 4 discusses web-based software platforms and virtualization, which are challenging the preeminence of the operating system in the
relevant computing stacks from above and from below, respectively. The conclusion summarizes the key points and draws a few implications for
further research.3
2. Multisided Versus One-Sided Software Platforms
The notion of software platforms that will be used in this chapter is the same as in IE: a software platform is any piece of software that exposes APIs,
allowing other applications to be built on top.4
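As a purely illustrative sketch of this definition (hypothetical class and method names, not any real vendor's API), one can picture the platform as supplying basic functionality once through a small set of calls, with applications built "on top" by invoking those calls rather than reimplementing the functionality themselves:

```python
# Hypothetical illustration only: a "platform" exposing two API calls and an
# application written against them. No real product's API is implied.

class Platform:
    """Stand-in for a software platform that exposes APIs."""

    def __init__(self):
        self._storage = {}

    def save(self, key, value):
        # Basic functionality the platform provides once, for all applications.
        self._storage[key] = value

    def load(self, key):
        return self._storage.get(key)


class ThirdPartyApp:
    """A third-party application built on top of the platform's APIs."""

    def __init__(self, platform):
        self.platform = platform

    def publish_note(self, note_id, text):
        # The developer calls the platform instead of writing storage code
        # from scratch (no need to "reinvent the wheel").
        self.platform.save(note_id, text)
        return self.platform.load(note_id)


if __name__ == "__main__":
    app = ThirdPartyApp(Platform())
    print(app.publish_note("n1", "hello, platform"))  # -> hello, platform
```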
As emphasized in IE, one of the key economic features of software platforms is their potential to become the foundation for multisided business
models. In other words, they can help bring together multiple groups of customers, which generally include at least third-party application or content
developers and end-users. Sometimes they also include third-party hardware manufacturers (e.g., Android, Windows) or advertisers (e.g., Google's
search engine and related APIs). The economic value created by software platforms comes fundamentally from reducing the application developers’
fixed costs by providing basic functionalities that most applications need. In other words, they eliminate the need for each developer to individually
“reinvent the wheel.”
But these economies of scale are not sufficient to turn software platforms into multisided platforms. Indeed, the fundamental characteristic of
multisided platforms is the presence of indirect network effects among the multiple “sides”—or (p. 61) customer groups—served. This requires that
Page 1 of 11
Software Platforms
the platform is mediating some sort of direct interaction between agents on different sides and that each side benefits when more members on the
other side(s) join the same platform. Indirect network effects are thus distinct from direct network effects, which occur within the same customer group:
participation by more members on one side creates more value for each individual member on the same side. A classic example of direct network
effects at work is the fax machine. More recently, social networks such as Facebook and LinkedIn were primarily built around direct network effects
among their members. As mentioned in the introduction, however, they have morphed into two-sided platforms by also attracting third-party application
developers.5
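The distinction can be summarized in a stylized linear form (a schematic of my own, not a model from the chapter). Writing n for group sizes and u for a participant's utility,

\[
\text{direct effects:}\quad u_i = v_0 + \alpha\, n_{\text{same side}},
\qquad
\text{indirect effects:}\quad u^{\text{user}} = v_U + \beta_U\, n_{\text{developers}},\;\;
u^{\text{dev}} = v_D + \beta_D\, n_{\text{users}},
\]

where the coefficients \(\alpha, \beta_U, \beta_D > 0\) capture how much each participant gains as the relevant group grows, either on the same side (the fax machine case) or on the other side (users and application developers).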
Some software platforms are indeed two-sided or multisided. This is the case, for example, with Apple's iPhone, Google's Android, and Microsoft's
Windows Mobile: they allow users to access and use thousands of applications created by third-party developers and, vice versa, they enable third-party application developers to reach the millions of users who own phones running the corresponding operating system. It is important to note,
however, that not all software platforms (henceforth SPs) manage to fulfill their multisided potential and become multisided software platforms
(henceforth MSSPs). In fact, the last four years have witnessed a proliferation of one-sided software platforms at various layers of computer-based
industries: these SPs create value through economies of scale and specialization but not through indirect network effects. One-sidedness may be the
result of conscious choice or failure to attract multiple sides on board. The experience of Brightcove, the creator of a SP for online video, provides a
good illustration of the challenges associated with the transition from one-sided SPs to MSSPs. The following subsection is based on Hagiu, Yoffie, and
Slind (2007) and the authors’ subsequent interviews with the company.
2.1. Brightcove: Four-Sided Vision, One-Sided Reality
Brightcove was founded
in 2005 with the ambition to become a four-sided platform in the rapidly growing market for Internet video. Its vision was to connect content providers
(e.g., MTV, Sony Music, Discovery Channel, The New York Times), end-users, advertisers, and web affiliates (i.e., web properties who would want to
license video streams and integrate them in their sites). Brightcove would achieve this vision by providing several key pieces: a software platform that
would enable content providers to build and publish high-quality video streams online, an online site that would aggregate videos and allow users to
view them, an advertising network enabling content providers to sell 15- or 30-second spaces in their videos to advertisers, and a syndication
marketplace enabling third-party affiliated websites to license videos from content providers for publication on the affiliates’ sites.
(p. 62) The Brightcove team recognized the complexity involved in building this four-sided platform all at once and reasoned that the first two sides
they needed to get on board were content providers and users. After some internal debate, they decided to start courting the content provider side
first, in particular premium content providers (e.g., Wall Street Journal, Disney, Discovery, MTV, etc.), through the provision of the Brightcove software
platform. Content providers relied on it to create and integrate video in their own websites. Brightcove viewed this merely as the first of two steps
needed to establish itself as a two-sided software platform: once it had attracted a critical mass of content providers as customers, it would attract the
user side by launching its Brightcove.com site and populating it with Brightcove-powered videos from its content providers. In particular, the
management team was explicit that the company's goal was not limited to being “just an enterprise software provider,” that is, selling software tools to
premium content providers.
Brightcove was very successful in attracting premium content publishers as customers for its software platform: many large companies (media,
manufacturing, retail) and organizations (government, not-for-profit) were quick to sign up. And in October 2006, Brightcove did indeed launch its own
user destination site. Unfortunately, however, the site was never able to generate significant traction. This was for two main, related reasons. First,
Brightcove's content provider-customers were reluctant to supply content for Brightcove to build its own destination site, which they viewed as
competing with their own sites for user attention and advertising revenues. Second, because it had to focus most of its limited resources on serving the
large, premium content providers, Brightcove had been unable to dedicate any resources to building its brand for end-users (even if it had had the
financial resources and management bandwidth to do so, this would have conflicted with its focus on premium content). In particular, it entirely
neglected the user-generated video content functionalities offered at the time by YouTube and other sites such as Revver and MetaCafe. As a result,
while Brightcove was deepening its relationship with premium content providers and building a solid stream of revenues, YouTube was acquiring tens of
millions of users (without any revenues). By the end of 2006, Brightcove.com was hopelessly behind (in terms of user page views) not just YouTube,
but also at least four other video-sharing sites.
Consequently, in April 2008 Brightcove decided to shut down its user destination site, as well as its advertising network, and effectively settled on
being a one-sided software platform for premium content providers. Currently, the company is quite successful in this business and has been profitable
since 2009, but it is hard to imagine it would ever be able to achieve the $1.7 billion valuation achieved by YouTube with its 2006 sale to Google. In an
irony readily acknowledged by the company's leaders, Brightcove might have been able to build YouTube if it had decided to focus on the end-user
side first. Instead, Brightcove's choice to be primarily a software platform (an attractive option by many measures, particularly given the clear and
significant revenue prospects) constrained its ability to turn itself into a multisided business. Media companies were very cautious not to repeat the
mistake made by music studios in the early 2000s, when they had empowered and (p. 63) subsequently become overly dependent on Apple's iTunes
platform. Meanwhile, YouTube's riskier approach of focusing first on users, with no clear monetization prospects initially, allowed it to make an end run
around the large media companies. Whether this latter approach will eventually succeed (by compelling media companies to join, given YouTube's
tremendous user base) remains to be seen.
2.2. Salesforce: Two-Sided or One-Sided Software Platform?
The distinction between one-sided and multisided SPs can be quite subtle, particularly in the case of web-based SPs. Consider the example of
Salesforce. Founded in 1999 by a former Oracle executive, the company established itself as the leading provider of on-demand (i.e., web-based)
customer relationship management (CRM) software. Starting in 2005, Salesforce turned itself into an SP by exposing APIs enabling external developers
to build on-demand business applications. The corresponding APIs and other developer services were named the Force.com platform. Some of the
third-party applications built on top of Force.com are complementary to Salesforce's flagship CRM product. They enhance the value of Salesforce's
CRM application, therefore creating indirect network effects. Other applications however are entirely independent of CRM and target completely
different customers. They do not generate indirect network effects around Force.com or Salesforce CRM because their customers can get access to
and use them directly from the developers’ sites through a web browser.
So is Force.com a two-sided or a one-sided SP? If the share of CRM-related applications were very small, then it would arguably be one-sided. This
suggests that attempting to draw a clear line between one-sided and two-sided SPs might be difficult—and ultimately not very insightful. It is however
interesting to consider the distinction from Salesforce's perspective. While building a two-sided SP around its CRM product might sound appealing (it
would increase switching costs for its customers and create network effects), the scope of applications that can be built on top of CRM is quite limited.
This is probably why the company has chosen to encourage and support the development of on-demand applications unrelated to CRM: while they do
not create indirect network effects, they offer larger growth prospects for the Force.com SP. Thus, in theory at least, Salesforce could
conceivably abandon the CRM application and focus exclusively on being a one-sided SP provider, with no direct relationships to enterprise software
customers.
The broader implication of the Brightcove and Salesforce examples is that web-based SPs challenge the notion that software platforms are inherently
multisided (as we had claimed in chapter 3 in IE). The fundamental reason is the following. In a 100 percent offline world, in which software platforms
were tied to specific devices, the SP providers had to ensure both that developers used their APIs and that end-users had those APIs installed on their
devices. For instance, Microsoft had to simultaneously convince PC application developers to rely on Windows (p. 64) APIs and users to buy PCs
running the Windows operating system (OS). Every application built on top of Windows makes users more likely to adopt Windows and vice versa. For
web-based SPs, the user side of the equation is solved by definition: any user with any device running a web browser is in principle able to access the
applications built on top of the SP. In this context, whether a web-based SP is one-sided or two-sided depends essentially on whether its provider owns
another asset or online property whose value to its users increases as more applications are built on the web-based SP. And there is a priori no reason
why web-based SP providers should possess such complementary assets.
3. Multisided Software Platforms' Business Models
IE analyzed three key strategic decisions faced by MSSPs: vertical scope (i.e., the extent of vertical integration into one or several of the multiple
sides); pricing structures (i.e., which pricing instruments to use and on which side or sides to seek profits); and bundling (i.e., which features and
functionalities to include in the platform itself). While little has changed in the drivers and manifestations of bundling by software platforms, there have
been some interesting recent developments regarding vertical scope and pricing structure decisions by MSSPs. I shall also discuss an additional
strategic decision: platform governance, that is, the controls and rules put into place by some software platforms to “regulate” the access and
interactions of the multiple sides they serve.6 While this decision was touched upon briefly in IE (chapter 10), it has become much more prominent in
recent years, particularly in the context of new, web-based, and mobile-based software platforms such as Facebook, iPhone, and Android.
3.1. Integration and Scope
The three “core” potential sides for any SP are unchanged from the ones described in IE (chapter 9): end-users, third-party application or content
developers, and third-party hardware and peripheral equipment manufacturers.7 Some Web-based SPs have an additional side—advertisers.
3.1.1. Two-Sided versus Three-Sided Software Platforms
One of the most fascinating forms of platform competition is that between software platforms with different degrees of vertical integration into hardware.
The best known (described in IE chapter 4) is Microsoft versus Apple in PCs. Its most interesting counterpart today is Apple (iPod, iPad, iPhone)
versus Google (Android) (p. 65) in smartphones and tablets. Just like in PCs, Apple runs a two-sided SP with the iPhone (users and third-party app
developers) and makes the bulk of its profits on the user side through its high margins on iPhone sales. Google runs Android as a three-sided SP (users,
handset makers, and third-party app developers), very similar to Windows in PCs, except that Android is open and free for handset makers to use.
Whereas the two SP battles look quite similar (and share one protagonist, Apple), the outcomes will likely be quite different. Windows won the PC battle
thanks to the superior strength of its indirect network effects: having original equipment manufacturers (OEMs) as a third side to its platform made a big
difference and helped tip the PC market to Windows. Today, the market for smartphone operating systems is quite fragmented: out of 80.5 million
smartphones sold in the third quarter of 2010, 37 percent were running on Symbian OS, 26 percent on Android, 17 percent were iPhones, 15 percent
were RIM's Blackberry devices, and 3 percent were based on Windows Mobile, the balance coming from Linux and other OSs.8 The focus on Android
and the iPhone is deliberate: Symbian, RIM, and Microsoft have been losing market share steadily (they stood at 45 percent, 21 percent, and 8 percent,
respectively, in the third quarter of 2009), to the benefit of Android and the iPhone (Android only had a 3.5 percent share in the third quarter of 2009,
while the iPhone was already at 17 percent). It is unlikely that the current state of fragmentation in the smartphone OS market will continue, since it
creates significant inefficiencies for third-party application developers. But this market is also unlikely to tip to one dominant software platform in the
same way the PC market did, for three main reasons. First, third-party software applications seem to add lower value to smartphones relative to PCs.
Second and related, there is significantly more scope for differentiation among smartphones than among PCs based on functionality, hardware
aesthetics, and the overall quality of the user experience. Third, the other key players in the smartphone ecosystem (mobile operators and handset
manufacturers) are unlikely to allow one SP to dominate the smartphone market in the same way that Windows dominates the PC industry.
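A small tabulation of only the share figures quoted in this paragraph (third quarter of 2009 versus third quarter of 2010) makes the shifts behind these observations explicit:

```python
# Smartphone OS shares of units sold, as quoted in the text (percent).
q3_2009 = {"Symbian": 45.0, "RIM": 21.0, "iPhone": 17.0, "Windows Mobile": 8.0, "Android": 3.5}
q3_2010 = {"Symbian": 37.0, "RIM": 15.0, "iPhone": 17.0, "Windows Mobile": 3.0, "Android": 26.0}

for os_name in q3_2009:
    change = q3_2010[os_name] - q3_2009[os_name]
    print(f"{os_name}: {q3_2009[os_name]:.1f}% -> {q3_2010[os_name]:.1f}% ({change:+.1f} points)")
```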
Both Apple's two-sided model and Google's completely horizontal, three-sided model have strengths and weaknesses. The Apple model with its vertical
integration into hardware tends to produce better devices overall given that one company can design and coordinate the interdependencies of all
necessary components. Google's horizontal model, on the other hand, allows for much faster growth and proliferation of the software platform by
leveraging the efforts of multiple hardware makers. Some of the respective weaknesses are illustrated by the experience of the other major
smartphone SPs. RIM has largely used the Apple model, having kept complete control over the hardware. It was a pioneer of the smartphone market
with its Blackberry line of smartphones, which became a hit with business users. But unlike Apple, RIM was slow in realizing the value of turning its
devices into an attractive platform for third-party application developers. The various Blackberry devices were well-designed as standalone products,
but they did not share a common software platform, which made them collectively less (p. 66) appealing for developers. There are currently only
about 10,000 apps available for Blackberries, compared to over 250,000 for the iPhone and 75,000 for Android. And as a result, during the past year,
RIM has been losing both market share and stock market valuation, mostly as a result of heightened competitive pressure from Apple and Google
devices.9 At the other end of the spectrum, Microsoft has struggled mightily with its Windows Mobile smartphone OS, despite using essentially the same
three-sided strategy that made Windows successful in PCs. Part of the problem seems to have been Microsoft's failure to either understand or
acknowledge the key differences between PCs and smartphones: its Windows Mobile iterations were constantly criticized for being too bulky and slow
for mobile phones. Finally, although Symbian was the world's first smartphone OS and is still the leader in market share, it is rapidly losing ground, as
mentioned earlier. After more than seven years on the market, Symbian only managed to attract 5,000 applications: by contrast, it took Apple one year
to reach 10,000 applications for its iPhone and 3 years to reach 200,000 (cf. Hagiu and Yoffie, 2009). Symbian used a model that can be best
described as "semi-vertical integration," which turned out to be a major handicap. It was majority-owned by Nokia but attempted to become a three-sided SP: this did not succeed because other handset makers were reluctant to fully commit to a SP controlled by their largest competitor. In turn, Nokia
wasted time and resources trying to persuade potential licensees that it did not fully control Symbian, instead of fully embracing the Apple vertical
integration model and optimizing its phones for Symbian (cf. Hagiu and Yoffie, 2009).
Consequently, it is impossible to predict, solely based on the choice of integration scope, whether Apple's or Google's model will be more successful in
the smartphone OS market. Unlike PCs, however, it is quite possible that both might thrive and coexist.
3.1.2. App Stores: Haves and Have-Nots
A second prominent element of SP vertical scope nowadays is the provision of a marketplace, or “app store.” App stores such as Android Marketplace,
Apple's iTunes, Sony's PlayStation Store, and others enable end-users of a given SP to find, purchase, and download content and applications for that
SP. At the time of IE's publication, there were few SPs that provided their own app stores: some mobile carriers did so through their own portals, the
most successful of which was NTT DoCoMo's i-mode in Japan; some digital media SPs, led by Apple's iTunes and RealNetworks’ Real Music Store; and,
to some extent, Palm in the PDA market (although Palm had an app store, most applications for Palm OS-powered devices were purchased and
downloaded through a third-party intermediary called Handango—cf. Boudreau, 2008).
Today, app stores have become widespread. By far the largest number is found in the mobile sector, and they are no longer the exclusive preserve of
mobile operators. Instead, the largest and most prominent ones are provided by SP vendors and handset makers: Apple's iPhone App Store, Google's
Android Marketplace, (p. 67) RIM's BlackBerry App World, Nokia's Ovi Store (associated with its Symbian S60 software platform), and Microsoft's
Windows Mobile Marketplace. And app stores are also proliferating in other computer-based industries, in which they were absent only five years ago.
For example, the top three videogame console makers all have launched app stores for the first time during the latest console generation (which
started in late 2005): Nintendo's Wii Shop Channel, Microsoft's Xbox Live Marketplace, and Sony's PlayStation Store. Users access these stores directly
from their consoles in order to purchase and download games, as well as other digital content, most notably movies. Another example is Samsung
recently launching an app store for its Internet-connected TV sets, which runs its Samsung Apps software platform.10 Finally, it is noteworthy that some
prominent SPs (e.g., Facebook and LinkedIn) have chosen to dispense with app stores, at least for now.
What drives SPs’ decisions whether to vertically integrate into the provision of app stores or not? One would expect SPs whose business models
depend on tight controls over the format and nature of third-party applications to provide their own app stores. Clearly, that is the case of Apple with its
iTunes and iPhone App Store. It is indeed important to recall that Apple's profits come disproportionately from its hardware sales (iPods and iPhones). By
exerting tight and exclusive control over content distribution for those devices, Apple is able to further drive up their value relative to competing
devices and increase users’ switching costs—for example, by insisting on very low and uniform pricing for music, by imposing proprietary formats for
digital content (music and applications) that make such content work only on Apple devices.11 The same applies to videogame consoles.12 Game console-specific
app stores have only appeared during the latest generation for two simple reasons. First, in previous generations, videogame console content was
almost entirely limited to…videogames. By contrast, all major consoles today act as home computers capable of playing movies, surfing the web, and
organizing all kinds of digital content. And second, in previous console generations the vast majority of games were sold as shrink-wrapped software
on CDs or cartridges, distributed through brick-and-mortar channels. Today, a rapidly increasing share of console content is distributed through the
web, which required the provision of online distribution channels by the console manufacturers themselves.
In contrast, SPs whose business models rely on broad ecosystems of partners generally leave the provision of app stores to others. Thus, although
Google provides its own Android Marketplace (mostly because no one else provided an Android app store when Android was first launched), it also
allows its partner handset makers (e.g., HTC, Motorola) and mobile operators (e.g., Verizon) to build their own Android app stores. As this chapter is
being written, there are reports emerging that Amazon and BestBuy are also planning to launch Android app stores.13 Although for Apple control over
content distribution (app stores) is important in order to differentiate its devices, for Google, “outsourcing” content distribution is essential in order to
allow its partners to differentiate their competing offerings from one another. If Google were to insist on being the sole provider of an Android app store,
that would make it significantly harder for two competing (p. 68) handset makers or two competing mobile operators to differentiate their respective
Android-based devices or services from one another and, in turn, would make them less likely to adopt the Android SP.
3.2. Governance
As defined in Boudreau and Hagiu (2009), multisided platform (MSP) "governance rules" refer to the non-price rules and restrictions that MSP
providers put in place in order to “regulate” access to and interactions on their platforms. The only example of tight governance rules discussed in IE
was that of videogame consoles, which continue to use security chips in order to restrict the entry of game developers. IE also noted that NTT
DoCoMo had imposed some mild governance rules around its i-mode MSSP: it does not exclude anyone, but distinguishes between "official" and "non-official" content providers.
During the past four years, the choice between tight versus loose governance rules has emerged as a key strategic decision for MSPs in general and
MSSPs in particular. And there is significant variation in the governance rules chosen by different MSSPs (cf. Boudreau and Hagiu, 2009). At one end of
the spectrum, Apple places tight controls and restrictions over its iPhone app store: every single application must be submitted to Apple for screening,
and developers can only use Apple-approved technical formats.14 Similarly, LinkedIn is very selective in its approval process for third-party
applications built for its social network. At the other extreme, Google approves almost any application written for Android, while Facebook places no
restrictions whatsoever on the apps built on its Platform.
At the most basic level, one can think of the choice between tight versus loose governance rules as reflecting a choice between quality versus
quantity. More precisely, there are three fundamental ways in which tight governance rules can create value for MSPs (cf. Hagiu, 2009). First, they may
help avert “lemons market failures” when there is imperfect information about the quality of at least one side. Second, limiting the entry of and
therefore competition among the members of one side of the market can help enhance their innovation incentives by assuring them of the ability to
extract higher rents. And third, tight governance rules can be used as a way to incentivize the various sides to take actions or make investments
which have positive spillovers on other platform constituents.
When it comes to MSSPs, these three considerations may be compounded by strategic ones. For instance, Apple claims that its restrictions on iPhone
applications are designed to ensure high-quality applications and weed out the low-quality ones, which is all the more important given the large
numbers of applications available and the difficulty consumers might have in telling which ones are good before using them (this story is consistent with
the first fundamental driver mentioned above). But industry observers and Apple critics point out that an equally important driver of the restrictions
might be Apple's desire to favor Apple's own (p. 69) technologies over competing ones (e.g., Apple's iAds mobile advertising service over Google's
AdMob15).
Furthermore, MSSPs (and MSPs generally) do not choose governance rules independently from their other strategic decisions (e.g., pricing structures).
There are often significant interdependencies that need to be taken into account. For example, one cannot explain the entry restrictions that
videogame consoles continue to place on game developers solely based on the “lemons market failure” argument. Indeed, while weeding out poor
quality games may have been the primary reason for which Nintendo created these restrictions in the late 1980s, that can hardly be a concern for
console manufacturers today. There are hundreds of specialized magazines and websites that review games, making it highly unlikely that poor quality
games would crowd out the good-quality ones. Instead, console manufacturers' decision to keep limiting the number of game developers is
closely linked to the way they make money. As extensively documented in IE, profits in the console video-gaming business come from games (sales of
first-party games and royalties charged to third-party developers), while consoles are sold at a loss. Free entry by developers for a given console
would drive game prices and therefore console profits down (cf. Hagiu and Halaburda, 2009).
3.3. Pricing Structures
The fundamental drivers of pricing structure choices by MSPs in general and MSSPs in particular are analyzed in depth in IE (chapter 10) and remain
valid. In particular, IE documented that all software platforms with the exception of video-game consoles made the bulk of their profits on the user side
and charged nothing to third-party application or content developers.16 In contrast, videogame consoles are sold below cost to users and their
manufacturers make their profits from royalties charged to third-party game developers. The explanation of these two contrasting pricing structures
relies on three key elements (cf. Hagiu and Halaburda, 2009). First, the “razor-blades” structure of consumer demand for videogames does not hold
for other software platforms, such as Windows: the number of PC applications a consumer buys is poorly correlated with his or her value for the overall
system. Second, as mentioned above, restricting the entry of game developers results in higher quality games, as well as less intense competition
between games on the same console. In turn, this allows console makers to extract more value from game developers. Third, the prevailing pricing
structure may to some extent reflect path-dependence, that is, the difficulty for later entrants to reverse the pricing structure that was first established
by Nintendo in the late 1980s.
There are two notable ways in which MSSP pricing structures have evolved since IE. First, more and more MSSPs charge some form of royalty to the
third-party application or content developer side. All providers of SPs for smartphones (Apple, Google, Microsoft, RIM, Symbian) charge their respective
app developers a 30 percent cut of their revenues (of course, that amounts to 0 for free apps). (p. 70) Facebook does not charge its third-party app
developers for making their products available on its Platform, but in the summer of 2010 it started to levy a 30 percent charge on application
developers who decide to accept user payments in Credits, a virtual currency introduced by Facebook for its users. Whether the share of profits
derived by these SPs from the developer side relative to the user side becomes substantial remains to be seen. For instance, analysts estimated
Apple's 2009 profits coming from iPhone handset sales at roughly $6 billion,17 while a January 2010 estimate of Apple's monthly revenues from the
iPhone app store stood at $75 million,18 that is, $900 million on a yearly basis. Even without considering the costs of running the app store, that is a
ratio of less than 1 to 6 between profits coming from the developer side (through the app store) and profits coming from the user side (through iPhone
sales).
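As a back-of-the-envelope check on the "less than 1 to 6" comparison, using only the estimates quoted above (2009 handset profits of roughly $6 billion and app store revenues of roughly $75 million per month):

```python
# Figures are the estimates cited in the text, in US dollars.
handset_profits_2009 = 6_000_000_000        # ~2009 profits from iPhone handset sales
app_store_monthly_revenue = 75_000_000      # ~Jan 2010 monthly iPhone app store revenue

app_store_annual_revenue = 12 * app_store_monthly_revenue   # ~$900 million per year
ratio = app_store_annual_revenue / handset_profits_2009

print(f"annual app store revenue: ${app_store_annual_revenue:,}")    # $900,000,000
print(f"ratio to handset profits: {ratio:.2f} (below 1/6 = 0.17)")   # 0.15, before app store costs
```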
Second, many “modern” MSSPs have advertisers as a third or fourth side to their platform and derive substantial revenues from this side. This is to a
large extent related to the fact that the majority of the new MSSPs are web-based software platforms. By contrast, advertisers were rarely present on
the “classic” MSSPs studied in IE (e.g., PC and PDA operating systems, mobile operator platforms such as i-mode, videogame consoles). The value
created for advertisers is quite clear in the case of Facebook: access to the eyeballs of 500 million users, who spend an average of 40 minutes per
day on the site.19 It is less clear in the case of a SP like Android, which does not have advertisers as one of its sides—at least not directly. But while
advertisers are not directly tied to Android, they are a major customer on Google's other MSP—the search engine—and its major source of revenues.
And there are powerful complementarities between Android and Google's search engine: Android-based phones make Google's search engine the
default and therefore help drive more user traffic to it, which in turn increases the advertising revenues for the search platform. Furthermore, Google
views Android as a strategic weapon designed to preempt the dominance of proprietary software platforms such as Apple's iPhone, RIM's Blackberry,
and Microsoft's Windows Mobile—these alternative software platforms may indeed choose to divert mobile search revenues to their respective online
properties and away from Google.
4. Multi-Layered Platform Competition
One of the novel aspects of competitive dynamics between software platforms today is that they create more and more competition between firms
across different layers of the relevant “technology stacks”—as opposed to competition within the same layer. This is because most technology
companies recognize that controlling a (multisided) software platform can enable them to extract larger rents from the respective ecosystems in which
they play. Indeed, the ecosystems of firms producing computer-based devices and corresponding content and applications create (p. 71) significant
economic value for end-users as a whole. At the same time, however, there is intense competition among firms producing the various layers of those
systems (chips, hardware, operating system, connectivity service, content, and applications) to extract a larger fraction of that value as profits for
themselves.20 In this context, SPs create economic value for all ecosystem participants by reducing the costs of building applications and content for
the relevant devices. But the very features that make SPs valuable to end-users and other ecosystem players—namely economies of scale or network
effects or both—also endow SPs with significant market power. Over time, successful SPs—particularly MSSPs—create both technological and
economic lock-in and end up obtaining tremendous control and influence over all the other layers in the ecosystem.
Of course, this type of competitive dynamic is present to some extent in any industry whose products are made of complementary components
supplied by different players. But it has become particularly salient in computer-based industries, as the relevant “stacks” have grown increasingly
complex and multilayered. The stakes are also higher for software platforms given the tremendous potential value of the indirect network effects
created between end-users, application developers, and, where relevant, advertisers.
There are essentially two mechanisms through which SP competition across layers arises: (1) companies operating in layers other than the operating
system (e.g., hardware, chips, service, content) attempting to provide their own version of the operating system layer; (2) companies attempting to
build SPs at layers above or below the operating system.
The most common manifestation of the first mechanism is hardware or microprocessor manufacturers deciding to build their own operating systems in
order to avoid commoditization and hold-up at the hands of a third-party OS provider. Indeed, the latter scenario prevailed in the PC industry, where
Microsoft's Windows MSSP extracted most of the value, leaving very little to OEMs (cf. Yoffie et al., 2004). The desire to prevent a repeat of this
scenario in the mobile phone market was the primary driver behind the 1998 founding of Symbian by Nokia and several other prominent handset
makers.21 In a similar and more recent move, Samsung decided to create its own Bada operating system for use in its smartphones in 200922 and then,
in 2010, extended it into an operating system and corresponding app store for its phones and high-definition television sets, called Samsung Apps.23
Intel itself, Microsoft's closest partner in the PC industry, has recently entered the operating system game, first by releasing its own Moblin OS for
netbooks in 200924 and then by merging it into MeeGo OS, a joint effort with Nokia, intended to run on netbooks, smartphones, and other computer-based devices.25 Of course, Intel's microprocessors are in much less danger of commoditization than hardware produced by PC and mobile handset
OEMs. Nevertheless, despite its dominance of the microprocessor market for PCs, Intel's fate has oftentimes been too dependent on Microsoft, which
has generally had the upper hand in the PC partnership and has been able to extract a larger share of overall profits.26 This explains Intel's efforts to
play a significant role in the operating system layer—in PCs as well as new markets such as netbooks and smartphones.
(p. 72) Although both mechanisms of across-layer SP competition have become particularly salient during the past four years, the second one is on
the verge of inducing significantly more profound changes in the structure of computer-based industries. This is why it constitutes the focus of this
section. Indeed, the emergence of new software platforms above (cloud-based platforms) and below (virtualization) the operating system is
increasingly challenging its traditional status as the key MSSP in computer-based industries. As these alternative software platforms expand the range
of application programming interfaces (APIs) they are able to provide to application developers, they diminish the economic value that is created and
appropriated by the OS.
4.1. Cloud-Based Software Platforms
The tremendous potential of web-based platforms was largely anticipated in chapter 12 in IE through two case studies—eBay and Google. But while IE
emphasized the disruptive effects of web-based software platforms on traditional industries such as retailing and newspapers (the disruption continues
unabated), the focus here is on the way in which these SPs are diverting attention and value away from operating systems.
To some extent, the advent of cloud-based SPs is the second-coming of the middleware visions associated with Netscape's Navigator browser and
Sun's Java software platform in the late 1990s. Both Netscape Navigator and Java had been perceived by some, at the time, as serious threats to the
lock held by Microsoft's Windows SP on third-party developers of PC applications and users.27 The vision was that they would become ubiquitous SPs,
sitting on top of any OS and exposing their own APIs, so that third-party application developers would no longer have to worry about the underlying OS
and hardware: they could write an application once and it would run “everywhere” (that is, on any computing environment supported by Netscape
and/or Java). In other words, had they been successful, Netscape and Java would have significantly weakened the indirect network effects surrounding
the Windows operating system. That vision did not quite materialize. Netscape's browser was completely marginalized by Microsoft's Internet Explorer
during the first “browser wars” of the late 1990s (cf. Yoffie (2001)). Java did succeed in attracting a significant following of developers of “applets,” that
is, small, web-based applications, but never seriously challenged the Windows SP for more sophisticated PC applications. Indeed, given the state of PC
computing power and Internet connection speeds at the time, the scope for web-based applications was quite limited relative to applications running
directly on the OS.
Today, more than ten years later, things have radically changed. The uninterrupted progression of Moore's Law (the number of transistors which can
be placed on an integrated circuit doubles roughly every two years) and the ubiquity of fast broadband connectivity have largely eliminated the
limitations imposed on web-based (or cloud-based) applications. The last four years in particular have (p. 73) witnessed an explosion in the variety of
computer-based, Internet-enabled devices beyond PCs, such as smartphones, netbooks and tablets. This in turn has generated increasing user
demand for accessing the same applications and content across all of these devices. These developments have brought cloud computing and cloud
applications into the mainstream.
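To give a rough sense of the doubling rule just cited, the arithmetic below is a back-of-the-envelope illustration (the choice of years and the two-year doubling period are assumptions for illustration only, not figures from this chapter):

```python
# Back-of-the-envelope Moore's Law arithmetic; all figures are illustrative assumptions.
years = 2011 - 1999          # roughly from the Netscape/Java middleware era to the time of writing
doubling_period = 2          # "doubles roughly every two years"
growth_factor = 2 ** (years / doubling_period)
print(growth_factor)         # 64.0 -> on the order of a 64-fold increase in transistor counts
```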
Put simply, “cloud computing” refers to a model of delivery of software applications and content in which they are hosted remotely by their providers
on their own servers and accessed by users through an Internet connection and a browser (as opposed to residing on each individual user's device).
Cloud applications range from consumer applications such as social networking (e.g., Facebook), online games (e.g., World of Warcraft), search (e.g.,
Bing), office productivity (e.g., Google docs) and personal finance (e.g., Mint), to enterprise applications such as customer relationship management
(e.g., Salesforce's CRM).
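To make the delivery model concrete, the following is a minimal sketch of a "cloud application" in the sense just defined: the code and data live on the provider's server, and the user needs only a browser and an Internet connection. Flask and the route shown are illustrative assumptions; none of the services named above necessarily work this way internally.

```python
# A minimal, illustrative "cloud application": the logic runs on the provider's
# server; the user reaches it through a browser rather than installing software.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    # In a real service this handler would read the user's data from the
    # provider's own storage rather than from the user's device.
    return "Hello from the provider's server - nothing is installed locally."

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # exposed over the network, accessed via a browser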
The development of cloud-based applications has naturally been accompanied by the development of cloud-based software platforms, which are a
modern and more sophisticated version of the middleware software platforms of the late 1990s. Some of the most prominent cloud-based SPs that have
emerged during the past four years are Amazon's Web Services, Facebook's Platform, Google's App Engine and various collections of APIs,
and Salesforce's Force.com.
These four software platforms illustrate the wide variance in the nature and scope of services cloud-based SPs offer to third-party developers. At one
end of the spectrum, the core of Amazon's Web Services (AWS) consists of “infrastructure services” such as data storage and computing power.28
AWS also include a few APIs, mostly targeted at e-commerce applications (e.g., an API that enables websites affiliated with Amazon.com to easily
draw and publish information about Amazon products on their own online properties). Still, the two most widely used AWS are Amazon Elastic Compute
Cloud (EC2) and Amazon Simple Storage Service (S3) (Huckman et al. (2008)). These two services allow third-party developers to rent computing
power and storage capacity on Amazon's servers, while paying based on usage. Google's App Engine is similar to and competes with AWS: it offers a
similar range of infrastructure services, although with a stronger focus on web applications.29 But App Engine is surrounded by a broader set of APIs
which Google has released over the years (e.g., Maps API, AdSense API, AdWords API). Thus, while both App Engine and AWS are considered to be
“Platforms as a Service” (PaaS), App Engine is closer in nature to a true SP, that is, an operating system for web applications. At the other end of the
spectrum, Facebook's Platform is purely a SP. Launched in 2007, it consists exclusively of APIs and programming tools allowing third-party developers
to build applications for the social network (Eisenmann et al. (2008)). It does not offer any infrastructure services similar to AWS or App Engine.
Salesforce's Force.com is somewhere in-between. It is a cloud-based platform providing both APIs and hosting services to third-party developers of
cloud-based business applications.
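As a concrete, hedged illustration of the "infrastructure services" end of this spectrum, the sketch below uses boto3, Amazon's current Python SDK (which post-dates the period discussed here); the bucket name, object key, and AMI identifier are placeholders. It shows the pay-per-use pattern described above: a developer rents storage (S3) and computing power (EC2) through API calls rather than owning servers.

```python
# Illustrative sketch of renting storage and compute from AWS via API calls.
# Requires AWS credentials; the bucket name and AMI ID below are placeholders.
import boto3

s3 = boto3.client("s3")
s3.create_bucket(Bucket="example-app-data")                 # rent storage (S3)
s3.put_object(Bucket="example-app-data",
              Key="logs/day1.txt",
              Body=b"application data stored on Amazon's servers")

ec2 = boto3.client("ec2")
ec2.run_instances(ImageId="ami-00000000",                   # placeholder machine image
                  InstanceType="t2.micro",                  # rent computing power (EC2)
                  MinCount=1, MaxCount=1)
# Billing is based on usage (storage consumed, instance hours), not on owning hardware.
```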
Second, as mentioned earlier, cloud-based SPs are not necessarily multisided, even when adopted by third-party application developers. Facebook's
Platform is (p. 74) clearly multisided.30 The applications built on it are intended for and only available to Facebook users.31 The more such
applications become available, the more valuable Facebook's network is for users and vice versa—the attractiveness of Facebook's Platform to
developers is directly increasing in the number of Facebook users. On the other hand, Amazon's Web Services are mostly one-sided: the large
majority of the third-party developers who use them are not affiliated with Amazon's e-commerce platform. Their adoption and usage of AWS does not
increase the value of Amazon.com to its users or merchants. The only exception is Amazon's e-commerce API, which can be used to build
functionalities for Amazon-affiliated merchants. But it is also used by a large number of independent e-commerce sites, which have no relationship to
Amazon.com. Google's App Engine is similar to Amazon's Web Services, in that it targets generic websites and application developers, which a priori
do not generate indirect network effects around Google's search engine. But some of Google's APIs (e.g., AdWords) clearly enhance indirect network
effects around its search engine.
But if cloud-based SPs can be one-sided, then how does the other side—end-users—get access to the applications built on top of these SPs? The short
answer is: through web browsers. It should not be surprising then that the coming of age of cloud computing during the past four years coincides with
the advent of the second Browser Wars.
The First Browser Wars—between Microsoft and Netscape—ended around the year 2000, when Internet Explorer (IE) had achieved more than 90
percent market share and Navigator had all but disappeared. After several years of almost uncontested dominance, IE found itself seriously challenged
for the first time by Mozilla's Firefox. Launched in November 2004, Firefox reached over 10 percent market share within a year and 23 percent by the
summer of 2010.32 Until 2008, Firefox accounted for most of Internet Explorer's decline; IE's share fell from more than 85 percent of the market in 2004 to less than 60 percent in February 2011. But the new Browser Wars truly heated up with the launch of Google's Chrome browser in
September 2008. Within 3 years, Chrome reached more than 15 percent market share. Around the same time, Apple's Safari browser also started to
pick up serious steam, largely due to the popularity of the iPhone: it passed the 10 percent market share mark for the first time in its existence
sometime during 2011.33 And Microsoft itself has intensified its efforts with IE: a significantly improved IE 9 was launched in March 2011.34
The key factor driving the Second Browser Wars is the crucial status of browsers as users’ gateway to web-based applications. Although browsers
have to rely on open web standards (which by definition are available to and on any browser), they can still exert significant influence on how these
applications are being used and ultimately monetized. For companies like Apple, Google and Microsoft, browsers can be leveraged as indirect strategic
weapons to drive up the value of other, complementary assets.35 Take Google for instance. The first and most immediate benefit of its Chrome browser
is that it can be used to increase traffic to Google's core asset, the search engine, by making it the default home page (many users rarely (p. 75)
modify the default). It also enables Google to obtain more information on user browsing behavior, which allows it to further improve its mechanism for
serving search-related advertising. Finally, provided Chrome achieves significant market share, it enables Google to influence the evolution of web-based APIs, which can be used to increase the value of Google's cloud-based SPs (App Engine and its collections of APIs). In this context, it is not
surprising that Google took the logical next step in 2009, by announcing a Chrome operating system based on the Chrome browser and the other four
sets of Google APIs.36 Chrome OS will exclusively support web applications, that is, applications that are accessible through the Internet and are not
tied to a specific device.37
Regardless of whether they use one-sided or multisided business models and regardless of their scope and nature (that is, whether they offer software
building blocks only or also include infrastructure services), cloud-based SPs are drawing an increasing number of developers. Their expanding scope
and functionalities make it more attractive (from a business standpoint) to write web-based applications as opposed to operating system-based
applications.
Of course, this transition is unlikely to go all the way to its logical extreme, in which all applications would be written for cloud-based SPs and would no
longer be tied to a specific operating system's APIs. Indeed, while cloud-based applications have many economic advantages (consistent and easy
use through many devices; easy installation and upgrades), there are some important tradeoffs. Users (particularly corporate) may worry about
security and having control over their data, which implies a preference for having at least parts of the applications reside on their local computers or
other devices. Furthermore, the cross-device and cross-operating system nature of cloud applications implies that lowest-common denominator
compromises must sometimes be made in building functionalities. This can result in disadvantages relative to applications which reside on local
devices and can therefore be optimized for specific operating systems or hardware. Finally, in the enterprise software market, there is a large
ecosystem of integration and services providers, which have vested financial incentives to resist the shift toward purely cloud-based applications—the
latter would reduce the need for the existence of these companies.
Consequently, although the “write once, run everywhere” promise of cloud-based SPs is very attractive for many application developers, there is a
significant proportion of applications (particularly for enterprise customers) which need to rely on operating system APIs, so that the OS's power as a SP
is unlikely to be entirely supplanted by cloud-based SPs. The rate of adoption of the four cloud-based platforms mentioned above has been staggering:
Force.com has over 1,000 applications;38 more than 300,000 developers are active on App Engine and AWS;39 and Facebook Platform supports over
550,000 applications.40 But these numbers should not be taken at face value. The majority of applications built on these SPs are still small-scale web
applications; furthermore, Facebook's applications have oftentimes been characterized as useless time-wasters.41 Despite all these limitations
however, the trend is undeniable: the battle for creating innovative applications—along with (p. 76) the value of the corresponding network effects—
is moving away from the operating system and onto the web.
4.2. Virtualization
(Note: This subsection is primarily based on Yoffie, Hagiu, and Slind, 2009.) While cloud-based (multisided) software platforms tend to move economic
value upward from the operating system, virtualization arguably moves value in the opposite direction.
Up until the early 2000s, it was hard if not impossible to imagine a world in which the operating system would not be the first software layer on top of a
computer device's microprocessor, or in which the relationship between the microprocessor and the operating system would not be one-to-one.
Virtualization has challenged both of these assumptions.
Virtualization is a software technology that was first developed in the 1960s by IBM in order to improve the utilization of large mainframe computers. It
did so by partitioning the underlying mainframe hardware and making it available as separate “virtual machines,” to be utilized simultaneously by
different tasks. The modern version of virtualization was invented and commercialized by VMware Inc. in the early 2000s and applied to the x86 family
of servers and personal computers. In its most popular incarnation (called hypervisor or “bare-metal” approach), the virtualization “layer” sits
between the microprocessor and the operating system. As illustrated in Figure 3.1, it creates replicas of the underlying hardware, which can then
be used as virtual machines running independently of one another and with separate operating systems.
Figure 3.1 Virtualization Technology in the Hypervisor Mode.
The most immediate benefit of virtualization was to enable enterprises to consolidate underutilized servers, thereby achieving large technology cost
reductions. Today, there are three additional components to the economic value created by virtualization. First, the virtual machines are isolated from
one another, which is highly desirable for reasons related to security (one virtual machine crashing has no effect on the others) and flexibility (the
virtual machines can be handled independently). Second, the virtual machines encapsulate entire systems (hardware configuration, operating system,
applications) in files, which means that they can be easily transferred and transported, just like any other files (e.g., on USB keys). (p. 77) Third, the
virtual machines are independent of the underlying physical hardware, so they can be moved around different servers. This is particularly useful when
one physical server crashes or when one wishes to save energy by shutting down some servers at off-peak times.
The tremendous value of virtualization is reflected in the quick rise of VMware, the market leader. VMware was founded in 1998 by five Stanford and
Berkeley engineers and during the next 10 years became one of the hottest companies in the technology sector. After a $20 million funding round in
May of 2000, the company drew acquisition interest from Microsoft in 2002. But Microsoft eventually backed off in the face of what it perceived to be an
excessive asking price. VMware was then acquired in 2003 for $635 million by the storage system giant EMC. In 2007, EMC sold 10 percent of VMware
shares in an IPO, which earned it $957 million at the initial offering price of $29 a share. As of October 2010, VMware's share price was around $80,
implying a market capitalization of $33 billion. While a significant portion of this value reflects the large cost savings enabled by virtualization, a key
reason for the hype surrounding VMware is the disruptive potential of its software platform for the established order in the server and PC software
industries.
The basic virtualization technology—the hypervisor—is a SP because it exposes APIs which can be utilized by applications running on top of it. It is in
fact a two-sided software platform: there are hundreds of independent software vendors developing applications for VMware's vSphere hypervisor
(VMware also provides many applications of its own, just like Microsoft does for Windows). Examples of virtualization applications are those that enable
corporate IT departments to move virtual machines across physical servers without interruption (the corresponding application from VMware is called
VMotion), to test and deploy new services, and so forth.
There are three important ways in which virtualization threatens to undermine the power held by the operating system (Windows in particular) over the
other layers in the server and PC stacks. First, by breaking the one-to-one relationship between hardware and operating system, virtualization makes it
much easier for individual users and companies to use multiple OSs. In other words, thanks to virtualization, the user side of the operating system MSSP
becomes a multihoming side instead of a single-homing side. Thus, while virtualization may not necessarily decrease the absolute number of Windows
licenses sold, it does increase the market penetration of alternative operating systems (e.g., Linux for enterprises, Mac OS for consumers). In turn, this
increases application developers’ incentives to (also) develop for those other operating systems: if a given user does not have the required operating
system for running the developer's application to begin with, she can use virtualization to install it alongside Windows, without having to buy a new
machine. Overall, this mechanism significantly weakens the famous “applications barrier to entry” enjoyed by Windows.
Second, because it is a software platform sitting at the same level or below the OS, virtualization inevitably takes away some of the OS's value by
offering APIs that otherwise might have been offered by the OS. While this effect is limited today given that the applications built on top of a hypervisor
are quite different in nature (p. 78) from the ones built on top of an OS, it will likely grow in importance as virtualization technology (VMware in
particular) expands its scope.
Third, virtualization enables the creation of “virtual appliances,” that is, applications that come bundled with their own, specialized OS. Instead of
building on top of large, all-purpose OSs like Windows, a developer can match her application with a stripped-down OS that she chooses according to
criteria including security, cost, and programming ease. She can then distribute the OS-application bundle, knowing that it will run on any computing
system equipped with the virtualization platform. It is clear then that, if widely adopted, the virtual appliance model would have the effect of
commoditizing the OS. By the end of 2009, VMware's ISV partners had brought more than 1,300 virtual appliances to market.42
These developments have of course not been lost on Microsoft. In response to the threats posed by VMware's virtualization technology, Microsoft
launched its own virtualization platform in 2008. Called Hyper-V, it was bundled with the Windows Server OS, thus being effectively priced at 0 and
benefitting from the powerful Windows distribution channels. But the fact that Microsoft designed Hyper-V to work closely with Windows can also be
perceived as a weakness relative to VMware's OS-agnostic approach.
The battle between the two companies will be a fascinating one to follow over the next few years. To some extent, it is reminiscent of the browser wars
between Microsoft and Netscape during the late 1990s. Both involve highly innovative companies with superior technologies, which pose disruptive
threats to Microsoft's Windows software platform. Indeed, VMware today has a significant technical and first-mover advantage, which it is seeking to
maintain by building high-value applications on top of its virtualization platform.43 These applications would increase the switching costs for the
customers who are already using VMware technology and make them less likely to switch to Microsoft's competing Hyper-V software platform. It is the
nature of those customers and of the technology involved that makes the comparison to the browser wars somewhat limited. Large corporations (who
make up the large majority of VMware's customers) adopting a virtualization platform for their data centers are much less easily swayed than
consumers choosing browsers.
But regardless of the duration and the outcome of the virtualization platform wars, what is certain is that the appearance of virtualization as a new SP is
irreversibly changing the way in which economic value is created and extracted in the relevant technology stacks.
5. Conclusion
In this chapter I have attempted to briefly survey some of the developments related to software platforms since the publication of IE in 2006, which I
believe are most interesting for economics and management researchers.
(p. 79) Whereas IE had emphasized the inherently multisided nature of SPs, the diverse nature of cloud-based SPs suggests that this view might be
too narrow. By definition, the user side of a web-based SP is always on board as long as it has access to an Internet-connected device and a browser. In
this context, multi-sidedness becomes a subtler issue: it essentially depends on whether the provider of the SP owns a related product or service,
whose value is increased by the applications built upon the SP. Furthermore, while multi-sidedness holds the potential of creating larger value through
indirect network effects, it might also create conflicts of interest with some of the members of the relevant ecosystem. This suggests that some software
platforms may face interesting strategic tradeoffs when choosing to function as one-sided or as multisided businesses.
Regarding SP business models, the battle between Microsoft's three-sided SP (Windows) and Apple's two-sided SP (the Macintosh computer) from the
PC market is currently being replicated by Google and Apple in the smartphone industry—quite likely with different results. Comparing and contrasting
these two battles should provide substantial research material for analyzing the impact of industry characteristics on the outcome of competitive
dynamics between platforms with different degrees of vertical integration. The evolution of SP pricing structures, with more and more SPs (aside from
videogame console makers) charging the application developer side, raises the intriguing question of whether we might be witnessing a shift toward
more balanced pricing models, in which both sides (developers and users) directly contribute substantial shares of platform profits. Governance rules
also seem to have emerged as an important strategic instrument for SPs. A promising avenue for future research would be to investigate the conditions
under which governance rules can provide a sustainable source of differentiation between otherwise similar SPs competing in the same industry.
Finally, the competitive dynamics between software platforms operating at different levels of the technology stack (e.g., web-based platforms,
operating systems, virtualization platforms) point to important research questions, which, to the best of my knowledge, have not been systematically
explored. In particular, is it possible to predict, solely based on exogenously given industry characteristics, the layer in the relevant “stack” at which
the most valuable and powerful (multisided) platform should arise? And if not, that is, if multiple outcomes are possible depending on the actions
chosen by players in different layers, can one characterize the general strategies that enable players in one layer to “commoditize” the other layers?
Acknowledgment
I am very grateful to my two co-authors on Invisible Engines—David Evans and Richard Schmalensee—for helpful discussions and feedback on an
early draft. The views expressed in this chapter (and any related misconceptions) are, however, entirely my own.
References
Boudreau, K., 2008. Too Many Complementors? London Business School, working paper.
Boudreau, K., Hagiu, A., 2009. Platform Rules: Multi-Sided Platforms as Regulators. In: Gawer, A. (Ed.), Platforms, Markets and Innovation, Edward Elgar,
Cheltenham and Northampton, pp. 163–191.
Casadesus-Masanell, R., Yoffie, D.B., 2007. Wintel: Cooperation and Conflict. Management Science 53, pp. 584–598.
Eisenmann, T., Piskorski, M.J., Feinstein, B., Chen, D., 2008. Facebook's Platforms. Harvard Business School Case No. 808–128.
Evans, D. S., Hagiu, A., Schmalensee, R., 2006. Invisible Engines: How Software Platforms Drive Innovation and Transform Industries. The MIT Press.
Hagiu, A., 2009. Note on Multi-Sided Platforms—Economic Foundations and Strategy. Harvard Business School Note No. 709–484.
Hagiu, A., Halaburda, H., 2009. Responding to the Wii. Harvard Business School Case No. 709–448 and Teaching Note No. 709–481.
Hagiu, A., Yoffie, D.B., 2009. What's Your Google Strategy? Harvard Business Review April Issue, 1–9.
Hagiu, A., Yoffie, D.B., Slind, M., 2007. Brightcove and the Future of Internet Television. Harvard Business School Case No. 707–457 and Teaching Note
No. 707–568.
Huckman, R. S., Pisano, G., Kind, L., 2008. Amazon Web Services. Harvard Business School Case No. 609–048.
Yoffie, D.B., 2001. The Browser Wars: 1994–1998. Harvard Business School Case No. 798–094.
Yoffie, D.B., 2003a. Wintel (B): From NSP to MMX. Harvard Business School Case No. 704–420.
Yoffie, D.B., 2003b. Wintel (C): From MMX to the Internet. Harvard Business School Case No. 704–421.
Yoffie, D.B., Casadesus-Masanell, R., Mattu, S., 2004. Wintel (A): Cooperation or Conflict? Harvard Business School Case No. 704–419.
Yoffie, D.B., Hagiu, A., Slind, M., 2009. VMware, Inc., 2008. Harvard Business School Case No. 709–435.
Notes:
(1.) See Goyal, S. “Social Networks on the Web,” chapter 16 in this book.
(2.) “Lexmark Tries to Catch App Fever,” Wall Street Journal, October 26th 2010.
(http://online.wsj.com/article/SB10001424052702303467004575574333014975588.html)
(3.) For related discussions in this book, see Lee, R. “Home Videogame Platforms,” chapter 4; Jullien, B. “Two-Sided B2B Platforms,” chapter 7; Choi,
J.P. “Bundling Information Goods,” chapter 11; Anderson, S. “Advertising on the Internet,” chapter 14.
(4.) Chapter 2 in IE provides a detailed technical and economic background on software platforms.
(5.) Today Facebook is a four-sided platform: it connects (1) users; (2) third-party application developers; (3) advertisers; (4) third-party websites that
can be browsed with one's Facebook identity via Facebook Connect.
(6.) The term “platform governance” was coined by Boudreau and Hagiu (2009).
(7.) This side includes hardware OEMs as well as suppliers of accessories such as jogging sleeves for iPods or iPhones.
(8.) http://www.gartner.com/it/page.jsp?id=1466313
(9.) “RIM's Blackberry: failure to communicate,” Businessweek, October 7th 2010.
(http://www.businessweek.com/magazine/content/10_42/b4199076785733.htm)
(10.) “Samsung tries to woo TV app developers,” CNET, August 31st 2010. (http://news.cnet.com/8301–31021_3–20015215-260.html#ixzz124u0TYXs)
(11.) For more on digital formats and piracy, see Belleflamme, P. and M. Peitz “Digital Piracy: Theory,” chapter 18 and Waldfogel, J. “Digital Piracy:
Empirics,” chapter 19.
(12.) See Lee, R. “Home Videogame Platforms,” chapter 4
(13.) “Amazon Amps Up Apps Rivalry,” Wall Street Journal, October 7th 2010.
(http://online.wsj.com/article/SB10001424052748704696304575538273116222304.html)
(14.) A well-known example is Apple's refusal to allow applications relying on Adobe's Flash program. In the face of mounting developer criticism and
the risk of antitrust investigation, Apple recently decided to relax some of its restrictions. See: “Apple Blinks in Apps Fights,” Wall Street Journal,
September 10th 2010. (http://online.wsj.com/article/SB10001424052748704644404575481471217581344.html)
(15.) “Apple's iAds Policy Gets FTC Scrutiny,” International Business Times, June 14th 2010. (http://www.ibtimes.com/articles/28542/20100614/apples-iad-policy-comes-under-ftc-scanner.htm)
(16.) The only other software platform studied in IE that charged third-party developers was NTT DoCoMo's i-mode (9 percent cut of revenues for
content providers choosing to rely on i-mode's billing system). But even in that case, the revenues coming from developers were less than 1 percent of
total i-mode revenues.
(17.) http://ftalphaville.ft.com/blog/2010/07/13/285006/goldman-really-likes-its-new-ipad/
(18.) http://gigaom.com/2010/01/12/the-apple-app-store-economy/
(19.) The 40 minutes per day estimate is based on Facebook's announcement that its more than 500 million users spend over 700 billion minutes on the
site each month. See: http://www.facebook.com/press/info.php?statistics (accessed December 6th 2010).
(20.) Some aspects of the competitive dynamics among complementors are explored by Casadesus-Masanell and Yoffie (2007).
(21.) Chapter 7 in IE provides background on the founding of Symbian and its evolution up to 2005.
(22.) “Samsung Gives Details on Bada OS,” InformationWeek, December 9th 2009.
(http://www.informationweek.com/news/mobility/business/showArticle.jhtml?articleID=222001332)
(23.) “Samsung tries to woo TV app developers,” CNET, August 31st 2010. (http://news.cnet.com/8301–31021_3–20015215-260.html#ixzz124u0TYXs)
(24.) “Intel Adopts an Identity in Software,” New York Times, May 25th 2009. (http://www.nytimes.com/2009/05/25/technology/businesscomputing/25soft.html)
(25.) “Intel and Nokia Team Up on Mobile Software,” New York Times, February 15th 2010. (http://bits.blogs.nytimes.com/2010/02/15/intel-and-nokiateam-up-on-mobile-software/)
(26.) For details on the Microsoft-Intel relationship, see Yoffie (2003a) and (2003b), the Wintel (A) case (Yoffie et al., 2004), and "The End of
Wintel,” The Economist, July 29th 2010 (http://www.economist.com/node/16693547).
(27.) U.S. v. Microsoft—Proposed Findings of Fact (http://www.justice.gov/atr/cases/f2600/2613pdf.htm).
(28.) http://aws.amazon.com/
(29.) “Comparing Amazon's and Google's Platform-as-a-Service (PaaS) Offerings,” ZDNet, April 11th 2008.
(http://www.zdnet.com/blog/hinchcliffe/comparing-amazons-and-googles-platform-as-a-service-paas-offerings/166)
(30.) Facebook is currently a four sided platform. It connects: (1) users; (2) third-party application and content developers; (3) advertisers; and (4)
third-party web properties that can be browsed using one's Facebook identity via Facebook Connect.
(31.) This is similar to the eBay SP described in chapter 12 in IE: eBay's APIs are specifically designed for building applications targeted at eBay buyers
and sellers.
(32.) http://www.netmarketshare.com/browser-market-share.aspx?qprid=0, accessed March 22, 2011.
(33.) “Microsoft Faces New Browser Foe in Google,” The New York Times, September 1st 2008.
(http://www.nytimes.com/2008/09/02/technology/02google.html) “Google Rekindles Browser Wars,” Wall Street Journal, July 7th 2010.
(http://online.wsj.com/article/SB10001424052748704178004575351290753354382.html)
(34.) “Microsoft Modernizes Web Ambitions with IE9,” CNET News, March 16th 2010. (http://news.cnet.com/8301-30685_3-20000433-264.html)
(35.) “Microsoft Faces New Browser Foe in Google,” The New York Times, September 1st 2008.
(http://www.nytimes.com/2008/09/02/technology/02google.html) “Google Rekindles Browser Wars,” Wall Street Journal, July 7th 2010.
(http://online.wsj.com/article/SB10001424052748704178004575351290753354382.html)
(36.) “Clash of the Titans,” The Economist, July 8th 2009. (http://www.economist.com/node/13982647?
story_id=13982647&source=features_box_main)
(37.) “Google Announces Chrome OS,” PCWorld, July 8th 2009. (http://www.pcworld.com/article/168028/google_announces_chrome_os.html)
(38.) http://sites.force.com/appexchange/browse?type=Apps
(39.) “Who's Who in Application Platforms for Cloud Computing: The Cloud Specialists,” Gartner Report, September 2009.
http://en.wikipedia.org/wiki/Amazon_Web_Services
(40.) http://www.facebook.com/press/info.php?statistics
(41.) “Useless Applications Plague Facebook,” The Lantern, June 20th 2009. (http://www.thelantern.com/2.1345/useless-applications-plague-facebook1.76407)
(42.) VMware 2010 10-K filing. Available at: http://ir.vmware.com/phoenix.zhtml?c=193221&p=IROLsecToc&TOC=aHR0cDovL2lyLmludC53ZXN0bGF3YnVzaW5lc3MuY29tL2RvY3VtZW50L3YxLzAwMDExOTMxMjUtMTAtMDQ0Njc3L3RvYy9wYWdl&ListAll=1
(43.) “VMware Lays Down Corporate IT Marker,” Financial Times, October 4, 2010.
Andrei Hagiu
Andrei Hagiu is Assistant Professor of Strategy at the Harvard Business School.
Home Videogame Platforms
Oxford Handbooks Online
Home Videogame Platforms
Robin S. Lee
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Online Publication Date: Nov 2012
Subject: Economics and Finance, Economic Development
DOI: 10.1093/oxfordhb/9780195397840.013.0004
Abstract and Keywords
This article highlights the key features that distinguish the industrial organization of videogames from other similar
hardware-software markets. The models of consumer and firm (both hardware and software) strategic behavior are
then addressed. Videogame consoles sit squarely amidst the convergence battle between personal computers and
other consumer electronic devices. Games developed for one console are not compatible with others. The pricing
of hardware consoles has been the most developed area of research on platform strategy. It is noted that even
retrospective analysis of the videogame market when all the players were known needed sophisticated modeling
techniques to handle the industry's complexities, which include controlling for dynamic issues and accounting for
consumer and software heterogeneity. The success and proliferation of videogames will continue to spawn
broader questions and enhance the understanding of general networked industries.
Keywords: videogame industry, hardware, software, videogame consoles, pricing, videogame market, consumer
1. Introduction
What began with a box called Pong that bounced a white dot back-and-forth between two “paddles” on a television
screen has now blossomed into a $60B industry worldwide, generating $20B annually in the United States alone.1
Today, videogames are a serious business, with nearly three-quarters of US households owning an electronic
device specifically used for gaming, and many predicting that figure will increase in the coming years.2 Given the
widespread adoption of a new generation of videogame systems introduced in 2006 and the ever growing
popularity of online and on-the-go gaming, videogames are also no longer strictly the stuff of child's play: surveys
indicate 69 percent of US heads of households engage in computer and videogames, with the average age of a
player being 34 years old.3 As newer devices continue to emerge with even more advanced and immersive
technologies, it is likely that videogames will continue to play an ever increasing role in culture, media, and
entertainment.
Owing no small part to this success, the videogame industry has been the subject of a growing number of studies
and papers. This chapter focuses on research within a particular slice of the entire industry – the home videogame
console market – which on its own is a fertile subject for economic research, both in theory and empirics. As a
canonical hardware-software market rife with network effects (cf. Katz and Shapiro (1985); Farrell and Saloner
(1986)), videogames are an ideal setting to apply theoretical models of platform competition and “two-sided
markets”; and as a vertical market dominated on different sides by a small number of (p. 84) oligopolistic firms,
videogames provide an opportunity to study issues related to bilateral oligopoly and vertical contracting.
Furthermore, with the development of new empirical methods and tools, data from the videogame market can be
used to estimate sophisticated models of dynamic consumer demand, durable goods pricing, and product
investment and creation. By focusing solely on videogames, economic research can inform our analysis of other
related markets in technology, media, or even more broadly defined platform-intermediated markets.
This chapter is organized as follows. I first provide a brief overview of the industrial organization of videogames,
and emphasize the key features that distinguish it from other similar hardware-software markets. Second, I survey
economic research on videogames, focusing primarily on models of consumer and firm (both hardware and
software) strategic behavior; I also highlight potential avenues for future research, particularly with respect to
platform competition. Finally, I conclude by discussing how these economic models can help us better understand
vertical and organizational issues within the industry, such as the impact of exclusive contracting and integration
between hardware platforms and software developers on industry structure and welfare.
2. The Industrial Organization of Home Videogames
2.1. Hardware
Today firms in a variety of industries produce hardware devices that vary widely in size, portability, and
functionality for the purpose of electronic gaming. However, as has been the case for most of the four-decade
history of the home videogame industry, these devices are primarily stationary “boxes” that require a monitor or
television set for use. Referred to as consoles or platforms, these devices are standardized computers tailored for
gaming and produced by a single firm. Approximately 53 percent of households in the United States are estimated
to own a videogame console or handheld system.4
For the past decade, the three main console manufacturers or platform providers have been Nintendo, Sony, and
Microsoft. Nintendo, originally a Japanese playing card company founded in the late 19th century, is the most
experienced veteran of the three: it has manufactured videogame consoles since the late 1970s, and is the only
firm whose primary (and only) business is videogames. Its Nintendo Entertainment System (NES), released first in
Japan in 1983 and two years later in the United States, was the first major videogame platform to achieve global
success. Nintendo has since released several consoles, including the most recent “Wii” in 2006; the Wii was one
of the first to incorporate a novel motion-sensing interface and was credited with expanding the appeal of home
videogaming to a broader audience.
(p. 85) The other two console manufacturers entered many years after Nintendo's NES system dominated the
market. Sony released its first videogame console – the Playstation – in 1995.5 One of the first consoles to use
games produced on CDs as opposed to more expensive cartridges, the Playstation would sell over 100M units in its
lifetime and establish Sony as the dominant console manufacturer at the turn of the 21st century. The Playstation
has had two successors: the PS2, released in 2000, became the best selling console in history with over 140M
units sold; 6 and the PS3, released in 2006, is perhaps most famous for being one of the most expensive videogame
consoles ever produced. Finally, Microsoft, as the newest of the three console manufacturers, entered the home
videogame market in 2001 with the Xbox console; it followed up with the Xbox 360 in 2005. Forty-one
percent of US households are estimated to own at least one of the three newest consoles.7
In general, hardware specifications for a given console remain fixed over its lifetime to ensure compatibility with
any games produced for that console; only by releasing a new console could a firm traditionally introduce new
hardware with greater processing power and graphical capabilities. A new set of consoles has historically
been launched approximately every five years – thus heralding a new “generation” within the industry; however,
due to the large sunk costs associated with developing new consoles, the desire of hardware manufacturers to
recoup initial investments, and the shift toward upgrading existing consoles via add-on accessories, the time
between new generations is likely to increase in the future.8
Although this chapter focuses on home videogame consoles, there is still a large market for dedicated portable
gaming devices, currently dominated by Sony and Nintendo, and gaming on multifunction devices, such as
smartphones and media players (e.g., Apple's iPod and iPhone). In addition, although personal computers (PCs)
have always been able to play games, their significance as traditional videogame platforms is small: less than 5
percent of videogame software revenues today derive from PC game sales (though this may change in the future
given the rise of online gaming via virtual worlds or social networks).9
Finally, just as other devices in other industries have been adding video-gaming capabilities, videogame consoles,
too, have been adding greater functionality: for example, today's consoles also function as fully independent
media hubs with the ability to download and stream movies, music, television, and other forms of digital content
over the Internet. Videogame consoles thus sit squarely amidst the convergence battle between personal
computers and other consumer electronic devices.
2.2. Software and Games
In addition to console manufacturers, the videogame industry also comprises firms involved in the production of
software or games.10 These firms can be roughly categorized into two types: developers or development studios,
who undertake (p. 86) the programming and creative execution of a game; and publishers, who handle
advertising, marketing, and distribution efforts. This distinction is not necessarily sharp: most publishers are
integrated into software development, often owning at least one studio; and although independent software
developers exist, they often have close relationships with a single software publisher for financing in exchange for
distribution and publishing rights. Such relationships may appear to be similar to integration insofar as they are often
exclusive, and have become standard for independent developers as the costs of creating games have
dramatically increased over time.11
Console manufacturers also historically have been and continue to be integrated into software development and
publishing. Any title produced by a console manufacturer's own studios or distributed by its own publisher is
referred to as a first-party title, and is exclusive to that hardware platform. All other games are third-party titles and
are developed and published by other firms.
Much like videogame hardware, videogame software is predominantly produced by a handful of large firms: the top
10 publishers, which include the three main console manufacturers, produce over 70 percent of all games
sold, with the largest (Electronic Arts) commanding a 20 percent market share. Furthermore, individual games have
been increasingly exhibiting high degrees of sales concentration with the emergence of “killer applications” and
“hit games.” During the “sixth generation” of the industry between 2000 and 2005, nearly 1,600 unique software
titles were released for the three main consoles; however, the top 25 titles on each system comprised 25 percent
of total software sales, and the top 100 titles over 50 percent. Since then, a handful of titles have sold millions of
copies, with some games even generating over $1B in sales on their own.12
Finally, unlike hardware, the lifetime of a particular game is fairly short: typically half of a game's lifetime sales
occur within the first 3 months of release, and very rarely do games continue to sell well more than half a year from
release.
2.3. Network Effects and Pricing
Since consoles have little if any stand-alone value, consumers typically purchase them only if there are desirable
software titles available. At the same time, software publishers release titles for consoles that either have or are
expected to have a large installed base of users. These network effects operative on both sides of the market are
manifest in most hardware-software industries, and are partly a reason for the complex form of platform pricing
exhibited by videogame platforms: most platform providers subsidize the sale of hardware to consumers, selling
them close to or below cost, while charging publishers and developers a royalty for every game sold (Hagiu, 2006;
Evans, Hagiu, and Schmalensee, 2006). This “razor blade” model was initially used by Atari with the release of its
VCS console in 1977 – Atari originally sold its hardware at a very slight margin, but its own (p. 87) videogame
cartridges at a 200 percent markup (Kent, 2001) – and Nintendo was the first to charge software royalties with its
NES system nearly a decade later.13 As a result, most platform profits have been and continue to be primarily
derived not from hardware sales, but rather from software sales and royalties.14 Note this stands in contrast to the
traditional pricing model in PCs, where the operating system (e.g., Microsoft Windows) is typically sold to the
consumer at a positive markup, yet no royalties or charges are levied on third-party software developers.
For the past two generations of videogame consoles, initial losses incurred by platform providers due to this pricing
scheme have been substantial: e.g., the original Xbox had estimated production costs of at least $375 yet sold for
an introductory price of $249; in the first four years of existence, Microsoft was estimated to have lost $4B on the
Xbox division alone.15 However, as costs for console production typically fall over time faster than the retail price,
a console manufacturer's profits on hardware typically increase in the later years of a generation: e.g., Sony in
2010 announced it was finally making a profit on its PS3 console 3.5 years after it launched;16 a similar path to
profitability was followed by Sony's PS2 after its release. There are, however, some exceptions: e.g., Nintendo's
current generation Wii console was profitable from day one and was never sold below cost.17
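A back-of-the-envelope calculation pulls together the pricing facts just described. The royalty and attach-rate figures below are assumptions chosen for illustration (the 6–9 games-per-console range appears later in this chapter), not actual accounting data.

```python
# Illustrative razor-blade arithmetic for a subsidized console; all figures are assumptions.
hardware_cost = 375.0        # production cost per console (order of the original Xbox figure)
hardware_price = 249.0       # introductory retail price
royalty_per_copy = 7.0       # assumed platform royalty on each third-party game sold
games_per_console = 8        # within the 6-9 games-per-console range cited in this chapter

hardware_margin = hardware_price - hardware_cost          # -126.0: loss on each console sold
software_income = royalty_per_copy * games_per_console    # 56.0 in royalties per console
print(hardware_margin + software_income)                  # -70.0: still negative at launch
# Profitability arrives later, as production costs fall and the installed base keeps buying games.
```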
2.4. Porting, Multi-homing, and Exclusivity
Typically within a generation, games developed for one console are not compatible with others; in order to be
played on another console, the game must explicitly be “ported” by a software developer and another version of
the game created.18 Due to the additional development and programming time and expense to develop additional
versions of the game, the porting costs of supporting an additional console can be as high as millions of dollars for
the current generation of systems.19
During the early years of the videogame industry (pre-1983), any firm that wished to produce a videogame could
develop and release a game for any console which utilized cartridges or interchangeable media. However, many
console manufacturers recouped console development costs from the sale of their own games, and saw the sale of
rival third-party games as a threat; some console manufacturers even sued rival software developers to
(unsuccessfully) keep them off their systems (Kent, 2001). The inability to restrict the supply of third-party software
led to a subsequent glut of games released in the early 1980s, many of low quality; in turn, this partially caused the
videogame market crash of 1983 in which demand for videogame hardware and software suddenly dried up.
Whereas there used to be over a hundred software developers in 1982, only a handful remained a year later.
As one of the survivors of the crash, Nintendo deviated from the strategy employed by previous console
manufacturers when releasing its NES console in the United States in 1985. First, it actively courted third-party
software developers, understanding that a greater variety of software would increase the attractiveness of the platform
to consumers; at the same time, it prevented unauthorized games (p. 88) from being released via a security
system in which cartridges without Nintendo's proprietary chip could not be played on its console. Nintendo also
imposed other restrictions on its third-party software licensees: each developer was limited to publishing only 5
games a year, had to give the NES exclusivity on all games for at least 2 years, and had to pay a 20 percent
royalty on software sales (Kent, 2001; Evans, Hagiu, and Schmalensee, 2006).
It was not until 1990 that Nintendo – in the midst of lawsuits levied by competitors and an FTC investigation for
anticompetitive behavior – announced that it would no longer restrict the number of games its developers could
produce or prohibit them from producing games for other systems.20 Since then, forced exclusivity – the
requirement that a videogame be only provided for a given console or not at all – has not been used in the
industry.21 Though many software titles now choose to “multihome” and support multiple consoles, there are still
instances in which third-party games are exclusive: some do so voluntarily (perhaps due to high porting costs),
some engage in an exclusive publishing agreement with the console provider (typically in exchange for a lump
sum payment), and others may essentially integrate with a platform by selling the entire development studio
outright.
2.5. Consumers
As mentioned earlier in this chapter, the vast majority of videogame players are no longer children: 75 percent of
gamers are 18 years old or older, with two-thirds of those between the ages of 18 and 49.22 In addition, there is a wide
degree of variance in usage and purchasing behavior across consumers: in 2007, Nielsen estimated the heaviest
using 20 percent of videogame players accounted for nearly 75 percent of total videogame console usage (by
hours played), averaging 345 minutes per day. Furthermore, although on average 6–9 games were sold per
console between 2000 and 2005, "heavy gamers" reported owning collections of 50+ games,23 and on average
purchased more than 1 game per month.24
3. Economics of the Videogame Industry
Although there exist active markets for PC and portable gaming, most research on videogames has focused on the
home videogame market. This is not without reason. First, the home videogame industry is convenient to study
since all relevant firms within a generation are known, and there exist data containing a list of all software
produced for nearly all of the consoles released in the past three decades. Compare this to the PC industry, where
there are thousands of hardware manufacturers and product varieties, and even greater numbers of software
developers and products; obtaining detailed price and quantity information, for example, on (p. 89) the universe
of PC configurations, accessories, and software products would be infeasible. Second, there are relatively few
substitutes to a home videogame console, allowing for a convenient market definition. Finally, as videogame
consoles have been refreshed over time, there is the potential for testing repeated market interactions across
multiple generations.
This section provides a brief (and thus by no means comprehensive) review of recent economic research on the
home videogame industry, and emphasizes both the advances and limitations of the literature in capturing
important features and dynamics of the market. As understanding the interactions between the three major types of
players in the industry – software firms, hardware firms, and consumers – provides the foundation for any
subsequent analysis (e.g., how industry structure or welfare changes following a policy intervention or merger), it
is unsurprising that the vast majority of papers have first focused on modeling the strategic decisions of these
agents. Only with these models and estimates in hand have researchers have begun addressing more complicated
questions including the role and impact of vertical relationships in this industry. Across all of these fronts remain
several open questions, and I highlight those areas in which future study would prove useful.
3.1. Consumer Demand and Software Supply
As with many hardware-software industries, videogames exhibit network effects in that the value of purchasing a
videogame console as a consumer increases in the number of other consumers who also decide to purchase that
console. Although there is a direct effect in that people may prefer owning the same videogame console as their
friends or neighbors, the primary means by which this occurs is through an indirect effect: more consumers
onboard a particular console attract more games to be produced for that console, which in turn makes the
console an even more desirable product.25,26 Such indirect network effects also work in the other direction:
software developers may benefit from other software developers supporting the same console in that more games
attract more consumers, which further increases the potential returns for developing a game for that console.27
Numerous studies have attempted to empirically document or measure the strength, persistence, and asymmetry
of these kinds of network effects in a variety of industries. Many of these original empirical papers base their
analysis on models in which consumers of competing platforms preferred to purchase the device with a greater
number of compatible software titles. As long as consumers preferred a greater variety of software products –
typically modeled via CES preferences – and certain assumptions on the supply of software held, then a simple log-linear relationship between consumer demand for hardware and the availability of software could theoretically be shown to arise in equilibrium (c.f. Chou and Shy (1990); Church and Gandal (1992)). Empirical research based on
these types of models include studies on adoption of CD players (Gandal, Kende, and Rob, 2000), DVD (p. 90)
players (Dranove and Gandal, 2003), VCRs (Ohashi, 2003), and personal digital assistants (Nair, Chintagunta, and
Dubé, 2004).
In the spirit of this literature, Shankar and Bayus (2003) and Clements and Ohashi (2005) are two of the earliest
papers to empirically estimate the existence and magnitude of network effects in the videogame industry. Whereas
Shankar and Bayus (2003) assume software supply is exogenous and not directly affected by hardware demand,
Clements and Ohashi (2005) estimate two simultaneous equations representing the two-sided relationship between
a console's installed base of users and its games. This approach, motivated by a static model of consumer demand
and software supply, is followed by other papers analyzing the videogame industry (e.g., Corts and Lederman
(2009); Prieger and Hu (2010)), and is useful to describe briefly here.
The model assumes a consumer's utility from purchasing console j at time t is given by:
where xj are console j's observable characteristics (e.g., speed, processing power), pj,t the price, Nj,t the number
of available software titles for console j, ξj,t an error unobservable to the econometrician, and εj,t a standard logit
error; a consumer purchases console j at time t if it delivers the maximum utility among all alternatives (including
an outside option). As in Berry (1994), this can be converted into a linear regression via integrating out the logit
errors and using differences in log observed shares for each product to obtain the following estimating equation:
(1)
where sj,t, so,t, and sj,t|j≠o are the share of consumers who purchase platform j, the outside good, and platform j
conditional on purchasing a console at time t. Following prior literature, assuming a spot market for single-product
software firms, free entry, and CES preferences for software, a reduced form equation relating the equilibrium
number of software titles available to a console's installed base of users (IBj,t) can be derived:28 (2)
where ηj,t is a mean-zero error. Clements and Ohashi (2005) estimate these two equations across multiple
generations of videogame consoles between 1994 and 2002 using price and quantity information provided by NPD
Group, a market research firm (which also is the source for most of the market level data used in the majority of
videogame papers discussed in this chapter). They employ console and year (p. 91) dummies in estimation, use
the Japanese yen–US dollar exchange rate and console retail prices in Japan as instruments for price, and
use the average age of software titles onboard each system as an instrument for the installed base.
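To fix notation, a stylized version of this system is sketched below. This is a hedged reconstruction based on the definitions above and the standard Berry (1994) nested logit derivation; the cited papers' exact specifications include further controls and interactions.

u_{i,j,t} = x_j \beta - \alpha p_{j,t} + \omega \ln N_{j,t} + \xi_{j,t} + \varepsilon_{i,j,t}

\ln s_{j,t} - \ln s_{0,t} = x_j \beta - \alpha p_{j,t} + \omega \ln N_{j,t} + \sigma \ln s_{j,t|j \neq o} + \xi_{j,t}   (cf. (1))

\ln N_{j,t} = \gamma \ln IB_{j,t} + \eta_{j,t}   (cf. (2))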
The main objects of interest are ω and γ in (1) and (2), which represent the responsiveness of consumer demand
to the number of software titles, and vice versa. In Clements and Ohashi (2005) and similar studies, these
coefficients are found to be significant and positive, which is interpreted as evidence of indirect network effects.29 Furthermore, these studies often show that such coefficients vary over time: e.g., Clements and Ohashi (2005) include age-interaction effects with installed base in (2), and find that the responsiveness of software to installed base decreases over the lifetime of a videogame console; similarly, Chintagunta, Nair, and Sukumar (2009) use an alternative hazard-rate econometric specification of technology adoption and find that the strength of network effects also varies over time, and that the number of software titles and prices have different effects on demand in later versus earlier periods. Both of these studies find that price elasticities for a console diminish as the console gets older.
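For readers who want to see the mechanics, a self-contained toy version of estimating a specification like (1) by two-stage least squares is sketched below. The data are simulated and the variable names (exchange-rate and software-age instruments) are only placeholders echoing the instruments described above; this is not the NPD data or any author's code, and the nested-logit conditional-share term is omitted for brevity.

import numpy as np

rng = np.random.default_rng(0)
n = 2000  # simulated console-month observations

# Unobserved quality xi is correlated with price and software variety (the endogeneity problem).
cost_shock = rng.normal(size=n)
xi = 0.5 * cost_shock + rng.normal(size=n)
exch_rate = rng.normal(size=n)          # excluded instrument for price
avg_soft_age = rng.normal(size=n)       # excluded instrument for software variety
price = 1.0 + 0.8 * cost_shock + 0.5 * exch_rate + 0.1 * rng.normal(size=n)
log_titles = 0.7 * avg_soft_age + 0.3 * xi + 0.3 * rng.normal(size=n)
# "True" coefficients: price -1.5, software variety (omega) 0.6
y = -1.5 * price + 0.6 * log_titles + xi          # ln(s_j) - ln(s_0), other terms stripped out

X = np.column_stack([np.ones(n), price, log_titles])           # regressors (both endogenous)
Z = np.column_stack([np.ones(n), exch_rate, avg_soft_age])     # instruments

def tsls(y, X, Z):
    """Two-stage least squares, just-identified case: beta = (Z'X)^{-1} Z'y."""
    return np.linalg.solve(Z.T @ X, Z.T @ y)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
print("OLS :", np.round(beta_ols, 2))   # biased by the correlation with xi
print("2SLS:", np.round(tsls(y, X, Z), 2))    # close to (0, -1.5, 0.6)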
It is worth stressing (as these papers have) that these estimates come from a static model, and care must be used
when interpreting estimated parameters. There are several reasons a static model may not be ideal for analyzing
this industry. Since consoles and games are durable goods, consumers do not repurchase the same product, as a static model without inventory considerations typically implies; in addition, forward-looking consumers may
delay purchase in anticipation of lower prices or higher utility from consumption in future periods (which may
partially explain the strong seasonal spike in sales around the holidays). Failing to account for both the durability of
goods and the timing of purchases can bias estimates of price and cross-price elasticities (c.f. Hendel and Nevo
(2006)) as well as other parameters – including the strength of network effects.
Most importantly, however, a static model does not allow consumers to anticipate future software releases when
deciding when to purchase a console; since consoles are durable, consumers in reality base their hardware
purchasing decisions on expectations over all software on a platform, including those titles that have not yet been
released. Hence, a consumer's utility function for a console, for example, should reflect this. Thus, insofar as ω can be estimated, it at best represents the extent to which current software variety reflects a consumer's expectation over the stock of current and future games. That estimated coefficients for these static models are shown to vary across time and even across consoles suggests that dynamic issues are at play, and the underlying
relationship between consumer demand and software availability may be significantly more complex.
3.1.1. Dynamics and Software/Consumer Heterogeneity
In response to these concerns, researchers have begun incorporating dynamics into their analysis of consumer
demand for videogames. For instance, Dubé, Hitsch, and Chintagunta (2010) utilize a dynamic model in which
forward-looking (p. 92) consumers time their purchases of consoles based on expectations of future prices and
software availability; using a two-step estimator, they are also able to simultaneously estimate a console provider's
optimal dynamic pricing function. Using estimates from their model, the authors study how indirect network effects
can lead to greater platform market concentration, and illustrate how, in spite of strong network effects, multiple
incompatible platforms can co-exist in equilibrium. Nonetheless, this dynamic model of hardware demand still
maintains the assumption used in the previous empirical network effects literature that consumers respond to
software “variety,” which can be proxied by the number of available software titles, and that software variety can
still be expressed as a reduced-form function of each platform's installed base (e.g., as in (2)). This may have been a reasonable assumption for these papers, which primarily focused on the period up to and including the 32/64-bit generation of videogames (roughly pre-2000). However, as mentioned previously, the past decade has seen
the dominance of hit games where a small subset of software titles captured the majority of software sales onboard
a console. Given the increasing variance in software quality and skewed distribution of software sales, a model
specifying consumer utility as a function only of the number of software titles as opposed to the identity of
individual games – although tractable and analytically convenient – may be of limited value in analyzing the most
recent generations of the videogame industry as well as other “hit-driven” hardware-software markets.
Mirroring the necessity to control for software heterogeneity in videogames is the additional need to control
carefully for consumer heterogeneity. As has been previously discussed, the variance across consumers in the
number of games purchased and hours spent playing games has been well documented, and capturing this rich
heterogeneity is important for accurate estimates of product qualities and demand parameters. Although controlling
for consumer heterogeneity is also important in a static setting, doing so in a dynamic context adds an additional
complexity in that the characteristics of consumers comprising the installed base of a console evolve over time. E.g., since early adopters of videogame consoles are predominantly consumers with high valuations for videogames, software released early in a console's lifetime faces a different population of consumers than a game that is released after a console is mature. Failing to correct for this consumer selection across time will bias upwards estimates of early-released games’ qualities, and bias downwards estimates of the qualities of games released later. In turn, the magnitudes of these parameters underlie incentives for almost all strategic decisions on the part of firms:
e.g., firms may engage in intertemporal price discrimination (initially setting high prices before lowering them) in
order to extract profits out of high valuation or impatient consumers first.
In an attempt to control for these issues, Lee (2010a) estimates a dynamic structural model of consumer demand
for both videogame hardware and software between 2000 and 2005 that explicitly incorporates heterogeneity in both
consumer preferences and videogame quality. By explicitly modeling and linking hardware and software demand,
the analysis is able to extract the marginal impact of a single (p. 93) videogame title on hardware sales, and allow
this impact to differ across titles in an internally consistent manner. An overview of the approach follows.
The model first specifies consumer i's lifetime expected utility of purchasing a hardware console j at time t (given
she owns other consoles contained within her inventory ι) as: (3)
where xj,t are observable console characteristics, pj,t the console's price, ξj,t an unobservable product
characteristic, and εi,j,t,ι a logit error. The paper introduces two additional terms to account for inventory concerns
and the anticipation of future software: i.e., D(·) captures substitution effects across consoles and allows a
consumer to value the console less (or more) if she already owns other consoles contained within ι; and Γj,t reflects a consumer's perception of the utility she would derive from being able to purchase videogames available today and in the future. Consumers have different preferences for videogaming and for prices, captured by individual-specific taste and price coefficients. Finally, the coefficient on Γj,t, αΓ, captures how much hardware utility – and hence
hardware demand – is influenced by expected software utility.
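In this notation, a stylized version of the lifetime expected utility in (3) is roughly as follows; this is a reconstruction based on the definitions above rather than the paper's exact expression, with β_i and α_i^p denoting the individual-specific taste and price coefficients:

E[u_{i,j,t}(\iota)] = x_{j,t} \beta_i - \alpha_i^p p_{j,t} + \alpha^{\Gamma} \Gamma_{j,t} + D(\iota) + \xi_{j,t} + \varepsilon_{i,j,t,\iota}   (cf. (3))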
Note that this specification of hardware demand does not use a static-period utility function, but rather lifetime
expected utilities. Furthermore, the model incorporates dynamics explicitly by assuming consumers solve a
dynamic programming problem when determining whether or not to purchase a videogame console in a given
period: each consumer compares her expected value from purchasing a given console to the expected value from
purchasing another console or none at all; furthermore, consumers can multihome and return the next period to
purchase any console they do not already own.
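In schematic form, the consumer's problem is a dynamic programming problem of the following flavor (a stylized sketch, not the paper's exact formulation), where ι denotes her current console inventory, s_t the industry state, and β her discount factor:

V_i(\iota, s_t) = \max\Big\{ \max_{j \notin \iota} \big[ E[u_{i,j,t}(\iota)] + \beta\, E[V_i(\iota \cup \{j\}, s_{t+1})] \big],\; \beta\, E[V_i(\iota, s_{t+1})] \Big\}

The first argument captures the value of buying some console j today while retaining the option to buy others later (multihoming); the second captures the value of waiting.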
On the software side, the setup is similar: every consumer who owns a console is assumed to solve a similar
dynamic programming problem for each game she can play but does not already own. This in turn allows for the
derivation of the expected option value of being able to purchase any particular title k onboard console j at time t,
which is denoted EWi,j,k,t. Finally, to link hardware and software demand together, the model defines Γj,t as the
sum of option values for any software title k available on console j at time t (given by the set Kj,t) plus an
expectation over the (discounted) option values of being able to purchase games to be released in the future,
represented by Λj,t: (4)
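In this notation, the linkage in (4) is roughly as follows (a reconstruction; the consumer index i on Γ is left implicit, as in the text):

\Gamma_{j,t} = \sum_{k \in K_{j,t}} EW_{i,j,k,t} + \Lambda_{j,t}   (cf. (4))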
Lee (2010a) estimates the underlying structural parameters of the model, which include product fixed effects for
every console and game and consumer (p. 94) preferences over price and software availability, utilizing
techniques pioneered in Rust (1987), Berry (1994), and Berry, Levinsohn, and Pakes (1995), and later synthesized
in a dynamic demand environment by Melnikov (2001) and Gowrisankaran and Rysman (2007).30 An important
extension involves controlling for the selection of consumers onto consoles across time, which requires the
simultaneous estimation of both hardware and software demand.
Estimates indicate that although the vast majority of titles had only a marginal impact on hardware demand, the
availability of certain software titles could shift hardware installed bases by as much as 5 percent; furthermore,
only a handful of such “hit” titles are shown to have been able to shift hardware demand by more than one percent
over the lifetime of any given console. A model which assumed consumers valued all titles equally would thus lead
to drastically different predictions on the impact and magnitudes of software on hardware demand. Lee (2010a)
also demonstrates that by failing to account for dynamics, consumer heterogeneity, and the ability for consumers
to purchase multiple hardware devices, predicted consumer elasticities with respect to price and software
availability would be substantially biased.
As is often the case, however, several strong assumptions are required for this more complicated analysis. First,
for tractability, consumers perceive each software title onboard a system as an independent product.31 Second,
consumers have rational expectations over a small set of state variables which are sufficient statistics for
predicting future expected utilities. Although the consistency of beliefs with realized outcomes may have been a
reasonable assumption for the period examined, there may be other instances for which it may not be well suited:
e.g., Adams and Yin (2010) study the eBay resale of the newest generation of videogame consoles released in
2006, and find that prices for pre-sale consoles rapidly adjust after they are released.32
3.1.2. Software Supply and Pricing
Accompanying the development of more realistic models for consumer demand have been richer models for
software supply which treat software firms as dynamic and strategic competitors. One strand of literature has focused on the optimal pricing of videogame software, itself a general durable-goods market with forward-looking
consumers. Nair (2007) combines a model of dynamic consumer demand for videogame software with a model of
dynamic pricing, and finds that the optimal pricing strategy for a software firm is consistent with a model of
“skimming”: charging high prices early to extract rents from high value (or impatient) consumers before dropping
prices over time to reach a broader market. This corresponds to the pricing patterns observed in the data: the vast
majority of games on a console are released at a single price point (e.g., $49.99), and prices fall in subsequent
periods.33
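To illustrate the skimming logic in the simplest possible setting, the following toy computation (not Nair's model, which has no price commitment and far richer demand) solves a two-period durable-good pricing problem with forward-looking consumers by grid search; with consumers less patient than the firm, the optimal path charges a high price first and a lower price later. All parameter values are invented.

import numpy as np

# Toy two-period durable-good pricing with forward-looking consumers.
# Consumer valuations v ~ U[0,1]; a consumer buys at most once; marginal cost is zero.
delta_c, delta_f = 0.6, 0.9          # assumed consumer and firm discount factors
v = np.linspace(0, 1, 2001)          # grid of consumer types
prices = np.linspace(0, 1, 201)      # candidate prices

best = (-np.inf, None, None)
for p1 in prices:
    for p2 in prices:
        s1 = v - p1                               # surplus from buying now
        s2 = np.maximum(v - p2, 0.0)              # surplus from buying next period
        buy1 = (s1 >= 0) & (s1 >= delta_c * s2)   # buy now if better than waiting
        buy2 = (~buy1) & (v >= p2)                # otherwise buy later if worthwhile
        profit = p1 * buy1.mean() + delta_f * p2 * buy2.mean()
        if profit > best[0]:
            best = (profit, p1, p2)

profit, p1, p2 = best
print(f"optimal prices: p1 = {p1:.2f}, p2 = {p2:.2f} (skimming: p1 > p2 is {p1 > p2})")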
Inevitably, studies on pricing can only be conducted on games which have already been released for a particular
platform; moving one step earlier in a software developer's decision process is the choice of which console to join.
A first-party game has historically been released exclusively on the integrated platform; (p. 95) however, a
third-party software developer has a strategic choice: it can release a title on multiple platforms in order to reach a
larger audience but pay additional porting costs, or it can develop exclusively for one console and forgo selling its
game to consumers on other platforms.
Lee (2010b) models what can be considered software's “demand” for a platform. As in consumer demand,
dynamics are important in this decision as well: since each software publisher makes this choice months before a
game's release and since a game remains on the market for at least several months, a software developer
anticipates changes in future installed bases of each console as well as the subsequent choices of other software
developers when comparing expected profits of different actions. Using both the consumer demand estimates and
similar assumptions used in Lee (2010a), the model computes a dynamic rational expectations equilibrium in which
every software title chooses the optimal set of platforms to develop for while anticipating the future actions (and reactions) of other agents.
Key inputs into the model, however, are the porting costs of supporting different sets of consoles. These are typically
unobserved. Lee (2010b) estimates these costs for games released between 2000 and 2005 under the revealed
preference assumption that games released in the data were released on the subset of platforms which maximized
their expected discounted profits.34 Via an inequalities estimator developed in Pakes, Porter, Ho, and Ishii (2006),
relative differences in porting costs can be estimated. Estimates show significant variance in costs depending on
the genre or type of game being ported, and that some consoles are cheaper (e.g., Xbox) than others (e.g., PS2) to
develop for. On average, costs for this generation are approximately $1M per additional console, which is
roughly in line with industry estimates.
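A deliberately simplified caricature of this revealed-preference logic is sketched below (static, common porting cost, made-up numbers; the actual estimator works with expected discounted profits and moment inequalities following Pakes, Porter, Ho, and Ishii (2006)).

# Each entry: a game's estimated incremental variable profit (in $M) from adding a second
# console, together with whether it was observed to multihome. All numbers are invented.
games = [
    {"title": "game_a", "incremental_profit": 2.4, "multihomed": True},
    {"title": "game_b", "incremental_profit": 1.6, "multihomed": True},
    {"title": "game_c", "incremental_profit": 0.9, "multihomed": False},
    {"title": "game_d", "incremental_profit": 0.4, "multihomed": False},
]

# Revealed preference with a common porting cost F:
#   multihomed       => incremental profit >= F  (each multihomer gives an upper bound on F)
#   stayed exclusive => incremental profit <= F  (each exclusive gives a lower bound on F)
upper = min(g["incremental_profit"] for g in games if g["multihomed"])
lower = max(g["incremental_profit"] for g in games if not g["multihomed"])
print(f"porting cost F bounded in [{lower}, {upper}] $M")  # here: [0.9, 1.6]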
The final step back in the software production sequence involves the creation and development of new games.
This represents the least developed area of research on software supply, and is the remaining key step in
completely unpacking the mechanism which generates the reduced-form relationship shown to exist between the installed base of a console and software availability. On this front are issues related to an investment-quality tradeoff for game development, a product-positioning decision of what genre or type of game to produce, the timing of game release dates as with motion pictures (Einav (2007)), and the make-or-buy decision faced by a
software publisher who can either engage in an arms-length contract with an independent developer or integrate
into software development. Although some papers have studied whether integration with a console provider
improves game quality,35 there remains much to be done.
3.2. Platform Competition
Most of the analysis discussed so far has held fixed the actions of each platform, including choices of royalty rates
charged to third-party software providers, development or porting costs, and integration or exclusive contracting
decisions. (p. 96) Understanding these decisions from a theoretical perspective is complicated; the ability to
analyze these strategic choices is further confounded by the absence of detailed data on these objects of interest
(i.e., royalties, costs, and contracts). Even so, understanding how videogame platforms compete with one another
for consumers and software firms is not only perhaps the most important aspect of this industry, but also the one
that is the most relevant and generalizable to other hardware-software markets and platform industries. Thus
overcoming these challenges should be the focus of future efforts.
3.2.1. Pricing
The most developed area of research on platform strategy has been on the pricing of hardware consoles: both
Dubé, Hitsch, and Chintagunta (2010) and Liu (2010) estimate dynamic models of hardware pricing to consumers,
and highlight the importance of indirect network effects in explaining observed pricing patterns and rationalizing
console “penetration pricing” – that is, consoles are typically priced below marginal costs, but as marginal costs
fall faster than prices, margins tend to increase over time. Dubé, Hitsch, and Chintagunta (2010) further note that the presence of network effects is not sufficient on its own to make penetration pricing optimal; rather, these effects need to be sufficiently strong.
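The logic can be seen in a deliberately simple two-period toy model (all functional forms and numbers are invented for illustration): period-1 console sales build an installed base that attracts software, which shifts out period-2 demand; when this feedback is strong enough, the profit-maximizing period-1 price falls below marginal cost, and otherwise it does not.

import numpy as np

def best_prices(theta, a=10.0, b=1.0, c=4.0, gamma=1.0):
    """Grid-search a two-period platform pricing problem with an indirect network effect.

    Period 1: hardware demand q1 = a - b*p1; installed base q1 attracts N = gamma*q1 titles.
    Period 2: hardware demand q2 = a - b*p2 + theta*N (software variety shifts demand out).
    The platform maximizes (p1 - c)*q1 + (p2 - c)*q2, where c is marginal cost.
    """
    p1_grid = np.linspace(0.0, 10.0, 401)
    p2_grid = np.linspace(0.0, 16.0, 641)
    best = (-np.inf, None, None)
    for p1 in p1_grid:
        q1 = max(a - b * p1, 0.0)
        n_titles = gamma * q1
        for p2 in p2_grid:
            q2 = max(a - b * p2 + theta * n_titles, 0.0)
            profit = (p1 - c) * q1 + (p2 - c) * q2
            if profit > best[0]:
                best = (profit, p1, p2)
    return best

for theta in (0.3, 1.2):   # weak vs. strong indirect network effect
    profit, p1, p2 = best_prices(theta)
    print(f"theta={theta}: p1={p1:.2f}, p2={p2:.2f}, below-cost period-1 price: {p1 < 4.0}")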
Nonetheless, these analyses hold fixed the prices charged by platforms to the “other side” of the market, in that the supply of software depends only on the installed base of consumers onboard a console and not on the royalty rates levied by the console provider. Of course, in reality these royalties are as much a strategic decision as the price
charged to consumers, and in many ways are just as important to a platform's success. For example, Sony
charged a much lower royalty than Nintendo when it introduced its Playstation console ($9 as opposed to $18),
which helped it attract a greater number of third-party software developers (Coughlan, 2001).
To determine the optimal royalty, it is useful to understand why royalties need to be positive at all. The theoretical two-
sided market literature (c.f. Armstrong (2006); Rochet and Tirole (2006); Weyl (2010)) has focused on precisely
this question in related networked industries, and emphasized how changing the division of pricing between sides
of a platform market can affect platform demand and utilization; Hagiu (2006) focused on the videogame industry in
particular, and analyzed the relationship between a console's optimal royalty rate and optimal hardware price. As
noted before, the videogame industry differs from most other hardware-software markets such as the PC industry in
that the majority of platform profits derive not from the end user or consumer, but rather from the software
developers in the form of royalty payments. However, providing a single explanation of why this occurs within the videogame industry proves difficult, as many theoretical models indicate that which side can multihome, how much one side responds to and values the participation of the other, and the heterogeneity in such preferences can all drastically influence the optimal division of prices.36 Thus, there may be many forces at work; Hagiu (2009) provides another
explanation, in which the more that consumers (p. 97) prefer a variety of software products, the greater a
platform's profits derive from software in equilibrium.
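To convey the flavor of these results, consider a loose paraphrase of the monopoly-platform pricing condition in Armstrong (2006) (a sketch, not the exact statement): if side 1 (consumers) and side 2 (developers) value each other's participation, the profit-maximizing price to side 1 is roughly

p_1 \approx c_1 + m_1 - \alpha_2 n_2,

where c_1 is the cost of serving side 1, m_1 a standard market-power markup, n_2 side-2 participation, and α_2 the benefit an extra side-1 member confers on each side-2 member. The larger the external benefit consumers confer on software developers (which the platform can recapture through royalties), the lower the optimal consumer price, potentially below cost; the analogous condition for the developer side determines the royalty.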
There are difficulties testing these alternative explanations in the data. First, although obtaining measurements of
elasticities of consumers with respect to software (and vice versa) is possible, estimating how software supply
would change in response to a change in royalty rates is difficult; not only is data on royalty rates difficult to come
by, but typically they do not change for a particular console during its lifetime and hence there is little identifying
variation.37 Second, given that certain hit software titles dominate the market and games are increasingly supplied by publishers with market power, it is an open question whether the theoretical results still apply when firms on one side of the market are no longer price-takers but rather strategic oligopolists.
3.2.2. Porting Costs and Compatibility
Another decision on the part of console manufacturers that has not been widely studied is the ability of a platform
provider to affect the costs of developing or “porting” to its console. The theoretical literature has studied the role
of switching costs in influencing market share and power in general networked industries (c.f. Farrell and Klemperer
(2007)), and these issues are central in the videogame market as well. For example, anecdotal evidence suggests
that one of the main reasons for Sony's success in entering the videogame market was that it was easier and
cheaper to develop for the Sony Playstation than rival consoles at the time: in addition to having lower royalty
rates, Sony actively provided development tools and software libraries to third party developers, and it utilized CDs
as opposed to more costly cartridges (the format used by Nintendo consoles at the time). Incidentally, Microsoft
also leveraged lower development costs as a selling point of its first console: as essentially a modified Microsoft
Windows PC with an Intel CPU, the Xbox was extremely easy for existing PC developers to adjust to and develop
games for (Takahasi, 2002).
Relatedly, platform providers can also decide whether or not to make games compatible across consoles, as
opposed to forcing developers to make different versions. Although cross-platform compatibility across competing
consoles has not been witnessed (instead, requiring software developers to port and create multiple versions of a
game), a platform provider could allow for backwards compatibility – that is, a new console being able to play
games released for the previous generation console. One widely cited advantage of Sony's PS2 over its
competitors at the time was its compatibility with original Playstation games; this gave it an accessible library of
over a thousand games upon release, easily surpassing the number of playable titles on any of its competitors.38
Interestingly, the PS3 initially could play PS2 games, but newer versions of the console eliminated this ability; this
suggests that the benefits to backwards compatibility are most substantial early in a console's life, before current-generation games are widely available, and later may not be worth the cost.39
(p. 98) 3.2.3. Exclusivity and Integration
Although there is some degree of hardware differentiation across consoles, the primary means by which consoles
compete with one another for consumers (in addition to price) is through the availability of exclusive games.40
Before Sony entered the video-game business in 1993 with its Playstation console, it purchased a major software
developer in addition to securing agreements for several exclusive titles (Kent, 2001). Similarly, before launching
the Xbox in 2001, Microsoft bought several software developers to produce exclusive games; many attribute the
(relative) success of Microsoft's Xbox console to its exclusive game Halo, acquired in 2000. In both instances,
having high-quality games available at a console's release that were not available on competing consoles contributed to greater sales.
A platform typically obtains an exclusive game in one of two ways: via internal development by an integrated
developer, or via payment to a third party developer. In recent years as development costs for games have been
increasing and porting costs have fallen as a percentage of total costs, most third-party titles have chosen to
multihome and support multiple consoles in order to maximize their number of potential buyers. Thus, even though
exclusive arrangements still occur for third-party titles, they are now increasingly used for only a temporary period
of time (e.g., six months), and console providers have become even more reliant on their own first-party titles to
differentiate themselves.
In general, understanding how platforms obtain exclusive content – either via integration or exclusive contracting –
requires a model of bilateral contracting with externalities between console manufacturers and software
developers (c.f. Segal (1999); Segal and Whinston (2003); de Fontenay and Gans (2007)). For example, the price
Sony would need to pay a software developer for exclusivity depends crucially on how much Sony would benefit,
as well as how much Sony would lose if Microsoft obtained exclusivity over the title instead. Unfortunately, the
applicability of theory to settings with multiple agents on both sides of the market is limited (there are at least three
major console manufacturers and multiple software publishers and developers), and is even further confounded by
the presence of dynamics.41 Although static models of bargaining for exclusivity have been analyzed,42 a general
model that can be taken to the data and inform our ability to predict which games or developers would be
exclusive, and the determinants of the negotiated price, would be extremely useful.43
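As a concrete (and heavily simplified) illustration of the forces involved, the following toy computation treats exclusivity as being allocated by a bidding contest between two platforms, with each platform's willingness to pay measured against the threat of the title going exclusive to its rival; all profit numbers are invented, and the bargaining protocol is only one of many possible.

# Hypothetical per-period profits (in $M) from one "hit" title; all numbers are made up.
platform_profit = {
    "A": {"excl_A": 120, "excl_B": 40, "multihome": 80},
    "B": {"excl_A": 30, "excl_B": 90, "multihome": 60},
}
# Developer's variable profit under each allocation (before any exclusivity payment).
dev_profit = {"excl_A": 25, "excl_B": 22, "multihome": 35}

# A platform's willingness to pay for exclusivity: what it gains relative to the
# title being exclusive to its rival (the relevant threat in a bidding contest).
wtp = {
    "A": platform_profit["A"]["excl_A"] - platform_profit["A"]["excl_B"],
    "B": platform_profit["B"]["excl_B"] - platform_profit["B"]["excl_A"],
}
winner = max(wtp, key=wtp.get)
loser = "B" if winner == "A" else "A"
# In an English-auction benchmark the winner pays roughly the loser's willingness to pay.
payment = wtp[loser]

# The developer accepts only if the payment covers the profit it forgoes by not multihoming.
forgone = dev_profit["multihome"] - dev_profit[f"excl_{winner}"]
exclusive_deal = payment >= forgone
print(f"WTP: {wtp}, winner: {winner}, benchmark payment: {payment}")
print(f"Developer forgoes {forgone} by going exclusive -> deal struck: {exclusive_deal}")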
3.2.4. Other Concerns
Ultimately, one of the biggest hurdles in bringing the theory to the data may very well be identifying the incentives
each major platform provider faces. Both Sony and Microsoft have multiple other platform businesses which are
affected by decisions made within their videogame divisions. For example, Sony faced a much higher marginal cost
than its competitors as a result of including its proprietary Blu-ray player in its PS3 console; such a decision was a
response to its desire to win the standards battle for a next-gen DVD format over consumer electronics rival
Toshiba. (p. 99) In addition, Microsoft viewed the Xbox as partly a means of protecting and expanding its
Windows and PC software business during an era of digital convergence (Takahasi, 2002). In both cases, each
company sustained large initial losses in their videogame divisions ($4B in the first four years of the Xbox for
Microsoft, $3.3B for Sony in the first two years of its PS3),44 but focusing on these numbers alone would understate
the total benefits each company received.45 Furthermore, there is again a dynamic aspect: had Microsoft not
entered in 2000 with a viable platform, it would have had a more difficult time releasing its Xbox360 device in 2005.
Determining the appropriate scope across industries and time horizon each company faces when making strategic
decisions is an open challenge.
3.3. Vertical Issues
3.3.1. Exclusive Software for Consoles
The forced exclusivity contracts employed by Nintendo in the 1980s – whereby developers could develop exclusively for Nintendo or not at all – were dropped under legal and regulatory pressure in 1990. Since then,
many have argued that these were anticompetitive not only in videogames (e.g., Shapiro (1999)), but in other
industries (e.g., U.S. v. Visa) as well. Nonetheless, exclusive games persist. A natural question, thus, is whether the
continued presence of exclusive first-party games developed internally by platforms, or the use of lump-sum
payments by platforms in exchange for exclusivity from third-party software developers, can be anticompetitive.
Theory has shown that the effects of such exclusive vertical relationships can be ambiguous. Such relationships can be used to deter entry or foreclose rivals (Mathewson and Winter (1987), Rasmusen, Ramseyer, and Wiley (1991), Bernheim and Whinston (1998)), which may be exacerbated by the presence of network externalities (e.g.,
Armstrong and Wright (2007)).46 Furthermore, exclusivity can limit consumer choice and hence welfare by
preventing consumers on competing platforms from accessing content, products, or services available only
elsewhere. On the other hand, exclusive arrangements may have pro-competitive benefits, such as encouraging
investment and effort provision by contracting partners (Marvel (1982), Klein (1988), Besanko and Perry (1993), Segal and Whinston (2000)). In networked industries, integration by a platform provider may be effective in solving
the “chicken-and-egg” coordination problem, one of the fundamental barriers to entry discussed in the two-sided
market literature. Furthermore, exclusivity may be an integral tool used by entrant platforms to break into
established markets: by preventing contracting partners from supporting the incumbent, an entrant can gain a
competitive advantage, spur adoption of its own platform, and thereby spark greater platform competition.
Both Prieger and Hu (2010) and Lee (2010b) attempt to shed light on this question in the context of the sixth
generation of the videogame industry (2000–2005). (p. 100) Prieger and Hu (2010) use a demand model similar to
Clements and Ohashi (2005) to show that the marginal exclusive game does not affect console demand;
consequently, the paper suggests that a dominant platform cannot rely on exclusive titles to dominate the market.
However, as already discussed in this chapter, controlling for heterogeneity in game quality is crucial, and cannot
be captured in a model where consumers only value the number of software products: estimates from Lee (2010a)
show that games that actually could drastically affect hardware market shares were primarily integrated or
exclusively provided to only one console. Thus, insofar as the few hit games onboard the largest platform of the time
period studied could have contributed to its dominant position, exclusive vertical arrangements may have led to
increased market concentration.
To explore this possibility, Lee (2010b) constructs a counterfactual environment in which exclusive vertical arrangements were prohibited in the industry during the time period studied: that is, hardware providers could neither produce exclusive software themselves nor write exclusive contracts with software providers. Using the techniques
described in the previous section and demand estimates from Lee (2010a), Lee (2010b) simulates forward the
market structure if all consumers and games (including those that previously had been integrated) could freely
choose which platforms to purchase or join, and solves for the dynamic equilibrium of this game. The main finding,
focusing on the platform adoption decisions of consumers and software, is that banning exclusive arrangements
between hardware platforms and software publishers would have actually benefited Sony, the dominant
“incumbent” platform (with the one-year head start and larger installed base), and harmed the two smaller
platforms (Microsoft and Nintendo) during the time period studied.
The intuition for this result is straightforward: without exclusive arrangements, the developers of high-quality software would typically multihome and support all three consoles; lower-quality titles, constrained by the costs of porting, would likely develop first for the incumbent due to its larger installed base, and only later, if at all, develop a version for either entrant platform. As a result, neither entrant platform would have been able to offer consumers any differentiation or benefit over the incumbent. With exclusivity, however, entrants could create a competitive advantage, and exclusivity was hence leveraged by them to gain traction in this networked industry.47
The paper still notes that even though banning exclusive vertical arrangements may have increased industry
concentration, consumers may have benefited from access to a greater selection of software titles onboard any
given platform: consumer welfare would have increased during the five-year period without exclusivity since a
consumer could access a greater number of software titles while needing to purchase only one console.
Nonetheless, the analysis abstracts away from many potential responses to such a counterfactual policy change:
for example, platform providers are assumed to offer the same non-discriminatory contracts to all firms, investment
and product qualities do not change, and prices, entry, and exit of all products are held fixed. Indeed, the paper
notes that if Sony's prices increased as (p. 101) a result of its increased market share (or if either Nintendo or
Microsoft exited in that generation or the subsequent one, or software supply was adversely affected by the loss of the efficiency benefits of integration and exclusivity), the change to consumer welfare could easily have been
significantly negative.
Thus, although it does appear that both Microsoft and Nintendo benefited from the ability to engage in exclusive
dealing in this period, the effects on consumer welfare are ambiguous; furthermore, in order to paint a complete picture of the effects of integration or exclusivity, one might also wish to examine an environment in which only
certain platforms (e.g., the incumbent or dominant player) could not engage in exclusive contracting, but others
could. Such extensions would require developing additional tools to analyze the broader set of strategic decisions
facing software and hardware firms discussed previously.
4. Concluding Remarks
The home videogame market is but a portion of the entire videogame industry, yet has proven to be a rich testing
ground for models of strategic interaction and theories of platform competition. The literature that has developed,
though still nascent, has shown the potential for tackling myriad issues simply by studying an
industry which once was considered just a curiosity and fad.
Looking forward, the continued growth of the videogame industry has the potential for being both a curse and a
boon for research. On the one hand, as videogames become even more pervasive and intertwined with other industries, the industry becomes – to a certain degree – less suited for “clean” and tractable analysis. Indeed, one of the advantages of studying the home videogame market was precisely the relative ease with which the relevant agents and parties could be identified; going forward, this may no longer be the case. Furthermore, as this chapter
discussed, even retrospective analysis of the videogame market when all the players were known required
sophisticated modeling techniques to handle the industry's complexities, which include controlling for dynamic
issues and accounting for consumer and software heterogeneity. Accounting for even more complicated strategic
interactions poses a daunting challenge.
On the other hand, the success and proliferation of videogames will continue to spawn broader questions and
improve our understanding of general networked industries. At the heart of digital convergence is the story of
dominant platforms in once separated markets suddenly finding themselves to be competitors: much as videogame
consoles encroach upon adjacent markets such as content distribution, so too have firms in other markets – for
example, smartphone manufacturers, social networking platforms – ventured into the gaming business. How this
cross-industry platform competition will play out and adapt to changing environments remains a fascinating topic
for exploration.
References
Adams, C., and P.-L. Yin (2010): “Reallocation of Video Game Consoles on eBay,” mimeo.
Armstrong, M. (2006): “Competition in Two-Sided Markets,” RAND Journal of Economics, 37(3), 668–691.
Armstrong, M., and J. Wright (2007): “Two-Sided Markets, Competitive Bottlenecks, and Exclusive Contracts,”
Economic Theory, 32(2), 353–380.
Bernheim, B. D., and M. D. Whinston (1998): “Exclusive Dealing,” Journal of Political Economy, 106(1), 64–103.
Berry, S. (1994): “Estimating Discrete-Choice Models of Product Differentiation,” RAND Journal of Economics, 25(2),
242–262.
Berry, S., J. Levinsohn, and A. Pakes (1995): “Automobile Prices in Market Equilibrium,” Econometrica, 63(4), 841–
890.
Besanko, D., and M. K. Perry (1993): “Equilibrium Incentives for Exclusive Dealing in a Differentiated Products
Oligopoly,” RAND Journal of Economics, 24, 646–667.
Brandenburger, A. (1995): “Power Play (C): 3DO in 32-bit Video Games,” Harvard Business School Case 795–104.
(p. 105)
Chintagunta, P., H. Nair, and R. Sukumar (2009): “Measuring Marketing-Mix Effects in the Video-Game Console
Market,” Journal of Applied Econometrics, 24(3), 421–445.
Chou, C., and O. Shy (1990): “Network Effects without Network Externalities,” International Journal of Industrial
Organization, 8, 259–270.
Church, J., and N. Gandal (1992): “Network Effects, Software Provision, and Standardization,” Journal of Industrial
Economics, 40(1), 85–103.
Clements, M. T., and H. Ohashi (2005): “Indirect Network Effects and the Product Cycle: U.S. Video Games, 1994–2002,” Journal of Industrial Economics, 53(4), 515–542.
Corts, K. S., and M. Lederman (2009): “Software Exclusivity and the Scope of Indirect Network Effects in the US
Home Video Game Market,” International Journal of Industrial Organization, 27(2), 121–136.
Coughlan, P. J. (2001): “Note on Home Video Game Technology and Industry Structure,” Harvard Business School
Case 9-700-107.
Dranove, D., and N. Gandal (2003): “The DVD vs. DIVX Standard War: Empirical Evidence of Network Effects and
Preannouncement Effects,” Journal of Economics and Management Strategy, 12, 363–386.
Dubé, J.-P., G. J. Hitsch, and P. Chintagunta (2010): “Tipping and Concentration in Markets with Indirect Network
Effects,” Marketing Science, 29(2), 216–249.
Einav, L. (2007): “Seasonality in the U.S. Motion Picture Industry,” RAND Journal of Economics, 38(1), 127–145.
Eisenmann, T., and J. Wong (2005): “Electronic Arts in Online Gaming,” Harvard Business School Case 9-804-140.
Evans, D. S., A. Hagiu, and R. Schmalensee (2006): Invisible Engines: How Software Platforms Drive Innovation and
Transform Industries. MIT Press, Cambridge, MA.
Farrell, J., and P. Klemperer (2007): “Coordination and Lock-In: Competition with Switching Costs and Network
Effects,” in Handbook of Industrial Organization, ed. by M. Armstrong, and R. Porter, vol. 3. North-Holland Press,
Amsterdam.
Farrell, J., and G. Saloner (1986): “Installed Base and Compatibility: Innovation, Product Preannouncements, and
Predation,” American Economic Review, 76, 940–955.
Gandal, N., M. Kende, and R. Rob (2000): “The Dynamics of Technological Adoption in Hardware/Software
Systems: the Case of Compact Disc Players,” RAND Journal of Economics, 31, 43–61.
Gil, R., and F. Warzynski (2009): “Vertical Integration, Exclusivity and Game Sales Performance in the U.S. Video
Game Industry,” mimeo.
Gowrisankaran, G., and M. Rysman (2007): “Dynamics of Consumer Demand for New Durable Goods,” mimeo.
Hagiu, A. (2006): “Pricing and Commitment by Two-Sided Platforms,” RAND Journal of Economics, 37(3), 720–737.
—— (2009): “Two-Sided Platforms: Product Variety and Pricing Structures,” Journal of Economics and Management
Strategy, 18(4), 1011–1043.
Hagiu, A., and R. S. Lee (2011): “Exclusivity and Control,” Journal of Economics and Management Strategy, 20(3),
679–708.
Hendel, I., and A. Nevo (2006): “Measuring the Implications of Sales and Consumer Inventory Behavior,”
Econometrica, 74(6), 1637–1673.
Katz, M., and C. Shapiro (1985): “Network Externalities, Competition, and Compatibility,” American Economic
Review, 75, 424–440. (p. 106)
Kent, S. L. (2001): The Ultimate History of Video Games. Three Rivers Press, New York, NY.
Klein, B. (1988): “Vertical Integration as Organizational Ownership: The Fisher Body-General Motors Relationship
Revisited,” Journal of Law, Economics and Organization, 4, 199–213.
Lee, R. S. (2010a): “Dynamic Demand Estimation in Platform and Two-Sided Markets,” mimeo,
http://pages.stern.nyu.edu/~rslee/papers/DynDemand.pdf.
—— (2010b): “Vertical Integration and Exclusivity in Platform and Two-Sided Markets,” mimeo,
http://pages.stern.nyu.edu/~rslee/papers/VIExclusivity.pdf.
Lee, R. S., and K. Fong (2012): “Markov-Perfect Network Formation: An Applied Framework for Bilateral Oligopoly
and Bargaining in Buyer-Seller Networks” mimeo,
http://pages.stern.nyu.edu/~rslee/papers/MPNENetworkFormation.pdf
Liu, H. (2010): “Dynamics of Pricing in the Video Game Console Market: Skimming or Penetration?,” Journal of
Marketing Research, 47(3), 428–443.
Marvel, H. P. (1982): “Exclusive Dealing,” Journal of Law and Economics, 25, 1–25.
Mathewson, G. F., and R. A. Winter (1987): “The Competitive Effects of Vertical Agreements,” American Economic
Review, 77, 1057–1062.
Melnikov, O. (2001): “Demand for Differentiated Products: The case of the U.S. Computer Printer Market,” mimeo.
Nair, H. (2007): “Intertemporal Price Discrimination with Forward-looking Consumers: Application to the US Market
for Console Video-Games,” Quantitative Marketing and Economics, 5(3), 239–292.
Nair, H., P. Chintagunta, and J.-P. Dubé (2004): “Empirical Analysis of Indirect Network Effects in the Market for
Personal Digital Assistants,” Quantitative Marketing and Economics, 2(1), 23–58.
Ohashi, H. (2003): “The Role of Network Effects in the U.S. VCR Market: 1978–1986,” Journal of Economics and
Management Strategy, 12, 447–496.
Pakes, A., J. Porter, K. Ho, and J. Ishii (2006): “Moment Inequalities and Their Application,” mimeo, Harvard
University.
Prieger, J. E., and W.-M. Hu (2010): “Applications Barriers to Entry and Exclusive Vertical Contracts in Platform
Markets,” Economic Inquiry, forthcoming.
Rasmusen, E. B., J. M. Ramseyer, and J. S. Wiley (1991): “Naked Exclusion,” American Economic Review, 81(5),
1137–1145.
Rey, P., and J. Tirole (2007): “A Primer on Foreclosure,” in Handbook of Industrial Organization, ed. by M.
Armstrong, and R. Porter, vol. 3. North-Holland Press, Amsterdam.
Riordan, M. H. (2008): “Competitive Effects of Vertical Integration,” in Handbook of Antitrust Economics, ed. by P.
Buccirossi. MIT Press, Cambridge, MA.
Rochet, J.-C., and J. Tirole (2006): “Two-Sided Markets: A Progress Report,” RAND Journal of Economics, 37(3),
645–667.
Rust, J. (1987): “Optimal replacement of GMC bus engines: An empirical model of Harold Zurcher,” Econometrica,
55, 999–1033.
Segal, I. (1999): “Contracting with Externalities,” Quarterly Journal of Economics, 114(2), 337–388.
Segal, I., and M. D. Whinston (2000): “Exclusive Contracts and Protection of Investments,” RAND Journal of
Economics, 31, 603–633.
—— (2003): “Robust Predictions for Bilateral Contracting with Externalities,” Econometrica, 71(3), 757–791. (p.
107)
Shankar, V., and B. L. Bayus (2003): “Network Effects and Competition: An Empirical Analysis of the Home Video
Game Market,” Strategic Management Journal, 24(4), 375–384.
Shapiro, C. (1999): “Exclusivity in Network Industries,” George Mason Law Review, 7, 673–683.
Stennek, J. (2007): “Exclusive Quality – Why Exclusive Distribution May Benefit the TV Viewers,” IFN WP 691.
Takahasi, D. (2002): Opening the Xbox. Prima Publishing, Roseville, CA.
Weyl, E. G. (2010): “A Price Theory of Multi-Sided Platforms,” American Economic Review, 100(4).
Whinston, M. D. (2006): Lectures on Antitrust Economics. MIT Press, Cambridge, MA.
Notes:
(1.) DFC Intelligence, 2009. 2010 Essential Facts, Entertainment Software Association.
(2.) Ibid.
(3.) Ibid.
(4.) 2010 Media Industry Fact Sheet, Nielsen Media Research.
(5.) Sony also released a general purpose computer system called the MSX in 1983 that could be used for gaming.
(6.) http://www.scei.co.jp/corporate/release/100118e.html.
(7.) 2010 Media Industry Fact Sheet, Nielsen Media Research.
(8.) “Natal vs. Sony Motion Controller: is the console cycle over?” guardian.co.uk, February 26, 2010.
(9.) ESA 2010 Essential Facts.
(10.) Originally, the first videogame consoles were essentially integrated hardware and software devices provided
by a single firm; not until after 1976, with the release of the Fairchild Channel F and the Atari VCS, were other firms
able to produce software for videogame consoles via the use of interchangeable cartridges.
(11.) Average costs reached $6M during the late 1990s (Coughlan, 2001), and today can range between $20 and $30M for the PS3 and Xbox360 (“The Next Generation of Gaming Consoles,” CNBC.com, June 12, 2009).
(12.) E.g., “Call of Duty: Modern Warfare 2 tops $1 billion in sales,” Los Angeles Times, January 14, 2010.
(13.) 3DO was a console manufacturer that tried a different pricing scheme by significantly marking up its
hardware console, but charging no software royalties. 3DO survived for 3 years before filing for bankruptcy in
2003 (Brandenburger, 1995).
(14.) See Hagiu, “Software Platforms,” chapter 3 in this Handbook for more discussion.
(15.) “Will Xbox drain Microsoft?,” CNET News, March 6, 2001. “Microsoft's Midlife Crisis,” Forbes, September 13,
2005.
(16.) “Sony Eyes Return to Profit,” Wall Street Journal, May 14, 2010.
(17.) “Nintendo takes on PlayStation, Xbox,” Reuters, September 14, 2006.
(18.) A notable exception is “backwards compatibility,” which refers to the ability of a new console to use software
developed for the previous version of that particular console. E.g., the first version of the PS3 could play PS2
games, and the PS2 could play PS1 games; the Xbox360 can play Xbox games.
(19.) Industry sources; Eisenmann and Wong (2005) cite $1M as the porting cost for an additional console for the
sixth generation of platforms.
(20.) “Nintendo to ease restrictions on U.S. game designers,” The Wall Street Journal, October 22, 1990. Kent
(2001).
(21.) This may also partially be a result of the fact that no console has since matched Nintendo's 80–90 percent
market share achieved in the late 1980s.
(22.) ESA Essential Facts, 2010.
(23.) “Video Game Culture: Leisure and Play Preferences of B.C. Teens,” Media Awareness Network, 2005.
(24.) “Digital Gaming in America Survey’ Results,” Gaming Age, August 12, 2002.
(27.) Whether the negative competition effect between two substitutable games dominates this positive network
effect depends on the relative elasticities for adoption, which in turn typically depends on how early it is in a
console's lifecycle.
(28.) See also Dubé, Hitsch, and Chintagunta (2010) for the derivation of a similar estimating equation.
(29.) Corts and Lederman (2009) also find evidence of “cross-platform” network effects from 1995 to 2005: i.e.,
given the ability of software to multihome, software supply for one console was shown to be responsive to the
installed bases across all platforms; as a result, users on one console could benefit from users on another
incompatible console in that their presence would increase software supply for all consoles.
(30.) Estimation of the model follows by matching predicted market shares for each hardware and software product
over time from the model with those observed in the data (obtained from the NPD Group), and minimizing a GMM
criterion based on a set of conditional moments. The main identifying assumption is that every product's one
dimensional unobservable characteristic (for hardware, represented by ξ in (3)) evolves according to an AR(1)
process, and innovations in these unobservables are uncorrelated with a vector of instruments.
(31.) Since videogames are durable goods, keeping track of each consumer’s inventory and subsequent choice
sets for over 1500 games was computationally infeasible. However, both Nair (2007) and Lee (2010a) provide
evidence which suggests independence may not be unreasonable for videogames.
(32.) Whether or not consumer beliefs can be estimated or elicited without imposing an assumption such as rational
expectations is an important area of research for dynamic demand estimation in general.
(33.) Nair (2007) provides anecdotal evidence that managers follow rules-of-thumb pricing strategies in which
prices are revised downward if sales are low for a game, and keep prices high if sales are high. There is also
evidence that consumers prefer newer games over older ones (e.g., Nair (2007) and Lee (2010a) both find
significant decay effects in the quality of a game over time).
(34.) The analysis ignores games that are contractually exclusive, which are discussed later in this chapter; it
furthermore assumes publishers maximize profits individually for each game.
(35.) E.g., Gil and Warzynski (2009) study videogames released between 2000 and 2007 and find reduced form
evidence that indicates once release timing and marketing strategies are controlled for, vertically integrated games
are not of higher quality than non-integrated games. However, regressions on the software fixed effects recovered
in Lee (2010a) for a similar time period show first-party games are generally of higher quality.
(36.) See also Evans, Hagiu, and Schmalensee (2006) for discussion.
(37.) Appealing to cross-platform variation in royalty rates would require considerable faith that other console
specific factors affecting software supply can be adequately controlled for.
(38.) Nintendo and Microsoft followed suit with their seventh generation consoles.
(39.) The original PS3 console included the PS2 graphic chip, which was eliminated in subsequent versions.
(40.) Clearly, any game that multihomes and is available on multiple systems yields no comparative advantage
across consoles.
(41.) For instance, the gains to exclusivity depend on the age of the console (among other things), and platforms
may choose to divest integrated developers later. E.g., Microsoft acquired the developer Bungie prior to the launch of the original Xbox in 2000; in 2007, Bungie was spun off, as Microsoft reasoned it would be more profitable if it could
publish for other consoles (“Microsoft, ‘Halo’ maker Bungie split,” The Seattle Times, October 6, 2007).
(42.) For example, Hagiu and Lee (2011) apply the framework of Bernheim and Whinston (1998) to analyze
exclusive contracting in platform industries; see also Stennek (2007).
(43.) See Lee and Fong (2012) for progress along these lines.
(44.) “Microsoft's Midlife Crisis,” Forbes, September 13, 2005; “PlayStation Poorhouse,” Forbes, June 23, 2008.
(45.) Further confounding matters are each console manufacturer's online gaming businesses; Microsoft's online
service generates over $1B a year (“Microsoft's Online Xbox Sales Probably Topped $1 Billion,” Bloomberg, July 7,
2010), and all 3 current-generation platforms have downloadable gaming stores as well.
(46.) Whinston (2006), Rey and Tirole (2007), and Riordan (2008) overview the theoretical literature on vertical
foreclosure and the competitive effects of exclusive vertical arrangements.
(47.) Note that had Sony's exclusive titles been significantly higher quality than those onboard Microsoft's or
Nintendo's consoles, this result may have been different: i.e., even though the two entrant platforms would have
lost their exclusive titles, they would have gained access (albeit non-exclusively) to Sony's hit exclusive titles.
Nevertheless, demand estimates clearly indicate this was not the case. The question of how Nintendo and Microsoft
were able to get access to higher quality software in the first place is beyond the scope of the paper, as it requires
addressing questions raised in the previous section regarding software supply and hardware-software
negotiations.
Robin S. Lee
Robin S. Lee is Assistant Professor of Economics at the Stern School of Business at New York University.
Digitization of Retail Payments
Oxford Handbooks Online
Digitization of Retail Payments
Wilko Bolt and Sujit Chakravorti
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Online Publication Date: Nov 2012
Subject: Economics and Finance, Economic Development
DOI: 10.1093/oxfordhb/9780195397840.013.0005
Abstract and Keywords
This article reviews the research on electronic payment systems. Debit, credit, and prepaid are three forms of
payment card. The rapid growth in the use of electronic payment instruments, especially payment cards, is a
striking feature of most modern economies. Payment data indicate that strong scale economies exist for electronic
payments. Payment costs can be decreased through consolidation of payment processing operations to realize
economies of scale. Competition does not necessarily improve the structure of prices in two-sided markets. The
ability of merchants to charge different prices is a powerful incentive to convince consumers to use a certain
payment instrument. The effect of interventions in Australia, Spain, the European Union, and the United States is
dealt with. The theoretical literature on payment cards continues to grow. However, there are a few areas of
payment economics that deserve greater attention.
Keywords: debit cards, credit cards, prepaid cards, payment economics, payment cards, Australia, Spain, European Union, United States
1. Introduction
Rapid advancements in computing and telecommunications have enabled us to interact with each other digitally.
Instead of visiting a travel agent for information regarding our next vacation, we can purchase our vacation
package online in the middle of the night. Prior to boarding, we can purchase and download a book to our Kindle to
read during the flight. We no longer have to return home to share our vacation experience with our friends and
family but can instead share our digital pictures taken with our iPhone via email or post them on Facebook. Despite
the digital economy being upon us, we still rely on paper payment instruments such as cash, checks, and paper
giros for a significant share of face-to-face and remote bill payments in advanced economies. While we have not attained the cashless society, we have made significant strides toward adopting electronic payment instruments.
The proliferation of payment cards continues to change the way consumers shop and merchants sell goods and
services. Recently, some merchants have started to accept only card payments for safety and convenience
reasons. For example, several US airlines only accept payment cards for inflight purchases on all their domestic
routes. Also, many quick service restaurants and coffee shops now accept payment cards to capture greater sales
and increase transaction speed. Wider acceptance and usage of payment cards suggest that a growing number of
(p. 109) consumers and merchants prefer payment cards to cash and checks. Furthermore, without payment
cards, Internet sales growth would have been substantially slower.
The increased usage of cards has increased the value of payment networks, such as Visa Inc., MasterCard
Worldwide, Discover Financial Services, and others. In 2008, Visa Inc. had the largest initial public offering (IPO) of
equity, valued at close to $18 billion, in US history (Benner, 2008). The sheer magnitude of the IPO suggests that
financial market participants value Visa's current and future profitability as a payment network.
Over the last decade or so, public authorities have questioned the underlying fee structures of payment networks
and often intervened in these markets.1 The motivation to intervene varies by jurisdiction. Public authorities may
intervene to improve the incentives to use more efficient payment instruments. For example, they may encourage
electronic payments over cash and checks. Public authorities may also intervene because fees are “too high.”
Finally, public authorities may enable adoption of payment standards that may be necessary for market
participants to invest in new payment instruments and channels especially during times of rapid innovation and
competing standards.
In this chapter, we emphasize regulation of certain prices in the retail payment system. To date, there is still little
consensus—either among policymakers or economic theorists—on what constitutes an efficient fee structure for
payments. There are several conclusions that we draw from the academic literature. First, there are significant
scale economies, and likely scope economies, both in attracting consumers and merchants and in payment processing. Second, cross-subsidies between consumers and merchants may be socially optimal, suggesting that there are benefits to having a limited number of networks. Third, allowing merchants to set prices that differ by payment instrument generally yields prices that more accurately reflect the underlying costs to all participants. Fourth, merchant, card issuer, or network competition may result in lower social welfare, contrary to standard economic principles. Finally, if public authorities choose to intervene in payments markets, they should consider not only the costs of payment processing but also the benefits received by consumers and merchants, such as convenience, security, and access to credit, which may result in greater sales.
The rest of our article is organized as follows. We first explain the structure of payment networks. Having
established a framework, we discuss consumer choice and the migration to electronic payments. Next, we
describe the provision of payment services emphasizing the economies of scale and scope that are generally
present. Then, we summarize the key contributions to the theoretical payment card literature focusing on
economic surplus and cross subsidies and their impact on social welfare. In the following section, we discuss
several market interventions by public authorities. Finally, we offer some concluding remarks and suggest future
areas for research.
(p. 110) 2. Structure of Payment Markets
Most payment card transactions occur in three- or four-party networks.2 These networks comprise consumers and their banks (known as issuers), as well as merchants and their banks (known as acquirers). Issuers and acquirers are part of a network that sets the rules and procedures for clearing and settling payment card receipts among its members. In principle, other forms of electronic payment, such as credit transfers and direct debits, have the same three- or four-party structure.
Figure 5.1 Payment Card Fees.
Source: Bolt and Chakravorti (2008b).
In Figure 5.1, we diagram the four participants and their interactions with one another. First, a consumer establishes
a relationship with an issuer and receives a payment card. Consumers often pay annual membership fees to their
issuers. They generally do not pay per transaction payment card fees to their banks. On the contrary, some
payment card issuers (this is more common for credit card issuers) give their customers per transaction rewards, such as cash back or other frequent-use rewards. Second, a consumer makes a purchase from a merchant.
Generally, the merchant charges the same price regardless of the type of payment instrument used to make the
purchase. Often the merchant is restricted from charging more for purchases that are made with payment cards.
These rules are called no-surcharge rules.3 Third, if a merchant has established a relationship with an acquirer, it
is able to accept payment card transactions. The merchant either pays a fixed per transaction fee (more common
for debit cards) or a proportion of the total purchase amount, known as the merchant discount fee (more common
for credit cards), to its acquirer.4 For credit cards, the merchant discount can range from one percent to five percent depending on the type of transaction, the type of merchant, the (p. 111) type of card, whether the merchant can swipe the physical card, and other factors. Fourth, the acquirer pays an interchange fee to the issuer.
Debit, credit, and prepaid cards are three forms of payment cards. Debit cards allow consumers to access funds at
their banks to pay merchants; these are sometimes referred to as “pay now” cards because funds are generally
debited from the cardholder's account within a day or two of a purchase.5 Credit cards allow consumers to access
lines of credit at their banks when making payments and can be thought of as “pay later” cards because
consumers pay the balance at a future date.6 Prepaid cards can be referred to as “pay before” cards because
they allow users to pay merchants with funds transferred in advance to a prepaid account.7
3. Payment Choice and Migration to Electronic Payments
The rapid growth in the use of electronic payment instruments, especially payment cards, is a striking feature of
most modern economies. In Table 5.1, we have listed the annual per capita payment transactions for ten advanced
economies in 1988 and 2008. In all cases, there was tremendous growth, but countries differ significantly from one
another and over time. For example, Italy had 0.33 and the United States 36.67 annual payment card transactions per capita in 1988; by 2008, these figures had risen to 23.5 and 191.1, respectively. Also note that
differences within Europe remain large. Countries like Italy, Germany, and Switzerland still have a strong
dependence on cash use, whereas countries like the Netherlands, France, and Sweden show high annual per
capita payment card volumes. Amromin and Chakravorti (2009) find that greater usage of debit cards resulted in
lower demand for the small-denomination banknotes and coins that are used to make change, although demand for large-denomination notes has not been affected. From Table 5.1, the United States, where credit cards have
traditionally been popular at the point of sale, shows the highest annual per capita payment card use in 2008.
The payment literature stresses consumer payment choice and merchant acceptance in response to price and non-price characteristics. Attempts to determine the main drivers of changes in payment composition across and within countries are hampered by a lack of time-series data, which are often reported only as annual national-level aggregates, and data on aggregate cash usage are especially difficult to obtain. Quantifying the pace of migration from paper-based toward electronic means of payment is therefore difficult. Given these data problems, (p. 112) payment researchers have tried to infer consumer payment behavior from household
surveys in Europe and the United States (Stavins, 2001; Hayashi and Klee, 2003; Stix, 2003; Bounie and Francois,
2006; Mester, 2006; Klee, 2008; and Kosse, 2010). Analysis of demographic data indicates that age, education,
and income influence the adoption rates of newer electronic forms of payment. However, Schuh and
Stavins (2010) show that demographic influences on payment instrument use are often of less importance than the
individuals’ assessment of the relative cost, convenience, safety, privacy, and other characteristics of different
payment instruments. Other survey-based studies by Jonker (2007) and Borzekowski, Kiser, and Ahmed (2008) that
have incorporated similar payment characteristics find that payment instrument characteristics (real or perceived)
importantly augment the socio-demographic determinants of the use of electronic payment instruments.8
Table 5.1 Annual Per Capita Card Transactions, 1988 and 2008

Country            1988     2008     Percent Change
Belgium            6.23     87.1     1298
Canada             28.34    187.8    563
France             15.00    102.0    580
Germany            0.76     27.3     3492
Italy              0.33     23.5     7021
Netherlands        0.34     113.7    33,341
Sweden             5.45     176.5    3139
Switzerland        2.34     62.8     2583
United Kingdom     10.47    123.7    1081
United States      36.67    191.1    421
Source: Committee on Payment and Settlement Systems (1993) and (2010).
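The percent changes in the final column follow from the standard growth formula; as an illustrative check (our own arithmetic, not an additional data source), the entry for Italy is reproduced below.

\[
\%\Delta = \frac{x_{2008} - x_{1988}}{x_{1988}} \times 100, \qquad \text{Italy: } \frac{23.5 - 0.33}{0.33} \times 100 \approx 7{,}021\%.
\]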
The payment characteristics approach (requiring survey information on which characteristics are favored in one
instrument over another) allows estimation of a “price equivalent” trade-off among payment instruments. This
approach applies (ordered) probit/logit models to determine price responsiveness relationships among payment
instruments. Borzekowski, Kiser, and Ahmed (2008) find a highly elastic response to fees imposed on US PIN debit
transactions (an effort by banks to shift users to signature debit cards where bank revenue is higher). Zinman
(2009) finds a strong substitution effect between debit and credit cards during 1995–2004, and he concludes that
debit card use is more common among consumers who are likely to be credit-constrained in the United States.
Another consumer survey suggests that Austrian consumers who often use debit cards hold approximately 20
percent less cash (Stix, 2003). Using French payment data in 2005, Bounie and Francois (2006) estimate the
determinants of the probability of (p. 113) a transaction being paid by cash, check, or payment card at the point
of sale. They find not only a clear effect of transaction size but also evidence that the type of good and the spending location matter for payment instrument choice.
Another approach in the literature has been to infer consumer choice from aggregate data on payment systems
and data from industry sources (e.g. Humphrey, Pulley and Vesala, 2000; Garcia-Swartz, Hahn and Layne-Farrar,
2006; Bolt, Humphrey and Uittenbogaard, 2008). Bolt et al. (2008) try to determine the effect that differential transaction-based pricing of payment instruments has on the adoption rate of electronic payments. This is done by
comparing the shift to electronic payments during 1990–2004 in two countries—one that has transaction pricing
(Norway) and one that does not (the Netherlands). Overall, controlling for country-specific influences, they find that
explicit per-transaction payment pricing induces consumers to shift faster to more efficient electronic payment
instruments. However, non-price attributes, like convenience and safety, as well as terminal availability may play
an even bigger role than payment pricing for payments at the point of sale.
There are only a few empirical retail payment studies that have used merchant- or consumer-level transaction data.
Klee (2008) provides a simple framework that links payment choice to money holdings. In order to evaluate the
model, she uses grocery store scanner data paired with census-tract level demographic information to measure
the influence of transaction costs and interest rate sensitivity on payment choice. The data comprise over 10
million checkout transactions from September to November 2001. Using a Dubin-McFadden (1984) simultaneous
choice econometric specification, Klee finds that a major determinant of consumers’ payment choice is transaction
size, with cash being highly favored for small-value transactions. Analysis of the same dataset shows a marked
transaction-time advantage for debit cards over checks, helping to explain the increasing popularity of the former.
4. Provision of Payment Services
Significant real resources are required to provide payment services. Recent payment cost analyses have shown
that the total cost of a nation's retail payment system may easily approach 1 percent of GDP annually (Humphrey,
2010). Even higher cost estimates can be obtained depending on current payment composition and how much of
bank branch and ATM network costs are included as being essential for check deposit, cash withdrawal, and card
issue and maintenance activity.
On the supply side, cost considerations first induced commercial banks to shift cash acquisition by consumers
away from branch offices to less costly ATMs. Later, similar cost considerations led banks to try to replace cash,
giro and checks with cards using POS terminals although such transactions are likely to lower the (p. 114)
demand for ATM cash withdrawals. Greater adoption of mobile payments and online banking solutions will enable a
further shift from cash and other paper-based instruments toward more digitized payments. Indeed, payment data
—albeit scarcely available—suggest that strong scale economies exist for electronic payments.
4.1. Costs and Benefits of Different Payment Methods
Studying the costs to banks to provide payment services is difficult, given the proprietary nature of the cost data.
However, there are some European studies that attempt to quantify the real resource costs of several payment
services. In these studies, social cost refers to the total cost to society net of any monetary transfers between participants, reflecting the real resource costs in the production and usage of payment services. For the
Netherlands in 2002, Brits and Winder (2005) report that the social costs of all point-of-sale (POS) payments (cash,
debit cards, credit cards, and prepaid cards) amounted to 0.65 percent of GDP. The social cost of payment
services for Belgium in 2003 was 0.75 percent of GDP (Quaden, 2005). Bergman, Guibourg, and Segendorff (2007)
find that the social cost of providing cash, debit card payments, and credit card payments was approximately 0.4
percent of GDP in Sweden for 2002. For Norway, Humphrey, Kim, and Vale (2001) estimate the cost savings from
switching from a fully paper-based system (checks and paper “giro,” or a payment in which a payer initiates a
transfer from her bank to a payee's bank) to a fully electronic system (debit cards and electronic giro) at the bank
level at 0.6 percent of Norway's GDP. Based on a panel of 12 European countries during the period 1987–99,
Humphrey et al. (2006) conclude that a complete switch from paper-based payments to electronic payments could
generate a total cost benefit close to 1 percent of the 12 nations’ aggregate GDP.
These numbers confirm the widespread agreement that the ongoing shift from paper-based payments to electronic
payments may result in significant economic gains. Compared with cash, electronic payments also offer benefits in
terms of greater security, faster transactions, and better recordkeeping; in addition, electronic payments offer
possible access to credit lines.9 Merchants may also benefit from increased sales or cost savings by accepting an
array of electronic payment instruments. However, these benefits are often difficult to quantify.
Using US retail payments data, Garcia-Swartz, Hahn, and Layne-Farrar (2006) attempt to quantify both the costs
and benefits of POS payment instruments.10 They estimate costs and benefits of different components of the
payment process and subtract out pure transfers among participants. They find that shifting payments from cash
and checks to payment cards improves social welfare as measured by the aggregate surplus to consumers,
merchants, and payment providers. However, they also conclude that merchants may pay more for certain
electronic payment instruments than some paper-based instruments.
(p. 115)
4.2. Economies of Scale and Scope in Payments
As more consumers and merchants adopt payment cards, providers of these products may benefit from economies
of scale and scope. Size and scalability are important in retail payment systems due to their relatively high capital
intensity. In general, electronic payment systems require considerable up-front investments in processing
infrastructures, require highly secure telecommunication facilities and data storage, and apply complex operational
standards and protocols. As a consequence, unit cost should fall as payment volume increases (when
appropriately corrected for changes in labor and capital costs). In addition, scope economies come into play when
different payment services can be supplied on the same electronic network in a more cost-efficient way than the
“stand-alone” costs of providing these services separately.
In the United States, being able to operate on a national level allowed some issuers (banks that issue cards to
consumers), acquirers (banks that convert payment card receipts into bank deposits for merchants), and payment
processors to benefit from economies of scale and scope. We discuss two large consolidations that occurred
within the Federal Reserve System over the last two decades that resulted in large cost savings. First, the Federal
Reserve's real-time gross settlement large-value payment system, Fedwire, consolidated its 12 separate payment
processing sites into a single site in 1996. As a result, Fedwire's average cost per transaction fell by about 62
percent in real terms from scale economies due to the expanded volume and technological change which lowered
processing and telecommunication costs directly.11 A similar process occurred in Europe, where in 2007, 15 separate national real-time gross settlement systems were consolidated onto a single technical platform, TARGET2, which guaranteed a harmonized level of service for European banks combined with a single transaction price for domestic and cross-border payments.
Second, with the passage of Check Clearing for the 21st Century Act in 2003 and the reduction in the number of
checks written in the United States, the Federal Reserve reduced the number of its check processing sites from 45
to 1 by March 2010. Today, almost 99 percent of checks are processed as images, thus enabling greater
centralization in check processing. In addition, many checks are converted to ACH payments. Both check imaging
and conversion have resulted in significant cost savings to the Federal Reserve and market participants.
Some European payment providers might enjoy similar scale and scope benefits in the future as greater cross-border harmonization occurs with the introduction of the Single Euro Payments Area (SEPA).12 The goal of SEPA,
promoted by the European Commission, is to facilitate the emergence of a competitive, intra-European goods
market by making cross-border payments as easy as domestic transactions. Separate domestic national payments
infrastructures are to be replaced with a pan-European structure that would lower payment costs through
economies of scope and scale. Volume expansion can best be achieved by consolidating processing operations
across European borders.
(p. 116) One of the first European scale economies studies on payment systems was carried out by Khiaonarong
(2003). He estimates a simple loglinear cost function by using data of 21 payment systems and finds substantial
scale economies.13 In Bolt and Humphrey (2007), a data set including 11 European countries over 18 years is
used to explain movements of operating costs in the banking sector as a function of transaction volumes of four
separate payment and delivery instruments (card payments, bill payments, ATMs, and branch offices), controlling
for wages and capital costs. Their primary focus is on scale economies of card payments. In particular, using a
translog function specification, the average scale economy is (significantly) estimated in the range 0.25–0.30,
meaning that a doubling of payment volume corresponds to only a 25 to 30 percent increase in total costs.
Consequently, volume expansion should lead to significantly lower average costs per transaction. Based on cost
data specific to eight European payment processor operations over the period 1990 to 2005, Beijnen and Bolt
(2009) obtain similar estimates of payment scale economies, which allow them to quantify the potential benefits of
SEPA arising from consolidation of electronic payment processing centers across the euro area. Finally, Bolt and
Humphrey (2009) estimate payment scale and scope economies using previously unavailable (confidential)
individual bank data for the Netherlands from 1997 to 2005. Their analysis confirms the existence of strong
payment scale economies, thus furthering the goal of SEPA.
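To see what a cost elasticity in this range implies for unit costs, the following back-of-the-envelope calculation may help; it uses the chapter's reading of the estimate and is an illustration rather than a figure reported in the studies cited. If doubling volume raises total cost by only 25 to 30 percent, the average cost per transaction falls to

\[
\frac{AC_{\text{after}}}{AC_{\text{before}}} = \frac{(1.25 \text{ to } 1.30)\,C}{2C} \approx 0.63 \text{ to } 0.65,
\]

that is, a reduction of roughly 35 percent per transaction, which is the sense in which volume expansion lowers average costs.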
One key result stands out: payment costs can be markedly reduced through consolidation of payment processing
operations to realize economies of scale. Ultimately, this allows banks, consumers, and merchants to benefit from
these cost efficiencies in the form of lower payment fees. However, how each participant benefits from this
reduction in payment costs and exactly how it is allocated in terms of lower payment and service fees, lower loan
rates, higher deposit rates, or higher bank profit is an issue of great interest to public authorities.
5. Economic Surplus and Cross Subsidies
To study the optimal structure of fees between consumers and merchants in payment markets, economists have
developed the two-sided market or platform framework. This literature combines the multiproduct firm literature,
which studies how firms set prices on more than one product, with the network economics literature, which studies
how consumers benefit from the increased participation of other consumers in the network.14 The price structure or balance
is the share of the total price of the payment service that each type of end-user pays. Rochet and Tirole (2006b)
define a two-sided market as a market where end-users are unable to negotiate prices (p. 117) among
themselves and the price structure affects the total volume of transactions.15 In practice, no-surcharge rules do not allow consumers and merchants to negotiate prices based on the underlying costs of the
payment instrument used. Furthermore, even in jurisdictions where such practices have been outlawed, most
merchants have been reluctant to differentiate their prices. An important empirical observation of two-sided
markets is that platforms tend to heavily skew the price structure to one side of the market to get both sides “on
board,” using one side as a “profit center” and the other side as a “loss leader,” or at best financially neutral.16 In
the rest of this section, we will discuss several externalities that arise in payment networks.17
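Rochet and Tirole's definition can be stated compactly; the following is a stylized restatement in generic notation rather than their exact formulation. Let D(f, m) denote transaction volume when the consumer pays a per-transaction fee f and the merchant pays m. The market is two-sided when volume depends on how the total price is split between the two sides and not only on its level:

\[
D(f, m) \neq \phi(f + m) \quad \text{for any function } \phi,
\]

so reallocating the same total price P = f + m between consumers and merchants changes the number of transactions.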
5.1. Adoption and Usage Externalities
A key externality examined in the payment card literature is the ability of the network to convince both consumers
and merchants to participate in a network. Baxter (1983) argues that the equilibrium quantity of payment card
transactions occurs when the total transactional demand for payment card services, which are determined by
consumer and merchant demands jointly, is equal to the total transactional cost for payment card services,
including both issuer and acquirer costs.18 A consumer's willingness to pay is based on her net benefits received.
The consumer will participate if her net benefit is greater than or equal to the fee. Similarly, if the merchants’ fee is
less than or equal to the net benefit they receive, merchants will accept cards. Net benefits for consumers and
merchants are defined by the difference in benefits from using or accepting a payment card and using or
accepting an alternative payment instrument. Pricing each side of the market based on marginal cost—as would be
suggested by economic theory for one-sided competitive markets—need not yield the socially optimal allocation.
To arrive at the socially optimal equilibrium, a side payment may be required between the issuer and acquirer if
there are asymmetries of demand between consumers and merchants, differences in costs to service consumers
and merchants, or both. This result is critically dependent on the inability of merchants to price differentiate
between card users and those who do not use cards or among different types of card users. While most
economists and antitrust authorities agree that an interchange fee may be necessary to balance the demands of
consumers and merchants resulting in higher social welfare, the level of the fee remains a subject of debate.
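Baxter's condition can be summarized in the notation that later became standard in this literature; this is a stylized restatement rather than Baxter's original formulation. Let b_B and b_S denote the per-transaction benefits of a card payment to the buyer (consumer) and the seller (merchant) relative to the alternative instrument, and let c_I and c_A denote the issuer's and acquirer's per-transaction costs. A card transaction is then jointly worthwhile whenever

\[
b_B + b_S \geq c_I + c_A .
\]

With competitive banks, an interchange fee a paid by the acquirer to the issuer implies a consumer fee f = c_I - a and a merchant fee m = c_A + a, so the fee reallocates the total cost c_I + c_A between the two sides without changing it; this is the sense in which a side payment between issuer and acquirer can restore the efficient volume when consumer and merchant demands or costs are asymmetric.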
Schmalensee (2002) extends Baxter's (1983) analysis by considering issuers and acquirers that have market
power, but still assumes that merchants operate in competitive markets. His results support Baxter's conclusions
that the interchange fee balances the demands for payment services by each end-user type and the cost to banks
to provide them. Schmalensee finds that the profit-maximizing interchange fee of issuers and acquirers may also
be socially optimal.
(p. 118)
5.2. Instrument-Contingent Pricing
In many jurisdictions, merchants are not allowed to add a surcharge for payment card transactions because of legal or contractual no-surcharge restrictions. However, merchants may be allowed to offer discounts for noncard payments instead of
surcharges.19 If consumers and merchants were able to negotiate prices based on differences in costs that
merchants face and the benefits that both consumers and merchants receive, the interchange fee would be
neutral, assuming full pass-through. The interchange fee is said to be neutral if a change in the interchange fee
does not change the quantity of consumer purchases and the profit level of merchants and banks. There is general
consensus in the payment card literature that if merchants were able to recover costs to accept a given payment
instrument directly from those consumers that use it, the impact of the interchange fee would be severely
dampened.
Even if price differentiation based on the payment instrument used is not common, the possibility to do so may
enhance the merchants’ bargaining power in negotiating their fees.20 Merchants can exert downward pressure on
fees by having the possibility to set instrument-contingent pricing. Payment networks may prefer non-instrument-
contingent pricing because some consumers may not choose payment cards if they had to explicitly pay for using
them at the point of sale (POS).
Carlton and Frankel (1995) extend Baxter (1983) by considering when merchants are able to fully pass on payment
processing costs via higher consumption goods prices. They find that an interchange fee is not necessary to
internalize the externality if merchants set pricing for consumption goods based on the type of payment instrument
used. Furthermore, they argue that cash users are harmed when merchants set one price because they subsidize
card usage.21
Schwartz and Vincent (2006) study the distributional effects among cash and card users with and without no-surcharge restrictions. They find that the absence of pricing based on the payment instrument used increases
network profit and harms cash users and merchants.22 The payment network prefers to limit the merchant's ability
to separate card and cash users by forcing merchants to charge a uniform price to all of its customers. Issuer
rebates to card users boost their demand for cards while simultaneously forcing merchants to absorb part of the
corresponding rise in the merchant fee, because any resulting increase in the uniform good's price must apply
equally to cash users.
Gans and King (2003) argue that, as long as there is “payment separation,” the interchange fee is neutral
regardless of the market power of merchants, issuers, and acquirers. When surcharging is costless, merchants will
implement pricing based on the payment instrument used, taking away the potential for cross-subsidization across
payment instruments and removing the interchange fee's role in balancing the demands of consumers and
merchants. In effect, the cost pass-through is such that lower consumer card fees (due to higher interchange fees)
are exactly offset by higher goods prices from merchants. Payment separation can occur if one of (p. 119) the
following is satisfied: There are competitive merchants, and they separate into cash-accepting or card-accepting
categories, in which each merchant only serves one type of customer and is prevented from charging different
prices; or merchants are able to fully separate customers who use cash from those who use cards by charging
different prices.
5.3. Merchant, Network, and Issuer Competition
When asked why they accept certain types of payment cards even when they are costly, merchants answer that they would otherwise lose business to their competitors. Rochet and Tirole (2002) were the first to consider business stealing
as a motivation for merchants to accept payment cards. Rochet and Tirole study the cooperative determination of
the interchange fee by member banks of a payment card association in a model of two-sided markets with network
externalities. They develop a framework in which banks and merchants may have market power and consumers
and merchants decide rationally on whether to use or accept a payment card. In particular, Rochet and Tirole
consider two Hotelling merchants that are identical in terms of their net benefits of accepting payment cards. Consumers
face the same fixed fee but are heterogeneous in terms of the net benefits they derive from using the payment
card. They assume that the total number of transactions is fixed and changes in payment fees do not affect the
demand for consumption goods.
They have two main results. First, the interchange fee that maximizes profit for the issuers may be greater than or
equal to the socially optimal interchange fee, depending on the issuers’ margins and the cardholders’ surplus. An
interchange fee set too high may lead to overprovision of payment card services. Second, merchants are willing to
pay more than the socially optimal fee if they can steal customers from their competitors. Payment card networks
can exploit each merchant's eagerness to obtain a competitive edge over other merchants. Remarkably, this rent
extraction also has some social benefits since, on the consumer side, it offsets the underprovision of cards by
issuers with market power. However, overall social welfare does not improve when merchants steal customers from
their competitors by accepting payment cards.
Wright (2004) extends Rochet and Tirole (2002) by considering a continuum of industries where merchants in
different industries receive different benefits from accepting cards. In his environment, consumers and merchants
pay per transaction fees. Each consumer buys goods from each industry. Issuers and acquirers operate in markets
with imperfect competition. He assumes that consumers face the same price regardless of which instrument they
use to make the purchase. Similar to Rochet and Tirole (2002), Wright concludes that the interchange fee that
maximizes overall social welfare is generally higher than the interchange fee that maximizes the number of
transactions.
Economic theory suggests that competition among suppliers of goods and services generally reduces prices,
increases output, and improves welfare. However, (p. 120) within a two-sided market framework, network
competition may yield an inefficient price structure. A key aspect of network competition is the ability of end-users
to participate in more than one network. When end-users participate in more than one network, they are said to be
“multihoming.” If they connect only to one network, they are said to be “single-homing.” As a general finding,
competing networks try to attract end-users who tend to single-home, since attracting them determines which
network has the greater volume of business. Using data from Visa, Rysman (2007) demonstrates that even though
consumers carry multiple payment cards in their wallet, they tend to use the same card for most of their
purchases.23 Accordingly, the price structure is tilted in favor of end-users who single-home.24
Some models of network competition assume that the sum of consumer and merchant fees is constant and focus
on the price structure. Rochet and Tirole (2003) find that the price structures for a monopoly network and
competing platforms may be the same, and if the sellers’ demand is linear, this price structure in the two
environments generates the highest welfare under a balanced budget condition.
Guthrie and Wright (2007) extend Rochet and Tirole (2003) by assuming that consumers are able to hold one or
both payment cards. They find that network competition can result in higher interchange fees than those that would
be socially optimal. In their model, Guthrie and Wright take into account that in a payment network, merchants, who
are on one side of the market, compete to attract consumers, who are on the other side. This asymmetry causes
payment system competition to over-represent the interests of cardholders because they generally single-home, causing heterogeneous merchants to be charged more and cardholders less. This skewed pricing effect is
reinforced when consumers are at least as important as merchants in determining which card will be adopted by
both sides. The result that system competition may increase interchange fees illustrates the danger of using one-sided logic to make inferences in two-sided markets.
Chakravorti and Roson (2006) consider the effects of network competition on total price and on price structure
where networks offer differentiated products. They only allow consumers to participate in one card network,
whereas merchants may choose to participate in more than one network. However, unlike Guthrie and Wright
(2007) and Rochet and Tirole (2003), Chakravorti and Roson consider fixed fees for consumers. They compare
welfare properties when the two networks operate as competitors and as a cartel, where each network retains
demand for its products from end-users but the networks set fees jointly.
Like Rochet and Tirole (2003) and Guthrie and Wright (2007), Chakravorti and Roson (2006) find that competition
does not necessarily improve or worsen the balance of consumer and merchant fees from the socially optimal one.
However, they find that the welfare gain from the drop in the sum of the fees from competition is generally larger
than the potential decrease in welfare from less efficient fee structures.
Competition does not necessarily improve the balance of prices for two-sided markets. Furthermore, if competition
for cardholders is more intense because (p. 121) consumers ultimately choose the payment instrument, issuers
may provide greater incentives to attract consumers even if both issuers belong to the same network. If issuers
have greater bargaining power to raise interchange fees, they can use this power to partially offset the cost of
consumer incentives.25
5.4. Credit Functionality of Payment Cards
The payment card literature has largely ignored the benefits of consumer credit.26 Given the high level of antitrust
scrutiny targeted toward credit card fees, including interchange fees, this omission in most of the academic
literature is rather surprising. In the long run, aggregate consumption over consumers’ lives may not differ because
of access to credit, but such access may enable consumers to increase their utility. In addition to extracting
surplus from all consumers and merchants, banks may extract surplus, in the form of finance charges, from consumers who borrow.27
Chakravorti and Emmons (2003) consider the costs and benefits of consumer credit when consumers are subject
to income shocks after making their credit card purchases and some are unable to pay their credit card debt.
Chakravorti and Emmons assume that all markets for goods and payment services are competitive. They impose a
participation constraint on individuals without liquidity constraints such that the individuals will only use cards if
they are guaranteed the same level of consumption as when they use cash including the loss of consumption
associated with higher prices for consumption goods. To our knowledge, they are the first to consider the credit
payment functionality of credit cards. Observing that over 75 percent of US card issuer revenue is derived from
cash-constrained consumers, they consider the viability of the credit card system if it were completely funded by
these types of consumers.28
They find that if consumers sufficiently discount future consumption, liquidity-constrained consumers who do not
default would be willing to pay all credit card network costs ex ante, resulting in all consumers being better off.
However, they also find that the inability of merchants to impose instrument-contingent prices results in a lower
level of social welfare because costly credit card infrastructure is used for transactions that do not require credit
extensions.
Chakravorti and To (2007) consider an environment with a monopolist bank that serves both consumers and
merchants where the merchants absorb all credit and payment costs in a two-period dynamic model. Chakravorti
and To depart from the payment card literature in the following ways. First, rather than taking a reduced-form
approach where the costs and benefits of payment cards are exogenously assigned functional forms, they
construct a model that endogenously yields costs and benefits to consumers, merchants, and banks from credit
card use. Second, their model considers a dynamic setting where there are intertemporal tradeoffs for all
participants. Third, they consider consumption and income uncertainty.
(p. 122) Their model yields the following results. First, the merchants’ willingness to pay bank fees increases as
the number of credit card consumers without income increases. Note that up to a point, merchants are willing to
subsidize credit losses in exchange for additional sales. Second, a prisoner's dilemma situation may arise: Each
merchant chooses to accept credit cards, but by doing so, each merchant's discounted two-period profit is lower.
Unlike in other models, business stealing occurs across time and across monopolist merchants.
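The prisoner's dilemma structure can be illustrated with a purely hypothetical two-merchant payoff matrix (the numbers are ours and are not taken from Chakravorti and To); each entry shows discounted two-period profits for (Merchant A, Merchant B).

                        B accepts cards    B declines cards
A accepts cards             (4, 4)             (7, 2)
A declines cards            (2, 7)             (6, 6)

Accepting is a dominant strategy for each merchant because it attracts liquidity-constrained customers from the rival, yet both merchants end up with lower profits at (4, 4) than if neither accepted cards at (6, 6).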
5.5. Competition Among Payment Instruments
Most of the payment card literature ignores competition between payment instruments.29 If consumers carry
multiple types of payment instruments, merchants may be able to steer them away from more costly payment
instruments.30 Rochet and Tirole (2011) argue that merchants may choose to decline cards after they have
agreed to accept them. In their model a monopoly card network would always select an interchange fee that
exceeds the level that maximizes consumer surplus. If regulators only care about consumer surplus, a
conservative regulatory approach is to cap interchange fees based on retailers’ net avoided costs from not having
to provide credit themselves. This always raises consumer surplus compared to the unregulated outcome,
sometimes to the point of maximizing consumer surplus. This regulatory cap is conceptually the same as the
“tourist test” where the merchant accepts cards even when it can “effectively steer” the consumer to use another
payment instrument. However, if the consumer is unable to access cash or another form of payment, the merchant
would lose the sale.
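The tourist-test cap can be written compactly in the notation introduced above; this is a stylized restatement rather than Rochet and Tirole's exact formulation. A merchant fee m passes the test when accepting the card does not raise the merchant's cost relative to the payment the customer could otherwise have made:

\[
m \leq b_S , \qquad \text{equivalently } a \leq b_S - c_A \text{ when } m = c_A + a ,
\]

where b_S is the merchant's net avoided cost of a card payment (for example, avoided cash handling or, in the credit card case, the avoided cost of providing credit itself) and c_A is the acquirer's cost.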
Merchants may steer consumers through price incentives, if allowed to do so. Bolt and Chakravorti (2008a) study
the ability of banks and merchants to influence the consumers’ choice of payment instrument when they have
access to three payment forms—cash, debit card, and credit card. In their model, consumers only derive utility
from consuming goods from the merchant they are matched to in the morning. Merchants differ on the types of
payment instruments that they accept and type of consumption good they sell. Each merchant chooses which
instruments to accept based on its production costs, and merchant heterogeneity arises from these differences in
production costs. They consider the merchants’ ability to pass on payment processing costs to consumers in the
form of higher uniform and differentiated goods prices. Unlike most two-sided market models, where benefits are
exogenous, they explicitly consider how consumers’ utility and merchants’ profits increase from additional sales
resulting from greater security and access to credit.31
Bolt and Chakravorti's (2008a) key results can be summarized as follows. First, with sufficiently low processing
costs relative to theft and default risk, the social planner sets the credit card merchant fee to zero, completely
internalizing the card acceptance externality. Complete merchant acceptance maximizes card usage at (p. 123)
the expense of inefficient cash payments.32 The bank may also set the merchant fees to zero, but only if
merchants are able to sufficiently pass on their payment fees to their consumers or if their payment processing
costs are zero. Second, if the real resource cost of payment cards is sufficiently high, the social planner sets a
higher merchant fee than the bank does, resulting in lower card acceptance and higher cash usage. Third, bank
profit is higher when merchants are unable to pass on payment costs to consumers because the bank is better
able to extract merchant surplus. On the other hand, full pass-through would recover the neutrality result of Gans and King (2003), where all payment costs are shifted onto consumers' shoulders, relieving potential two-sided tension. However, if merchants need to absorb part of these costs as well, two-sided externalities continue to play a role in optimal payment pricing. Finally, in their model, the relative cost of providing debit and credit cards
determines whether the bank will provide both or only one type of payment card.
6. Market Interventions
In this section, we discuss several market interventions in various jurisdictions. Specifically, we focus on three
different types of market interventions. First, we discuss the removal of pricing restrictions placed on merchants
that prevent them from surcharging customers that are using certain types of payment instruments. Second, we
study the impact of adoption and usage of payment cards when public authorities impose caps on interchange
fees. Third, we discuss the forced acceptance of all types of cards, that is, credit, debit, and prepaid, when
merchants enter into contracts with acquirers.
6.1. Removal of No-Surcharge Policies
There are several jurisdictions where merchants are able to surcharge card transactions. Most of the academic
research suggests that if merchants are allowed to surcharge, the level of the interchange fee would be neutral. If
the interchange fee is neutral, regulating the interchange fee would have little impact. In this section, we explore
whether merchants surcharge if they are allowed to do so.
To encourage better price signals, the Reserve Bank of Australia (RBA) removed no-surcharge restrictions in 2002. The Australian authorities argued that consumers did not receive the proper price incentives to use debit cards, the less costly payment instrument. The RBA reported that the average cost of the payment functionality of a credit card was AUS$0.35 higher than that of a debit card for a consistent AUS$50 transaction size.33
(p. 124) While most Australian merchants do not impose surcharges for any type of payment card transaction
today, the number of merchants surcharging credit card transactions continues to increase. At the end of 2007,
around 23 percent of very large merchants and around ten percent of small and very small merchants imposed
surcharges. The average surcharge for MasterCard and Visa transactions is around one percent, and that for
American Express and Diners Club transactions is around two percent (Reserve Bank of Australia, 2008a). Using
confidential data, the Reserve Bank of Australia (2008a) also found that if one network's card was surcharged more
than another network's cards, consumers dramatically reduced their use of the card with the surcharge.
Differential surcharging based on which network the card belongs to may result in greater convergence in
merchant fees across payment card networks.
In the United States, merchants are allowed to offer cash discounts but may not be allowed to surcharge credit
card transactions. In the 1980s, many US gas stations explicitly posted cash and credit card prices. Barron, Staten,
and Umbeck (1992) report that gas station operators imposed instrument-contingent pricing when their credit card
processing costs were high but later abandoned this practice when acceptance costs decreased because of new
technologies such as electronic terminals at the point of sale. Recently, some gas stations brought back price
differentiation based on payment instrument type, citing the rapid rise in gas prices and declining profit margins.
On the other hand, in some instances, policymakers may prefer if merchants did not surcharge certain types of
transactions. For example, Bolt, Jonker, and van Renselaar (2010) find that a significant number of merchants
surcharge debit transactions vis-à-vis cash in the Netherlands. Debit card surcharges are widely assessed when
purchases are below 10 euro, suggesting that merchants are unwilling to pay the fixed transaction fee below this
threshold. They find that merchants may surcharge up to four times their fee. In addition, when these surcharges
are removed, they argue that consumers start using their debit cards for these small payments, suggesting that
merchant price incentives do affect consumer payment choice. Interestingly, in an effort to promote a more
efficient payment system, the Dutch central bank has supported a public campaign to encourage retailers to stop
surcharging, so as to encourage consumers to use their debit cards for small transactions. This strategy appears to be
successful. In 2009, debit card payments below ten euro accounted for more than 50 percent of the total annual
growth of almost 11 percent in debit card volume.
There are instances when card payments are discounted vis-à-vis cash payments. The Illinois Tollway charges
motorists who use cash to pay tolls twice as much as those who use toll tags (called I-PASS), which may be loaded
automatically with credit and debit cards when the level of remaining funds falls below a certain level (Amromin,
Jankowski, and Porter, 2007). In addition to reducing cash handling costs, the widespread implementation of toll
tags decreased not only congestion at toll booths but also pollution from idling vehicles waiting to pay tolls, since
tolls could be collected as cars drove at highway speeds through certain points on the Illinois Tollway.
(p. 125) The ability of merchants to charge different prices is a powerful incentive to convince consumers to use a certain payment instrument. In reality, merchants may surcharge or discount card transactions depending on their underlying cost structures along with the benefits accrued. However, in some instances, surcharges may result in a less desirable outcome, as evidenced in the Dutch example, suggesting a potential holdup problem whereby merchants impose surcharges higher than their costs. Furthermore, there is also evidence of cash surcharges in some settings, suggesting that card acceptance costs can be lower than the costs of handling cash.
6.2. Regulation of Interchange Fees
There are several jurisdictions where interchange fees were directly regulated or significant pressure was exerted
by the public authorities on networks to reduce their interchange fees.34 In this section, we will discuss the impact
of interventions in four jurisdictions—Australia, Spain, the European Union, and the United States.
6.2.1. Australia
In 2002, the Reserve Bank of Australia (RBA) imposed weighted-average MasterCard and Visa credit card
interchange fee caps and later imposed per transaction targets for debit cards. As of April 2008, the weighted-average credit card interchange fees in the MasterCard and Visa networks must not exceed 0.50 percent of the
value of transactions. The Visa debit weighted-average interchange fee cap must not exceed 12 cents (Australian)
per transaction. The EFTPOS (electronic funds transfer at point of sale) interchange fees for transactions that do
not include a cash-out component must be between four cents (Australian) and five cents (Australian) per
transaction.
The Reserve Bank of Australia (2008a) reports that the interchange fee regulation, coupled with the removal of the
no-surcharge rule, improved the price signals that consumers face when deciding which payment instruments to
use. Specifically, annual fees for credit cards increased and the value of the rewards decreased. The Reserve
Bank of Australia (2008a) calculates that for an AUS$100 transaction, the cost to consumers increased from –
AUS$1.30 to –AUS$1.10 for consumers who pay off their balances in full every month. A negative per transaction
cost results when card benefits such as rewards and interest-free loans are greater than payment card fees.35
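The sign convention in this calculation is worth spelling out; the following is our reading of the reported figures rather than an additional RBA calculation. For a consumer who repays in full each month, the effective per-transaction cost is roughly fees net of card benefits, so

\[
\text{cost per AUS\$100 transaction} \approx \text{fees} - \text{rewards} - \text{interest-free benefit},
\]

and a move from -AUS\$1.30 to -AUS\$1.10 means the net subsidy to such "transactors" fell by about AUS\$0.20 per AUS\$100 spent, or roughly 20 basis points.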
Those who oppose the Australian interchange fee regulation argue that consumers have been harmed by reduced
rewards and higher fees and have not shared in the cost savings—in terms of lower prices for goods and services.
However, measuring price effects over time of interchange fee regulation is difficult.36
6.2.2. Spain
Unlike in Australia, it was the Ministry of the Economy and the Ministry of Industry, Tourism, and Trade, along with the antitrust authority, rather than the central bank, that intervened in payment card markets in Spain several times during the period 1999 to (p. 126) 2009. Part of the motivation was based on directives by the European Commission regarding
fees that were set by networks that had significant market power. These regulations had significant impact on debit
and credit card usage. Over the period 1997–2007, debit card transactions increased from 156 million to 863
million and credit card transactions increased from 138 million to 1.037 billion.
Carbó-Valverde, Chakravorti, and Rodriguez Fernandez (2009) study the effects of interchange fee reductions in
Spain from 1997 to 2007. To our knowledge, they are the first to use bank-level data to study the impact of several
episodes of interchange fee reductions for debit and credit cards resulting from moral suasion and agreements
between market participants intermediated by the government authorities. They demonstrate that merchants
benefited from lower fees and consumers benefited from greater merchant acceptance. Surprisingly, they found
that issuer revenues increased during the period when interchange fees decreased. While the effect of these
reductions on banks' revenues is positive, their effect on banks' profits could not be determined because of data
limitations. Furthermore, there may be a critical interchange fee below which issuer revenue decreases.
6.2.3. European Commission
In December 2007, the European Commission (EC) ruled that the multilateral interchange fees for cross-border
payments in the European Union applied by MasterCard Europe violated Council Regulation (EC) No. 1/2003. The
EC argued that MasterCard's fee structure restricted competition among acquiring banks and inflated the cost of
card acceptance by retailers without leading to proven efficiencies.37 In response, MasterCard reached an interim
understanding with the European Commission on these interchange fees for cross-border consumer payments in
the EU in April 2009. Effective July 1, 2009, MasterCard Europe established interchange fees for consumer card
transactions that, on average, will not exceed 30 basis points for credit cards and 20 basis points for debit cards.
With these fee changes, the EC will not further pursue MasterCard either for non-compliance with its December
2007 decision or for infringing the antitrust rules.
The EC conducted a separate antitrust investigation against Visa and will monitor the behavior of other market
players to ensure that competition is effective in this market to the benefit of merchants and consumers. The EC
and Visa have agreed to 20 basis point debit card interchange fees but have not agreed to the level of credit card
interchange fees. The dialogue between Visa and MasterCard vis-à-vis the Commission has to date not led to an
agreement concerning the application of the “merchant indifference methodology” based on the tourist test to
consumer credit (and deferred debit) transactions—discussions on this issue continue.
6.2.4. United States
As part of the financial reform bill signed into law on July 21, 2010, a section of Title 10 of the Dodd-Frank Wall
Street Reform and Consumer Protection Act grants the Federal Reserve Board the authority to set rules regarding
the setting of debit card (p. 127) interchange fees that are “reasonable and proportional to cost.” Financial
institutions with less than $10 billion in assets are exempt. Debit cards include payment cards that access
accounts at financial institutions to make payments, along with prepaid cards. Certain types of prepaid cards are
exempt such as those disbursing funds as a part of government-sponsored programs or geared toward the
underbanked or lower-income households.38 The level of the debit card interchange fee cap had not been decided at the time of writing.
6.3. Honor-All-Cards Rules
A payment card network may require merchants that accept one of its payment products to accept all of its
products.39 Such a rule is a type of honor-all-cards rule. In other words, if a merchant accepts a network's credit
card, it must accept all debit and prepaid cards from that network. In the United States, around 5 million merchants
sued the two major networks, MasterCard and Visa, over the required acceptance of the network's signature-based
debit card when accepting the same network's credit card.40 The case was settled out of court. In addition to a
monetary settlement, MasterCard and Visa agreed to decouple merchants’ acceptance of their debit and credit
products. While few merchants have declined one type of card and accepted another type, the decoupling of debit
and credit card acceptance may have increased bargaining power for merchants in negotiating fees.
As part of the payment system reforms in Australia, MasterCard and Visa were mandated to decouple merchants’
acceptance of their debit and credit cards as well. The Payments System Board (Reserve Bank of Australia, 2008b)
is unaware of any merchant that continues to accept debit cards but does not accept credit cards from the same
network.
7. Conclusion
Advances in mobile phone technology have the potential to replace many remaining paper-based transactions, further increasing the usage of electronic payments. How rapidly payment innovations are introduced and adopted is critically dependent on the potential profitability of the retail payment
system as a whole. However, the rate at which these shifts will occur depends on the underlying benefits and costs
to payment system participants. Most policymakers and economists agree that the digitization of payments is
socially beneficial. However, there is considerable debate regarding the optimal pricing of these payment services.
Payment markets are complex with many participants engaging in a series of interrelated bilateral transactions.
(p. 128) The determination of optimal prices is difficult for several reasons. First, there are significant scale and
scope economies in payment processing because of the large fixed costs of setting up sophisticated secure networks to process, clear, and settle payment transactions. Thus, established payment providers may generally enjoy some
level of market power because these markets are generally not contestable.
Second, payment networks must convince two distinct sets of end users—consumers and merchants—to
simultaneously participate. Networks often set asymmetric prices to get both sides on board. Such pricing is based
on the cost of serving end users as well as on their demand elasticities. It is extremely difficult for policymakers to
disentangle optimal pricing strategies from excessive rent extraction.
Third, efficiency of payment systems is measured not only by the costs of resources used, but also by the social
benefits generated by them. Measuring individual and social benefits is particularly difficult. The central question is
whether the specific circumstances of payment markets are such that intervention by public authorities can be
expected to improve economic welfare.
The theoretical literature on payment cards continues to grow. However, there are a few areas of payment
economics that deserve greater attention. First, what is the effect of the reduction of banks’ and networks’ surplus
extraction on future innovation? Second, how should payment system participants pay for fraud containment and
distribute the losses when fraud occurs? Third, should public entities step in and start providing payment services
and, if so, at what price?
Finally, empirical studies about payment system pricing using data from payment networks and providers are
extremely scarce. Such analysis would be helpful in understanding how effective regulatory interventions were in
meeting the stated objectives and studying any potential unintended consequences. We hope that recent
regulatory changes in different parts of the world will generate rich sets of data that can be exploited by
economists to test how well the theories fit the data.
References
Adams, R., Bauer, P., Sickles, R., 2004. Scale Economies, Scope Economies, and Technical Change in Federal
Reserve Payment Processing. Journal of Money, Credit and Banking 36(5), pp. 943–958.
Agarwal, S., Chakravorti, S., Lunn, A., 2010. Why Do Banks Reward Their Customers to Use Their Credit Cards?
Federal Reserve Bank of Chicago Working Paper, WP-2010–19.
Alvarez, F., Lippi, F., 2009. Financial Innovation and the Transactions Demand for Cash, Econometrica, 77 (2), pp.
363–402.
Amromin, G., Chakravorti, S., 2009. Whither Loose Change? The Diminishing Demand for Small Denomination
Currency. Journal of Money, Credit and Banking 41(2–3), pp. 315–335.
Amromin, G., Jankowski, C., Porter, R., 2007. Transforming Payment Choices by Doubling Fees on the Illinois
Tollway. Economic Perspectives, Federal Reserve Bank of Chicago 31(2), pp. 22–47.
Armstrong, M., 2006. Competition in Two-sided Markets. RAND Journal of Economics 37(3), pp. 668–691.
Ausubel, L., 1991. The Failure of Competition in the Credit Card Market. American Economic Review 81(1), pp. 50–
81.
Barron, J., Staten, M., Umbeck, J., 1992. Discounts for Cash in Retail Gasoline Marketing. Contemporary Policy
Issues 10(4), pp. 89–102.
Baxter, W., 1983. Bank Interchange of Transactional Paper: Legal and Economic Perspectives. Journal of Law and
Economics 26(3), pp. 541–588.
Bedre-Defolie, Ö., Calvano, E., 2010. Pricing Payment Cards. Toulouse School of Economics and Princeton
University, mimeo.
Beijnen, C., Bolt, W., 2009. Size Matters: Economies of Scale in European Payments Processing. Journal of Banking
and Finance 33(2), pp. 203–210.
Benner, K., 2008. Visa's Record IPO Rings Up 28 Percent Gain. CNNMoney.com, March 19. Available at:
http://money.cnn.com/2008/03/19/news/companies/visa_ipo_opens.fortune/index.htm
Bergman, M., Guibourg, G., Segendorff, B., 2007. The Costs of Paying—Private and Social Costs of Cash and Card.
Sveriges Riksbank, working paper, No. 212.
Bolt, W., 2008. The European Commission's Ruling in MasterCard: A Wise Decision? GCP, April 1. Available at:
www.globalcompetitionpolicy.org/index.php?id=981&action=907.
Bolt, W., Carbó-Valverde, S., Chakravorti, S., Gorjón, S., Rodríguez Fernández, F., 2010. What is the Role of Public
Authorities in Retail Payment Systems? Federal Reserve Bank of Chicago Fed Letter, 280a, pp. 1–4.
Bolt, W., Chakravorti, S., 2008a. Consumer Choice and Merchant Acceptance of Payment Media. Federal Reserve
Bank of Chicago Working Paper, WP-2008–11.
Bolt, W., Chakravorti, S., 2008b. Economics of Payment Cards: A Status Report. Economic Perspectives, Federal
Reserve Bank of Chicago 32(4), pp. 15–27.
Bolt, W., Humphrey, D., 2007. Payment Network Scale Economies, SEPA, and Cash Replacement. Review of
Network Economics 6(4), pp. 453–473.
Bolt, W., Humphrey, D., 2009. Payment Scale Economies from Individual Bank Data. Economics Letters 105(3), pp.
293–295. (p. 132)
Bolt, W., Humphrey, D., Uittenbogaard, R., 2008. Transaction Pricing and the Adoption of Electronic Payments: A
Cross-country Comparison. International Journal of Central Banking 4(1), 89–123.
Bolt, W., Jonker, N., van Renselaar, C., 2010. Incentives at the Counter: An Empirical Analysis of Surcharging Card
Payments and Payment Behavior in the Netherlands. Journal of Banking and Finance 34(8), pp. 1738–1744.
Bolt, W., Schmiedel, H., 2011. Pricing of Payment Cards, Competition, and Efficiency: A Possible Guide for SEPA.
Annals of Finance, pp. 1–21.
Bolt, W., Tieman, A., 2008. Heavily Skewed Pricing in Two-Sided Markets. International Journal of Industrial
Organization 26(5), pp. 1250–1255.
Borzekowski, R., Kiser, E., Ahmed, S., 2008. Consumers’ Use of Debit Cards: Patterns, Preferences, and Price
Response. Journal of Money, Credit and Banking 40(1), pp. 149–172.
Bounie, D., Francois, A., 2006. Cash, Check or Bank Card? The Effects of Transaction Characteristics on the Use of
Payment Instruments. Telecom Paris Tech, working paper ESS-06–05.
Bourreau, M., Verdier, M., 2010. Private Cards and the Bypass of Payment Systems by Merchants. Journal of
Banking and Finance 34(8), pp. 1798–1807.
Bradford, T., Hayashi, F., 2008. Developments in Interchange Fees in the United States and Abroad. Payments
System Research Briefing, Federal Reserve Bank of Kansas City, April.
Brito, D., Hartley, P., 1995. Consumer Rationality and Credit Cards. Journal of Political Economy 103(2), pp. 400–
433.
Brits, H., Winder, C., 2005. Payments Are no Free Lunch. Occasional Studies, De Nederlandsche Bank 3(2), pp. 1–44.
Carbó-Valverde, S., Chakravorti, S., Rodríguez Fernández, F., 2009. Regulating Two-sided Markets: An Empirical
Investigation. Federal Reserve Bank of Chicago Working Paper, WP-2009–11.
Carbó-Valverde, S., Humphrey, D., Liñares Zegarra, J.M., Rodríguez Fernández, F., 2008. A Cost–Benefit Analysis of
a Two-sided Card Market. Fundación de las Cajas de Ahorros (FUNCAS), working paper, No. 383.
Carbó-Valverde, S., Liñares Zegarra, J.M., 2009. How Effective Are Rewards Programs in Promoting Payment Card
Usage? Empirical Evidence. Fundación BBVA, working paper, No. 1.
Carlton, D., Frankel, A., 1995. The Antitrust Economics of Credit Card Networks. Antitrust Law Journal, 63(2), pp.
643–668.
Chakravorti, S., 2007. Linkages Between Consumer Payments and Credit. In: S. Agarwal, Ambrose, B.W. (Eds.),
Household Credit Usage: Personal Debt and Mortgages, New York, Palgrave MacMillan, pp. 161–174.
Chakravorti, S., 2010. Externalities in Payment Card Networks: Theory and Evidence. Review of Network
Economics, 9(2), pp. 99–134.
Chakravorti, S., Emmons, W., 2003. Who Pays for Credit Cards? Journal of Consumer Affairs, 37(2), pp. 208–230.
Chakravorti, S., Lubasi, V., 2006. Payment Instrument Choice: The Case of Prepaid Cards. Economic Perspectives,
Federal Reserve Bank of Chicago 30(2), pp. 29–43.
Chakravorti, S., Roson, R., 2006. Platform Competition in Two-sided Markets: The Case of Payment Networks.
Review of Network Economics 5(1), pp. 118–143.
Chakravorti, S., Shah, A., 2003. Underlying Incentives in Credit Card Networks. Antitrust Bulletin 48(1), pp. 53–75.
(p. 133)
Chakravorti, S., To, T., 2007. A Theory of Credit Cards. International Journal of Industrial Organization 25(3), pp.
583–595.
Chang, H., Evans, D., Garcia Swartz, D., 2005. The Effect of Regulatory Intervention in Two-sided Markets: An
Assessment of Interchange-Fee Capping in Australia. Review of Network Economics 4(4), pp. 328–358.
Ching, A., Hayashi, F., 2010. Payment Card Rewards Programs and Consumer Payment Choice. Journal of Banking
and Finance 34(8), pp. 1773–1787.
Committee on Payment and Settlement Systems 1993 and 2010. Statistics on Payment and Settlement Systems in
Selected Countries, Basel, Switzerland: Bank for International Settlements.
Constantine, L., 2009. Priceless, New York: Kaplan Publishing.
Donze, J., Dubec, I., 2009. Pay for ATM Usage: Good for Consumers, Bad for Banks? Journal of Industrial
Economics, 57(3), pp. 583–612.
Dubin, J., McFadden, D., 1984. An Econometric Analysis of Residential Electric Appliance Holdings and
Consumption. Econometrica 52(2), pp. 345–362.
Evans, D., 2003. The Antitrust Economics of Multi-Sided Markets. Yale Journal on Regulation 20(2), pp. 325–381.
Farrell, J., 2006. Efficiency and Competition Between Payment Instruments. Review of Network Economics 5(1), pp.
26–44.
Frankel, A., 1998. Monopoly and Competition in the Supply and Exchange of Money. Antitrust Law Journal 66(2), pp.
313–361.
Gans, J., King, S., 2003. The Neutrality of Interchange Fees in Payment Systems. Topics in Economic Analysis &
Policy 3(1), pp. 1–26. Available at: www.bepress.com/bejeap/topics/vol3/iss1/art1.
Garcia-Swartz, D., Hahn, R., Layne-Farrar, A., 2006. A Move Toward a Cashless Society: A Closer Look at Payment
Instrument Economics. Review of Network Economics 5(2), pp. 175–198.
Green, J., 2008. Exclusive Bankcard Profitability Study and Annual Report 2008. Card & Payments, pp. 36–38.
Guthrie, G., Wright, J., 2007. Competing Payment Schemes. Journal of Industrial Economics 55(1), pp. 37–67.
Hancock, D., Humphrey, D., Wilcox, J., 1999. Cost Reductions in Electronic Payments: The Roles of Consolidation,
Economies of Scale, and Technical Change. Journal of Banking and Finance 23(2–4), pp. 391–421.
Hayashi, F., Klee, E., 2003. Technology Adoption and Consumer Payments: Evidence from Survey Data. Review of
Network Economics, 2(2) pp. 175–190.
Hayes, R., 2007. An Economic Analysis of the Impact of the RBA's Credit Card Reforms. mimeo, University of
Melbourne.
Humphrey, D., 2010. Retail Payments: New Contributions, Empirical Results, and Unanswered Questions. Journal of
Banking and Finance 34(8), pp. 1729–1737.
Humphrey, D., Kim, M., Vale, B., 2001. Realizing the Gains from Electronic Payments: Costs, Pricing, and Payment
Choice. Journal of Money, Credit and Banking 33(2), pp. 216–234.
Humphrey, D., Pulley, L., Vesala, J., 2000. The Check's in the Mail: Why the United States Lags in the Adoption of
Cost-saving Electronic Payments. Journal of Financial Services Research 17(1), pp. 17–39.
Humphrey, D., Willesson, M., Bergendahl, G., Lindblom, T., 2006. Benefits from a Changing Payment Technology in
European Banking. Journal of Banking and Finance 30(6), pp. 1631–1652. (p. 134)
Jonker, N., 2007. Payment Instruments as Perceived by Consumers—Results from a Household Survey. De
Economist 155(3), pp. 271–303.
Kahn, C., Roberds, W., 2009. Why pay? An Introduction to Payment Economics. Journal of Financial Intermediation
18(1), pp. 1–23.
Khiaonarong, T., 2003. Payment Systems Efficiency, Policy Approaches, and the Role of the Central Bank. Bank of
Finland, discussion paper, No. 1/2003.
Klee, E., 2008. How People Pay? Evidence from Grocery Store Data. Journal of Monetary Economics 55(3), pp. 526–
541.
Kosse, A., 2010. The Safety of Cash and Debit Cards: A Study on the Perception and Behavior of Dutch
Consumers. De Nederlandsche Bank, working paper, No. 245.
McAndrews, J., Wang, Z., 2008. The Economics of Two-sided Payment Card Markets: Pricing, Adoption and Usage.
Federal Reserve Bank of Kansas City Working Paper, RWP 08–12.
Mester, L., 2006. Changes in the Use of Electronic Means of Payment: 1995–2004. Business Review, Federal
Reserve Bank of Philadelphia, Second Quarter 2006, pp. 26–30.
Prager, R., Manuszak, M., Kiser, E., Borzekowski, R., 2009. Interchange Fees and Payment Card Networks:
Economics, Industry Developments, and Policy Issues. Federal Reserve Board Finance and Economics Discussion
Series, 2009–23.
Quaden, G. (presenter), 2005. Costs, Advantages, and Disadvantages of Different Payment Methods. National Bank
of Belgium, report, December.
Reserve Bank of Australia, 2008a. Reform of Australia's Payments System: Preliminary Conclusions of the 2007/08
Review, April.
Reserve Bank of Australia, 2008b. Reform of Australia's Payments System: Conclusions of the 2007/08 Review,
September.
Rochet, J.-C., Tirole, J., 2002. Cooperation Among Competitors: Some Economics of Payment Card Associations.
RAND Journal of Economics 33(4), pp. 549–570.
Rochet, J.-C., Tirole, J., 2003. Platform Competition in Two-Sided Markets. Journal of the European Economic
Association 1(4), pp. 990–1029.
Rochet, J.-C., Tirole, J., 2006a. Externalities and Regulation in Card Payment Systems. Review of Network
Economics 5(1), pp. 1–14.
Rochet, J.-C., Tirole, J., 2006b. Two-sided Markets: A progress Report. RAND Journal of Economics 37(3), pp. 645–
667.
Rochet, J.-C., Tirole, J., 2010. Tying in Two-sided Markets and Honour all Cards Rule. International Journal of
Industrial Organization 26(6), pp. 1333–1347.
Rochet, J.-C., Tirole, J., 2011. Must-Take Cards: Merchant Discounts and Avoided Costs. Journal of the European
Economic Association, 9(3), pp. 462–495.
Rochet, J.-C., Wright, J., 2010. Credit Card Interchange Fees. Journal of Banking and Finance 34(8), pp. 1788–1797.
Rysman, M., 2007. An Empirical Analysis of Payment Card Usage. Journal of Industrial Economics 55(1), pp. 1–36.
Schmalensee, R., 2002. Payment Systems and Interchange Fees. Journal of Industrial Economics 50(2), pp. 103–
122.
Scholtes, S., 2009. Record Credit Card Losses Force Banks into Action. Financial Times, July 1.
Schuh, S., Shy, O., Stavins, J., 2010. Who Gains and Who Loses from Credit Card Payments? Theory and
Calibrations. Federal Reserve Bank of Boston Public Policy Discussion Papers, 10–03. (p. 135)
Schuh, S., Stavins, J., 2010. Why Are (Some) Consumers (Finally) Writing Fewer Checks? The Role of Payment
Characteristics. Journal of Banking and Finance 34(8), pp. 1745–1758.
Schwartz, M., Vincent, D., 2006. The no Surcharge Rule and Card User Rebates: Vertical Control by a Payment
Network. Review of Network Economics 5(1), pp. 72–102.
Stavins, J., 2001. Effect of Consumer Characteristics on the Use of Payment Instruments. New England Economic
Review, issue 3, pp. 19–31.
Stix, H., 2003. How Do Debit Cards Affect Cash Demand? Survey Data Evidence. Empirica 31, pp. 93–115.
Wang, Z., 2010. Market Structure and Payment Card Pricing: What Drives the Interchange? International Journal of
Industrial Organization, 28(1), pp. 86–98.
Weyl, E.G., 2010. A Price Theory of Multi-sided Platforms. American Economic Review, 100(4), pp. 1642–1672.
Wright, J., 2004. The Determinants of Optimal Interchange Fees in Payment Systems. Journal of Industrial Economics
52(1), pp. 1–26.
Zinman, J., 2009. Debit or Credit? Journal of Banking and Finance, 33(2), pp. 358–366.
Notes:
(1.) For a recent discussion about the role of public authorities in retail payment systems, see Bolt et al. (2010).
(2.) There are two types of payment card networks—open (four-party) and proprietary (three-party) networks.
Open networks allow many banks to provide payment services to consumers and merchants, whereas in
proprietary networks, one institution provides services to both consumers and merchants. When the issuer is not
also the acquirer, the issuer receives an interchange fee from the acquirer. Open networks have interchange fees,
whereas proprietary systems do not have explicit interchange fees because one institution serves both consumers
and merchants using that network's payment services. However, proprietary networks still set prices for each side
of the market to ensure that both sides are on board.
(3.) Although no-surcharge rules are present in the United States, merchants are free to offer cash discounts along
with discounts for other payment instruments. A section of Title 10 of the Dodd-Frank Wall Street Reform and
Consumer Protection Act expands the ability of merchants to provide discounts for types of payment instruments
such as debit cards, checks, cash, and other forms of payments.
(4.) In some instances, merchants are charged a fixed fee and a proportional fee.
(5.) There are countries, for example, France, where the cardholder's account is debited much later. These types
of cards are referred to as delayed debit cards.
(6.) There are transaction accounts that allow overdrafts. Such accounts are common in Germany and the United
States. While these types of transactions are similar to credit card transactions, they lack interest-free
“grace” periods when monthly balances are paid in full, so potential cross-subsidies between merchants and
consumers are generally more limited.
(7.) For a summary of the US prepaid card market, see Chakravorti and Lubasi (2006).
(8.) For a brief survey on the empirical payment literature, see Kahn and Roberds (2009).
(9.) Some key benefits of using cash include privacy and anonymity that are not provided by other types of
payment instruments.
(10.) Carbó-Valverde et al. (2008) conduct a similar exercise for Spain, and find that when summing net costs and
benefits across participants, debit cards are the least costly and checks are the most costly, with credit cards and
cash ranking second and third, respectively.
(11.) For empirical estimation of scale and scope economies resulting from Fedwire consolidation, see Hancock,
Humphrey and Wilcox (1999) and Adams, Bauer and Sickles (2004).
(12.) SEPA applies to all countries where the euro is used as the common currency. The implementation of SEPA
started in January 2008 with the launching of the SEPA credit transfer scheme and should be completed when all
national payment instruments are phased out; these instruments may not be entirely phased out until 2013. For a
first theoretic analysis of SEPA and payment competition, see Bolt and Schmiedel (2011).
(13.) His results were somewhat biased because the cost of labor across countries was not specified in the
analysis.
(14.) For a more general treatment of two-sided markets, see Armstrong (2006), Rochet and Tirole (2006b), and
Weyl (2010).
(15.) For a review of the academic literature on two-sided payment networks, see Bolt and Chakravorti (2008b).
(16.) For more details, see Bolt and Tieman (2008).
(17.) In this section, we build upon Chakravorti (2010) and Rochet and Tirole (2006a).
(18.) He considers an environment where consumers are homogeneous, merchants are perfectly competitive, and
the markets for issuing and acquiring payment cards are competitive.
(19.) For more discussion about no-surcharge rules and discounts, see Chakravorti and Shah (2003).
(20.) Frankel (1998) refers to merchants’ reluctance to set different prices even when they are allowed to do so as
price cohesion.
(21.) More recently, since cash may be more expensive from a social viewpoint, McAndrews and Wang (2008)
argue that card users may ultimately be subsidizing cash users as opposed to the standard view that cash users
subsidize card users (Carlton and Frankel, 1995, Schuh, Shy, and Stavins, 2010, and Schwartz and Vincent, 2006).
(22.) Schwartz and Vincent (2006) relax the common assumption made in the literature that the demand for the
consumption good is fixed. However, they assume that consumers are exogenously divided into cash and card
users and cannot switch between the groups.
(23.) In a theoretic model, Bedre-Defolie and Calvano (2010) stress that consumers make two distinct decisions
(membership and usage) whereas merchants make only one (membership).
(24.) For more discussion, see Evans (2003).
(25.) Donze and Dubec (2009) discuss the impact of collective setting of interchange fees and downstream
competition in the ATM market.
(26.) We limit our focus here to consumption credit. Payment credit—the credit that is extended by the receiver of
payment or by a third party until it is converted into good funds—is ignored. For more discussion, see Chakravorti
(2007).
(27.) The empirical literature on credit cards has suggested interest rate stickiness along with above-market
interest rates, although some have argued that the rate is low compared with alternatives such as pawn shops. For
more discussion, see Ausubel (1991) and Brito and Hartley (1995).
(28.) For a breakdown of issuer revenue percentages, see Green (2008).
(29.) Farrell (2006) studies the impact of higher interchange fees on consumers who do not use cards and argues
that policymakers should not ignore these redistributive effects. Wang (2010) finds that given a monopolistic
network with price taking issuers and acquirers, networks tend to set higher than socially optimal interchange fees
to boost card transaction value and compete with other types of payment instruments.
(30.) Bourreau and Verdier (2010) and Rochet and Wright (2010) consider competition between merchant-specific
private label and bank-issued general-purpose cards.
(31.) McAndrews and Wang (2008) also explicitly consider card benefits and attach monetary values to them.
(32.) Default rates and probability of theft will differ across countries. For Italy, Alvarez and Lippi (2009) estimate
the probability of being pickpocketed at around 2 percent in 2004. For the United States, Scholtes (2009) reported
that credit card default rates hit a record of more than 10 percent in June 2009.
(33.) Reserve Bank of Australia (2008a), 17.
(34.) For a summary of antitrust challenges in various jurisdictions, see Bradford and Hayashi (2008).
(35.) For more discussion about card rewards, see Agarwal, Chakravorti, and Lunn (2010), Carbó-Valverde and
Liñares Zegarra (2009), and Ching and Hayashi (2010).
(36.) For more discussion, see Chang, Evans, and Garcia Swartz (2005) and Hayes (2007).
(37.) On December 16, 2002, the Council of the European Union adopted Council Regulation (EC) No. 1/2003 on
the implementation of the rules on competition laid down in Articles 81 and 82 of the Treaty Establishing the
European Community (that is, the 1997 consolidated version of the Treaty of Rome). The new regulation came into
effect on May 1, 2004. For more discussion on the EC's ruling on MasterCard, see Bolt (2008).
(38.) Prager et al. (2009) review the US payment card market and consider potential regulations.
(39.) For a theoretical model on honor-all-cards rules, see Rochet and Tirole (2010).
(40.) For a detailed account of the merchants' position, see Constantine (2009).
Wilko Bolt
Wilko Bolt is an economist at the Economics and Research Division of De Nederlandsche Bank.
Sujit Chakravorti
Sujit "Bob" Chakravorti is the Chief Economist and Director of Quantitative Analysis at The Clearing House.
Mobile Telephony
Oxford Handbooks Online
Mobile Telephony
Steffen Hoernig and Tommaso Valletti
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Online Publication Date: Nov 2012
Subject: Economics and Finance, Economic Development
DOI: 10.1093/oxfordhb/9780195397840.013.0006
Abstract and Keywords
This article, which covers the recent advances in the economics of mobile telephony, first deals with specific
market forces and how they shape competitive outcomes. It then investigates the regulation of certain wholesale
and retail prices, closely following regulatory practice. Competition in mobile telephony is usually characterized by
the presence of a fairly small number of competitors. Competition for market share is fierce, which benefits
consumers through lower subscription fees. It is clear that with more networks, consumers will benefit from
termination rates closer to cost. “Traffic management” technologies have made it possible to direct mobile phones
to roam preferentially onto specific networks. While regulation has been applied to termination rates and to
international mobile roaming, no such intervention appears likely concerning retail prices. Mobile networks'
business model may change from mainly providing calls and text messages to providing users with access to
content.
Keywords: mobile telephony, economics, wholesale prices, retail prices, traffic management, mobile networks, competition, regulation
1. Introduction
Mobile (or cellular) communications markets have been growing at an impressive rate over the last two decades,
with worldwide subscriptions increasing from several million to five billion users in 2010, across all continents. This
growth was fueled not only by high adoption rates in developed economies, but increasingly by penetration in
developing economies, where a fixed communications infrastructure was often lacking.
In the beginning the only purpose of mobile phones was to make and receive telephone calls to and from fixed-line
and other mobile networks. In addition to being a telephone, modern mobile phones also support many additional
services, such as text messages, multimedia messages, email, and Internet access, and have accessories and
applications such as cameras, MP3 players, radios, and payment systems.
The underlying technology has undergone regular changes. Pre-1990 networks were based on analog
transmissions, after which digital standards such as GSM (global system for mobile communications) were
introduced, denoted “second generation” networks. These networks were designed primarily for voice
communications. As of this writing, the transition to third generation networks [e.g., universal mobile
telecommunications system (UMTS) or code-division multiple access (CDMA)], which are designed for
higher rates of data traffic, is under way. Auctions are already being run in many countries in order to hand out spectrum for
fourth generation (long term evolution, LTE) services, which should allow transmission speeds high enough
to rival fixed broadband connections.
(p. 137) The mobile telephony market has provided fertile territory for a large number of theoretical and empirical
papers in economics. Perhaps one reason is that its institutional features span so many interesting phenomena:
competition, regulation, network effects and standards (between parallel and different generations), two-sided
platforms (mobile networks as platforms for media services), auctions (radio spectrum), etcetera. In this chapter we
set out recent developments in the academic literature as concerns the analysis of competition and regulation in
mobile telephony markets, mainly as concerns voice calls. Our central interest lies in the core principles of pricing
of mobile networks and their interconnection. Issues related to other services (Internet, music, social networking),
two-sided platforms, standards, and competition policy are dealt with in other chapters of this Handbook.
Naturally, competition and regulation interact, so our division of this chapter into two parts needs to be justified. The
first part of the chapter (“competition”) deals with specific market forces and how they shape competitive
outcomes (be they good or bad for consumers or society). The second part (“regulation”) then provides an
analysis of the regulation of certain wholesale and retail prices, closely following regulatory practice. Since both
authors live in the European Union, policy issues relevant to Europe are at the forefront of this chapter. The
economic questions we tackle are of more general interest, however, and extend beyond these boundaries.
We hope that our chapter will provide the reader with the necessary tools to understand the upcoming decade of
significant changes in mobile (but not only) communications markets. Needless to say, there have been earlier
surveys of competition in mobile telephony and the accompanying literature, covering a wide range of different
topics. We refer the reader to Laffont and Tirole (2000), Armstrong (2002), Vogelsang (2003), Gruber and Valletti
(2003), and Gans et al. (2005).
2. Competition
Competition in mobile telephony is usually characterized by the presence of a fairly small number of competitors
(typically, two to four physical networks of different sizes). Barriers to entry are mainly due to a limited number of
licenses granted by national authorities, reflecting a scarcity in the spectrum of electromagnetic frequencies that
are needed to operate a mobile telephony network. This physical restriction has been overcome to some extent in
recent years due to the creation of mobile virtual network operators (MVNOs), which are independent firms who do
not own a physical network but rather rent airtime on existing ones.
Furthermore, networks sell wholesale services (“termination”) to each other and often compete in tariffs which
endogenously create network effects at the network level (rather than at the industry level). These features of the
market can have (p. 138) significant short-term and long-term effects on how the market functions. In the
following, we consider the setting of retail and wholesale prices, market structure, and dynamic issues.
2.1. On-Net and Off-Net Retail Pricing
The year 1998 saw the publication of three seminal articles for the study of the economics of mobile
communications: Armstrong (1998), Laffont et al. (1998a, LRTa), and Laffont et al. (1998b, LRTb). Of these, the
former two consider nondiscriminatory or uniform pricing, that is, calls within the same network (on-net) and to
other networks (off-net) are charged at the same price. With both linear and two-part tariffs, they find that call
prices are set on the basis of a perceived marginal cost given by c = c_o + α·c_t + (1 – α)·a, where c_o denotes the
marginal cost of origination (the corresponding cost of termination is c_t), α denotes the own market share of an
operator, and a is the mobile termination rate (MTR), paid only to terminate those calls that are made to the rival,
which has a market share of (1 – α).
The latter article studies the same setup with discrimination between on-net and off-net calls, that is,
“discriminatory tariffs,” where the relevant perceived marginal cost levels are c_on = c_o + c_t for on-net calls and
c_of = c_o + a for off-net calls. LRTb find that, since networks’ costs are higher for off-net calls if the MTR is larger than
the cost of termination (a > c_t), networks will charge more for off-net calls than for on-net calls. Under competition in
two-part tariffs, where networks charge a fixed monthly fee plus per-minute prices for calls, networks would set call
prices equal to perceived marginal cost, that is, p_on = c_on and p_of = c_of. Any difference between on-net and off-net
prices would then be entirely explained by termination rates above cost (p_of − p_on = a − c_t).
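To make these cost expressions concrete, the following minimal sketch (the cost figures, termination rate, and market share are purely illustrative assumptions of ours, not values taken from the literature) computes the perceived marginal costs under uniform and discriminatory pricing and the implied on-net/off-net differential:

```python
# Illustrative computation of perceived marginal costs in the LRT setting.
# All figures (cents per minute) and the market share are assumptions.

c_o = 2.0    # marginal cost of call origination (assumed)
c_t = 1.0    # marginal cost of call termination (assumed)
a = 3.0      # mobile termination rate (MTR), here set above cost (assumed)
alpha = 0.6  # own market share of the network (assumed)

# Uniform pricing: a share alpha of calls stays on-net (own termination at
# cost c_t), while a share (1 - alpha) goes off-net and pays the MTR a.
c_uniform = c_o + alpha * c_t + (1 - alpha) * a   # 3.8

# Discriminatory tariffs: separate perceived costs for on-net and off-net calls.
c_on = c_o + c_t    # 3.0
c_of = c_o + a      # 5.0

# With two-part tariffs, call prices equal perceived marginal cost, so the
# on-net/off-net price differential is exactly a - c_t.
print(c_uniform, c_on, c_of, c_of - c_on)   # 3.8 3.0 5.0 2.0
```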
As LRTb have pointed out, this price differential has an impact on how networks compete. It creates “tariff-mediated network effects” by making it cheaper to call recipients on the same network and thus creating
incentives to be on the same network as one's primary contacts. Under the standard assumption that subscribers
of all networks call each other randomly (this is called a “uniform calling pattern”), this result implies that
consumers will want to subscribe to the largest network. Therefore competition for market share will be fierce,
which benefits consumers through lower subscription fees. On the other hand, it makes life harder for recent
entrants, who will find it more difficult to sign up subscribers.
Hoernig et al. (2010) analyze nonuniform calling patterns in this respect and find that, if calling patterns get more
concentrated in the sense that subscribers mostly call “neighbors” rather than random people, this effect on
competition becomes weaker because on-net (respectively off-net) prices will be distorted upward (respectively
downward) due to networks’ attempt at price-discriminating against more captive consumers.1
An important recent addition to this literature is the consideration of subscribers’ utility of receiving (rather than just
making) calls, which results in a positive (p. 139) externality imposed on receivers by callers. The implications for
network competition were first studied in Jeon et al. (2004). They show that, as concerns on-net calls, the
externality is internalized by charging the efficient price below cost. More precisely, if the utility of receiving the
quantity q of calls is given by γ·u(q), where γ ∈ [0, 1] measures the strength of the call externality, the on-net price
will be equal to p_on = c_on/(1 + γ) < c_on.
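As a purely illustrative calculation (the numbers are ours, not from Jeon et al., 2004): with an on-net cost of c_on = 3 cents per minute and a call externality of γ = 0.5, the efficient on-net price is p_on = 3/(1 + 0.5) = 2 cents per minute, below cost, because part of the value of every on-net call accrues to its receiver.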
On the other hand, off-net call prices will be distorted upward for strategic reasons: Since a receiver's utility
increases with the number of calls received, each competing network tries to restrict the number of off-net calls
made by increasing its off-net price. This penalizes both its own customers (as they will make fewer calls) and the
rival's customers (as they will receive fewer calls). If the call externality is very strong, the latter effect prevails and
the distortion in off-net call prices may become so large that a “connection breakdown” occurs, that is, no more
off-net calls will be made as they are too expensive. Cambini and Valletti (2008) show that the probability of a
connection breakdown is reduced if one takes into account that calls give rise to return calls, which arises when
calls made and received are complements to each other.
Hoernig (2007) confirms the distortion in off-net prices if networks are of asymmetric size, but also shows that
larger networks charge a higher off-net price than smaller networks, under both linear and two-part tariffs. The
reason is that for a large network the cost of increasing off-net prices, which consists of a corresponding
compensation of own customers through lower on-net or subscription prices, is smaller, while the benefit, in terms of
attracting switching consumers who want to receive calls from a large subscriber base, is larger. As a result, under a uniform
calling pattern more calls will be made from the small network's customers to those of the large network than vice
versa, which implies net wholesale payments from small to large networks.
A further issue studied in Hoernig (2007) is whether increasing the differential between on-net and off-net prices
can be used with anti-competitive intent. More precisely, he investigates whether it could be used as a means of
predation, that is, to reduce the competitor's profits. He finds that setting high off-net prices can indeed reduce the
competitor's wholesale profits, but that it is also a rather costly means of doing so given the implied compensation
to its own customers.
2.2. A Puzzle—Why Termination Rates Are Above Cost Despite What the Models Predict
A large part of the literature on competition between networks is concerned primarily with their interconnection and
the setting of the corresponding wholesale prices, which are denoted as call “termination rates.” Armstrong (1998),
LRTa, LRTb, and Carter and Wright (1999) considered the question of whether networks could achieve collusive
outcomes in the retail market by jointly choosing the termination (p. 140) rate. This research question should be
seen in the light of the broader question of whether competition between firms owning communications
infrastructures should involve only minimal regulation, such as an obligation to give access and negotiate over the
respective charges, or whether wholesale prices should be regulated directly. A concern is that wholesale rates
might be set in such a way as to relax competition in the retail market, that is, that termination rates could be used
as an instrument of “tacit” collusion.
What these papers found is that the answer depends on the types of tariffs used. With linear tariffs, that is, tariffs
that only charge for calls made, networks would coordinate on termination rates above cost in order to raise the
cost of stealing each other's clients. LRTa also consider two-part tariffs, that is, tariffs with a fixed monthly payment
and additional charges for calls, with uniform pricing, and find that networks’ profits are neutral with respect to the
termination rate—any gain from changing the access charge in equilibrium is handed over to consumers through
lower fixed fees. This is not to say that the termination rate is irrelevant in this case; quite the contrary, since it
continues to determine the level of retail prices.
The puzzle arises when one considers the final and practically very relevant case, that of two-part tariffs under
termination-based discrimination between on-net and off-net prices: Gans and King (2001), correcting a mistake in
LRTb, show that networks would actually want to set a termination rate below cost in order to reduce the
competitive intensity maintained by high on-net/off-net differentials and the resulting tariff-mediated network
effects. Furthermore, Berger (2004) showed that if call externalities are strong, then networks will want to choose a
termination rate below cost even if competition is in linear discriminatory tariffs.
In contrast with these latter findings, market participants and sectoral regulators in particular have repeatedly
voiced concern that unregulated wholesale charges for mobile termination are too high (and significantly above
cost). A simple explanation for this fact may be that networks may set their termination rates unilaterally, that is, by
maximizing profits over their own termination rate without actually talking to their competitors, as discussed for
example in Carter and Wright (1999). This corresponds to a typical situation of “double marginalization,” where two
simultaneous margins at the wholesale and at the retail level will lead to a very inefficient outcome. The “market
analysis” performed under the European Regulatory Framework for Communications adheres to this logic: Each
(mobile) network is a monopolist on termination of calls to its own customers and therefore has the market power to
raise wholesale prices significantly above cost.
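As a stylized illustration of this double-marginalization logic (the figures are our own, not taken from the chapter): if termination costs 1 cent per minute and the terminating network unilaterally adds a 4-cent wholesale margin, the originating network perceives 5 cents as its marginal cost and adds its own retail margin on top, so the final call price embeds two successive markups and fewer calls are made than if a single firm priced over the 1-cent cost.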
Even so, there have been a number of approaches that show that mobile networks may want to set termination
rates above cost even if they negotiate with each other as might have been imposed as a “light” regulatory
obligation. This is, for instance, the approach followed in the United States, where under the 1996
Telecommunications Act operators must negotiate pair-wise reciprocal termination (p. 141) rates. In all the cases
outlined below, it is the interaction with additional economic effects that makes networks choose a high termination
rate, while the underlying model is the same as in Gans and King (2001). We now turn the discussion to these
additional effects, which include the termination of calls from both fixed and mobile networks (section 2.2.1), differences among
customers in the number of calls they make or in the number of calls destined to them (section 2.2.2), and the impact
of termination rates on market structure (section 2.2.3).
2.2.1. Arbitrage
One issue that had been neglected, on purpose, in the work on two-way interconnection was interconnection with
the fixed network. The economics of the latter is in fact quite different, as pointed out by Gans and King (2000),
Mason and Valletti (2001), and Wright (2002). The fixed network, most of the time, is heavily regulated in the
sense that it is forced to interconnect with mobile networks and must charge cost-oriented termination rates for
incoming calls, an order of magnitude below current mobile termination rates. This implies that mobile networks
have all the bargaining power in setting their termination rates for fixed-to-mobile calls.2 Competition between
mobile networks for subscribers does not at all lower fixed-to-mobile termination rates. Rather, networks attempt to
maximize the corresponding profits received from calls initiated and paid by fixed users calling their mobile
subscribers. Hence, mobile networks set high termination rates and spend part of this money on customer
acquisition. If callers on the fixed network cannot identify the mobile network they are calling, then the resulting
pricing externality (the network that raises its termination rate only suffers a part of the reduction in demand) leads
to termination rates even above the monopoly level.
Two-way interconnection relates to this problem in two ways: either both mobile-to-mobile and fixed-to-mobile termination rates are forced by regulation to be set at the same level, or “arbitrage” possibilities force
them to be so, as discussed in Armstrong and Wright (2009).
The typical case of “arbitrage” and its effects is France. Before the introduction of a unique mobile termination rate
in 2005, mobile-to-mobile calls were exchanged at a termination rate of zero (“bill & keep,” discussed later), while
fixed-to-mobile termination rates were high. The discrepancy in the rates attracted arbitrageurs, using the so-called
“GSM gateways.” Fixed operators could cut their costs by routing all the fixed-to-mobile traffic via a GSM
gateway that transformed their calls into mobile-to-mobile calls and, by doing so, avoided the termination rate.
Mobile networks in the end reacted to this and preferred to charge equally for both types of termination, thus
preventing further arbitrage attempts.
Taking as the model for mobile-to-mobile interconnection the Gans and King (2001) framework, Armstrong and
Wright (2009) analyze whether the incentives to set high or low termination rates are stronger if the rate is the
same for both types of termination. They find that networks will want to set rates above cost, though somewhat
lower than under pure fixed-to-mobile interconnection.
(p. 142) 2.2.2. Elastic Subscription Demand and Customer Heterogeneity
The basic result of Gans and King (2001) arises because of a business stealing effect: competing operators prefer
termination rates below cost in order to soften competition for subscribers. This result arises in the context of
“mature” markets, that is, markets with a perfectly price-inelastic demand for subscription. If, instead, total
subscription demand were elastic, by setting high retail prices to consumers, firms would suffer from a reduced
network externality effect. Firms may thus have a common incentive to increase market penetration, as this
increases the value of subscription to each customer. Note that this second force works against the first one, since
softening competition would cause a reduction in the number of subscribers. Hurkens and Jeon (2009) study this
problem, generalizing earlier results in Dessein (2003), and still find that operators prefer below-cost termination
rates.
Allowing for heterogeneity of customers of the type analyzed by Cherdron (2002), Dessein (2003), and Hahn
(2004) does not suffice to overturn this result, either. They consider situations where consumers differ in the
amount of calls they place, and operators set fully nonlinear pricing schedules rather than the (simpler) two-part
pricing schedules studied in the earlier literature.
Heterogeneity of calling patterns, however, can produce situations where operators are better off by setting
above-cost termination rates. Jullien et al. (2010) show that this can arise when there are two different groups of
users, heavy users and light users, and when the light users have an elastic subscription demand. Also, light users
are assumed to receive far more calls than they make. They find that, in the absence of termination-based price
discrimination, firms prefer termination rates above cost because of a competition-softening effect: losing a caller
to a rival network increases the profit that the original network makes from terminating calls on its light users which
are receivers. Termination-based price discrimination dilutes this result, however, as it re-introduces the basic
effect of Gans and King (2001).
Hoernig et al. (2010) study the role of “calling clubs.” That is, people are more likely to call friends (people similar
to themselves in terms of preferences for an operator) than other people. In this case, the calling pattern is not
uniform as typically assumed in the literature, but skewed. They show that if the calling pattern becomes more
concentrated, then firms prefer to have a termination rate above cost. Essentially, the “marginal” consumer who is
indifferent between the offers of the two networks does not care much about the size of the two networks, and
therefore the role of “tariff-mediated” network externalities is much watered down. A conceptually similar
mechanism arises in Hurkens and Lopez (2010). They relax the assumption of rationally responsive expectations
of subscribers and replace it by one of fulfilled equilibrium expectations, where the expectations of consumers
about market shares do not change with price variations (off the equilibrium path). Again, this diminishes the role
played by the mechanism identified by Gans and King (2001).
(p. 143) Calling clubs are also studied by Gabrielsen and Vagstad (2008) and—together with switching costs—
they may give a reason for operators to charge above-cost termination rates. If all members of a calling club are
subscribing to the same network, price discrimination will tend to increase individual switching costs, and this may
enable firms to charge higher fixed fees. To reach this result, it is essential that some customers face very high
exogenous switching costs to make other people reluctant to relocate away from those friends who are locked in.
In Gabrielsen and Vagstad (2008), as well as in Calzada and Valletti (2008) who also study calling clubs,
subscribers with identical preferences are always part of the same calling club, and these clubs are completely
separated from each other. Instead, Hoernig et al. (2010) allow for more arbitrary overlaps between calling patterns
of different subscribers. Each consumer is more likely to call other consumers that are more closely located in the
space of preferences, yet calls will be placed also to people that are far away. This is used to describe ties
between partially overlapping social networks.
2.2.3. Foreclosure and Entry
Another reason for high termination rates that has been advanced is potentially anti-competitive practices put in
place by incumbent networks. This is not only a frequent complaint of recent entrants in mobile
markets; the academic literature has shown that higher termination rates can in principle be used to foreclose
entrants. Essentially, in order for entry to be successful, an entrant needs termination access to the networks of its
competitors and needs to be able to offer calls to these networks at reasonable prices to its own customers. High
termination rates make this difficult.
LRTa and LRTb argue that if entrants are small in terms of network coverage then an incumbent full-coverage
network might simply refuse interconnection or charge a termination rate so high that the entrant cannot survive.
Lopez and Rey (2009) revisit and broaden the analysis of the case of a monopoly incumbent facing entry, and
show that the incumbent may successfully foreclose the entrant while charging monopoly prices at the retail level.
However foreclosure strategies are profitable only when they result in complete entry deterrence, while using high
termination rates just to limit the scale of entry, without deterring it entirely, is not profitable for the incumbent.
Calzada and Valletti (2008) consider a setting similar to Gans and King (2001), with the extra twist that existing
networks negotiate an industrywide (nondiscriminatory) termination rate, which will then apply also to potential
entrants. Hence incumbents can decide whether to accommodate all possible entrants, only a group of them, or
use the termination rate charge to completely deter entry. Incumbents thus face a trade-off. If they set an efficient
(that is, industry profit maximizing) mark-up, they maximize profits ex post for a given number of firms. However,
this makes entry more appealing ex ante, thus potentially attracting too many entrants and reducing profits. Faced
with this threat, incumbents may want (p. 144) to distort the mark-up away from efficiency in order to limit the
attractiveness of entry.
2.3. The Impact of the Size and Number of Firms
While the seminal papers on network interconnection concentrated on symmetric duopoly,3 a range of additional
interesting results can be obtained by allowing for asymmetrically sized networks, on the one hand, and more than
two networks, on the other. Apart from being more realistic in terms of assumptions, these effects are relevant in
practice but cannot be captured in symmetric duopoly models.
Asymmetry can have important effects on network interconnection, but one simple fallacy should be avoided.
Assuming that the large network has M subscribers and the small network N subscribers, under a balanced calling
pattern the number of calls from the large to the small network is proportional to M*N, while the number of calls in
the opposite direction is proportional to N*M. That is, the number of calls in either direction is equal. Thus any
effects of asymmetry will arise due to additional phenomena.
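As a minimal numerical sketch of this point (the subscriber numbers and the per-pair calling rate below are our own hypothetical values), bilateral traffic is indeed equal in both directions under a uniform calling pattern:

```python
# Illustrative check that a uniform (balanced) calling pattern equalizes
# bilateral traffic between two networks of unequal size.
# All numbers are hypothetical.

M = 3_000_000           # subscribers of the large network (assumed)
N = 1_000_000           # subscribers of the small network (assumed)
calls_per_pair = 0.002  # expected calls per ordered subscriber pair (assumed)

# Each subscriber is equally likely to call any other subscriber, so
# cross-network traffic in each direction is proportional to M*N.
large_to_small = M * N * calls_per_pair
small_to_large = N * M * calls_per_pair

print(large_to_small == small_to_large)  # True: the flows are balanced
```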
One issue is that asymmetry affects networks' bargaining power in disputes over termination rates in different ways.
With retail competition in linear tariffs the large network has high bargaining power and can impose termination
rates that maintain its large market share and harm consumers, while with two-part tariffs and reciprocal termination
rates the large network would prefer a termination rate equal to cost, if the asymmetry is large enough (Carter and
Wright 1999, 2003, with asymmetry due to brand preferences). On the other hand, Cambini and Valletti (2004)
argue that different conclusions can be reached if the asymmetry between networks arises from quality differences
and affects the number of calls a customer makes.
Smaller networks benefit from being able to charge higher termination charges than incumbents during the entry
phase. Apart from avoiding foreclosure as mentioned above, raising the entrant's termination rate increases
competitive intensity, consumer surplus, and the entrant's profits as shown in Peitz (2005a, b). On the other hand, if
smaller networks’ termination rates are not regulated then they may have incentives to set the latter at excessive
levels. Dewenter and Haucap (2005) show that if callers cannot distinguish which network they are calling, then
due to the resulting pricing externality (the demand reduction due to a higher price is spread over all networks) a
reduction in the termination rate of larger networks should lead smaller networks to further increase theirs.
As concerns the setting of retail prices, Hoernig (2007) underlines that due to call externalities large networks have
a strategic incentive to reduce the number of calls received by their rivals’ customers through higher off-net
prices. The resulting traffic flows between networks will be unbalanced, with more call minutes emanating from
smaller networks than from larger ones, which results in an “access deficit,” that is, small networks make net
access payments to larger ones.
(p. 145) With respect to the number of networks, several new results obtain if one goes beyond duopoly. One
question is whether the result of Gans and King that with multi-part tariffs networks would jointly agree on a
termination rate below cost was robust to the presence of more networks. Calzada and Valletti (2008) show that it
is, but that as the number of networks increases it will approach cost from below.4
Hoernig (2010a) considers an arbitrary number of asymmetric networks.5 Among his findings are the following: on-net/off-net differentials become smaller as the number of networks increases; while the fixed-to-mobile waterbed
(see Section 3.1) is full with two-part tariffs, with linear tariffs networks retain a share of termination profits, which
decreases in the number of networks and competitive intensity. An important duopoly result that is not robust to
changes in the number of networks is that under multi-part tariffs consumer surplus decreases with lower
termination rates: With at least three networks and sufficiently strong call externalities, consumer surplus increases
when termination rates are lowered. This result is essentially due to the larger share of off-net calls. Furthermore,
while it is known that in duopoly and linear tariffs higher termination rates lead to lower on-net prices, with at least
three networks on-net price can increase because networks will compete less for customers. Thus with more
networks it is clearer that consumers will benefit from termination rates closer to cost.
2.4. Multi-Stage Models of Interconnection
There are as of yet relatively few truly dynamic studies of mobile communications markets, as opposed to the wider
literature on network effects, which concentrates on competition between “incompatible” or non-interconnected
networks. While most existing studies use a static model to portray a “steady state” of the market, some issues are
inherently dynamic in nature. One is the dynamics of entry and market structure under network effects, and a
second is the issue of incentives for network investment. Since neither can be dealt with in static models, we have
grouped them in this section.
2.4.1. Market Structure Dynamics
Cabral (2010) presents a dynamic network competition framework where a constant number of consumers enters
and exits the market, and where depending on the circumstances market shares either equalize or the larger
network will increase its dominance over time. He shows that regulated termination rates above cost increase the
tendency for dominance of the larger network. Furthermore, a rate asymmetry in favor of the smaller network
makes the larger network compete harder and may lower the former's discounted profits.
Hoernig (2008b) considers a dynamic model of late entry into a growing communications market. He finds that
tariff-mediated network effects hamper (p. 146) entry in the presence of locked-in consumers. A higher
asymmetric termination rate increases the market share and discounted profits of the network concerned. Profits
from fixed-to-mobile termination are fully handed over to mobile consumers in the aggregate (i.e., there is a full
waterbed effect), but the large (small) network hands over less (more) than its actual termination profits due to the
strategic effects of customer acquisition.
Lopez (2008) considers the effect of switching costs on how networks compete for market share over time. He
shows that future levels of termination rates above cost reduce earlier competition for subscribers. Thus the profit
neutrality result of LRTa breaks down once one takes repeated competition into account.
2.4.2. Network Investment
Cambini and Valletti (2003) and Valletti and Cambini (2005) introduce an investment stage, where firms first choose
a level of quality for their networks and then compete in prices. They show that operators now have an incentive to
set reciprocal termination rates above cost, in order to underinvest and avoid a costly investment battle. Above-cost termination rates make firms more reluctant to invest: the reason is that an increase of the investment in
quality, relative to the rival, generates more outgoing calls and thus leads to a call imbalance which is costly
precisely when termination rates are set above cost. Key to generating this result is that firms can commit to an
access pricing rule prior to the investment stage.
A future avenue of research that needs to be explored is how the increasing penetration of data-based retail
services impacts on competition and investment into third and fourth generation networks. Coverage could again
become an issue, but the retail and interconnection pricing structures should be quite different from those in use
today. While current models of competition in mobile markets capture the (largely symmetric) bi-directional nature
of voice and text communications among users, the literature still lacks models that study more asymmetric data
flows between, for instance, users and websites. These models should be addressed by future research aimed at
discussing issues such as net neutrality, network investments, and pricing models for mobile web and wireless
broadband.
3. Regulation
In this section we discuss key regulatory questions that have arisen in the industry. We have organized the
material as follows. First (Section 3.1), we look at the regulation of wholesale prices when mobile operators receive
calls from other, noncompeting operators (most important, fixed-line users calling mobile phones). Because the
termination end on the mobile networks is a bottleneck, competition problems are expected to arise there. Then
(Section 3.2) we look at the problem of termination of calls between competing mobile operators. In this case,
(wholesale) (p. 147) termination prices are both a source of revenues (when terminating rival traffic) and a cost
(when sending traffic to the rival). In Section 3.3, we further look at the strategic impact that wholesale charges
have among competing operators when they create price discrimination between on-net and off-net charges.
Regulators sometimes want to limit the extent of such discrimination. Finally (Section 3.4), we discuss regulatory
problems related to specific access services supplied to entrants without a spectrum license, and to foreign
operators to allow their customers to roam internationally.
3.1. Regulation of Fixed-to-Mobile Termination Rates: Theory and Empirics of the Waterbed Effect
As anticipated in Section 2.2.1, in a setting where mobile operators compete against each other and receive calls
from the fixed network, competition does not help to keep fixed-to-mobile (FTM) termination rates low. This situation
has been called one of “competitive bottlenecks.” Mobile operators have the ability and incentives to set monopoly
prices in the market for FTM calls (as the price there is paid by callers on the fixed line, not by own mobile
customers), but the rents thus obtained might be exhausted via cheaper prices to mobile customers if competition among mobile operators is vigorous.
The intuition for the monopoly pricing result for the FTM market is simple: imagine FTM termination rates were set at
cost; then one mobile operator, by raising its FTM termination rate, would be able to generate additional profits that
it could use to lower subscription charges and attract more customers. While the mobile sector would therefore not
be making any excess profits overall, an inefficiently low number of FTM calls would be made.
In the absence of network externalities, Gans and King (2000), Armstrong (2002), and Wright (2002) show that the
welfare maximizing level of the FTM termination rate is equal to cost. This rate level can only be achieved by direct
price regulation. If, instead, there are network externalities (more specifically, benefits that accrue to fixed
customers from the growth of the mobile subscriber base, as they have more subscribers that they can call), it is
welfare enhancing to allow mobile operators to earn additional profits from fixed customers that can be used to
attract more mobile subscribers.6 Even in this case, Armstrong (2002) and Wright (2002) show that the welfare
maximizing level of the FTM termination rate would always be below the level that mobile operators would set if
unregulated—thus the presence of network externalities would not obviate the need for regulation. Valletti and
Houpis (2005) further extend these arguments to account for the intensity of competition in the mobile sector and
for the distribution of customer preferences.
Given the strong case for regulatory intervention, it is not a surprise that many countries have decided to intervene
to cut these rates. Indeed, all EU member states, (p. 148) as well as several other countries, have done so, to the
benefit of consumers calling mobile phones. However, reducing the level of FTM termination rates can potentially
increase the level of prices for mobile subscribers, causing what is known as the “waterbed” or “seesaw” effect.
The negative relationship between termination rates and prices to mobile consumers is a rather strong theoretical
prediction that holds under many assumptions about the details of competition among mobile operators (Hoernig,
2010a).
Over the last decade, there has been considerable variation in the toughness of regulatory intervention both
across various states and, within states, over time and among different operators (typically, entrants have been
treated in a more lenient way compared to incumbents, at least in the earlier days). This has provided interesting
data to test various aspects of the theories that gave intellectual support to the case for regulatory intervention.
Genakos and Valletti (2011a) document empirically the existence and magnitude of the waterbed phenomenon by
using a monthly panel of mobile operators’ prices across more than 20 countries over six years. Their results
suggest that although regulation reduced termination rates by about 10 percent to the benefit of callers to mobile
phones from fixed lines, this also led to a 5 percent increase (varying between 2 and 15 percent, depending on the
estimate) in mobile retail prices. They use a difference-in-difference design in the empirical specification,
controlling for country-specific differences (as, say, the United Kingdom has special features different from those of
Italy), for operator-specific differences (as, say, Vodafone in the UK has access to different radio frequencies than
H3G UK), time-specific differences (as technological progress varies over time, as well as controlling for
seasonality effects), and consumer-specific differences (as the data they use report information for different usage
profiles, which may differ among themselves). After controlling for all these differences, they show that MTR
regulation had a significant additional effect on retail prices set by mobile operators. They also show that the
profitability of mobile firms is negatively affected by regulation, which is interpreted as evidence that the industry is
oligopolistic and does not pass termination rents on to its customers one-for-one.
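To make the structure of such a specification concrete, the following is a minimal sketch, in Python, of a fixed-effects regression in the spirit of the design described above. The data file, the variable names (log_price, log_mtr, operator, period, profile), and the clustering choice are illustrative assumptions for exposition, not the authors' actual data or code.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly panel with one row per operator, usage profile, and month:
# log_price - log retail price of the usage profile
# log_mtr   - log of the regulated mobile termination rate faced by the operator
# operator, period, profile - identifiers used as fixed effects
df = pd.read_csv("mobile_panel.csv")  # assumed file name, not the authors' data

# Operator dummies absorb country- and operator-specific differences (operators
# are nested within countries); month dummies absorb technological progress and
# seasonality; profile dummies absorb differences across usage profiles. The
# coefficient on log_mtr then captures the waterbed effect, expected to be
# negative (lower termination rates are associated with higher retail prices).
model = smf.ols(
    "log_price ~ log_mtr + C(operator) + C(period) + C(profile)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["operator"]})

print(model.params["log_mtr"])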
In follow-up work, Genakos and Valletti (2011b) take into account that the overall impact of regulation of termination
rates will balance both effects arising from fixed-to-mobile calls and mobile-to-mobile calls. While the first effect
unambiguously should push up mobile retail prices, the latter is less clear, and will depend on the type of tariff the
customers subscribe to, as reviewed in section 1.2. They show that the waterbed effect is stronger for post-paid
than for pre-paid contracts, where the former can be seen as an example of nonlinear tariffs, while the latter are a
proxy for linear tariffs.
Cunningham et al. (2010) also find evidence of the waterbed effect in a cross-section of countries. This is also the
conclusion of Dewenter and Kruse (2011), although they follow an indirect approach, as they test the impact of
mobile termination rates on diffusion, rather than looking directly at the impact on prices. Since the waterbed effect
predicts that high MTRs should be associated with low (p. 149) mobile prices, it also predicts that diffusion will be
faster in those markets with high termination rates, which is what Dewenter and Kruse (2011) find. Andersson and
Hansen (2009) also provide some empirical evidence on the waterbed effect. They test the impact on the overall
profitability of changes in mobile termination rates. They find that MNOs’ profits do not seem to vary statistically with
changes in these MTRs. This is consistent with a hypothesis of a “full” (one-for-one) waterbed effect.
The literature just reviewed falls short of predicting, from the data, what the optimal level of intervention would be,
possibly because of the empirical approaches taken (cross-country comparisons, rather than empirical structural
models at a single-country level). This is a fruitful area for future research. An alternative is to calibrate theoretical
models with realistic demand and supply parameters (see Harbord and Hoernig, 2010).
3.2. Regulation of Mobile-to-Mobile Termination Rates: Who Should Pay for Termination?
The literature reviewed in Section 3.1 concentrates on the optimal setting of termination rates in one direction only.
However, as interconnection between two networks typically requires two-way access, there have been some
significant developments in the literature on two-way termination rates, which we reviewed in Section 2.2. The
results of this research are more directly applicable to environments where operators compete for the same
subscriber base (e.g., mobile-to-mobile). A relevant theme developed in this literature is “call externalities” (as
opposed to “network externalities,” which were more relevant in the literature on FTM termination) resulting from
the benefit a called party obtains from receiving a call, and how these should be taken into account by optimal
(welfare-maximizing) regulation. DeGraba (2003) provides an analysis of off-net calls between subscribers of
different networks. He shows that a “bill-and-keep” mechanism, where the reciprocal termination rate is set at zero,
is a way of sharing efficiently the value created by a call when callers and receivers obtain equal benefit from it.
Indeed, bill-and-keep is a system which is in place in the United States, where—in contrast to most of the rest of the
world—it is also the case that mobile customers pay to receive calls (this system is called receiving party pays,
RPP).7 This has led to some ambiguities in the regulatory debate addressed at finding remedies to high termination
rates in Europe, where instead a calling party pays (CPP) system is in place. Sometimes, CPP and RPP have been
considered as mutually exclusive pricing systems (Littlechild, 2006), rather than tariff structures that are chosen by
market participants. Cambini and Valletti (2008), following Jeon et al. (2004), show that there is a link between
outgoing and incoming prices as a function of the termination rate. They argue that RPP can emerge endogenously
when termination rates are set sufficiently low, for example using bill-and-keep. In other words, (p. 150) the
remedy lies in the way termination rates are set, not in the pricing structure: if the termination rate is very low, then
RPP might be introduced, not the other way around. Conversely, if termination rates are high, only a CPP system will
emerge. Lopez (2011) confirms this result for a model where both senders and receivers decide when to hang up.
Cambini and Valletti (2008) also establish the possibility of a first-best outcome when termination rates are
negotiated under the “light” regulatory obligation of reciprocity. They show that under some circumstances the
industry can self-regulate and achieve an efficient allocation via negotiated termination rates that internalize
externalities, in particular when there is “induced” traffic (i.e., incoming and outgoing calls are complements). If
instead incoming and outgoing calls are perfectly separable (as studied by Berger, 2004 and 2005; Jeon et al.,
2004), it would be impossible to achieve efficiency without intervention.
Another interesting attempt to see if it is possible to find less intrusive ways to regulate termination rates, and yet
achieve efficient outcomes, is proposed by Jeon and Hurkens (2008). While most of the literature on two-way
access pricing considers the termination rate as a fixed (per minute) price, they depart from this approach and
apply a “retail benchmarking” approach to determine termination rates. They show how competition over
termination rates can be introduced by linking the termination rate that an operator pays to the rivals to its own
retail prices. This is a very pro-competitive rule: By being more aggressive, an operator both gains customers and
also saves on termination payments.
3.3. Retail Prices: Mandatory Uniform Pricing
Charging higher prices for off-net calls than for on-net calls, even if the underlying network cost is the same,
creates welfare losses due to distorted off-net prices and leads to tariff-mediated network externalities which make
it harder for small networks to compete. Still, these externalities can benefit consumers because they raise
competitive intensity in general and result in higher consumer surplus. Thus an imposition of nondiscriminatory or
uniform pricing may protect entrant networks with positive consequences in the long run but may also raise prices
in the short run.
In the presence of call externalities there are additional strategic effects which lead to even higher off-net prices.
These effects disappear entirely under uniform pricing, thus call externalities have no strategic effect with
mandatory uniform prices.
There have been several attempts in practice at evaluating whether imposing such a restriction would be a useful
regulatory measure. This should also be considered against the background that, under the existing European Regulatory Framework for Communications, (domestic) retail prices on mobile networks cannot be regulated, in particular because discriminatory pricing occurs independently of whether or not networks are dominant in the
retail market. This does (p. 151) not mean, though, that such a restriction has never been tried out: When in 2006
the Portuguese Competition Authority (AdC) cleared the merger of Sonaecom and Portugal Telecom, which owned
the first and third largest mobile networks, respectively, some of the undertakings accepted by AdC were meant to
facilitate the entry of a new “third” mobile network. The merged entity (though evidently not its competitor
Vodafone, given that this was not a case of sectoral regulation but of merger remedies) would commit to charge
nondiscriminatory tariffs toward the potential new entrant during a transitory phase.8
Economic theory provides conflicting answers, though. LRTb conclude that, with linear tariffs, banning price
discrimination may hurt consumers in two ways. First, with discriminatory pricing the termination rate only affects
off-net prices and therefore reduces the extent of double marginalization implicit in the uniform price. Second, as
pointed out above, tariff-mediated network externalities make networks compete harder and thus raise consumer
surplus.
Hoernig (2008a) shows that under multi-part tariffs, depending on the form of demand, the welfare-maximizing
outcome may be achieved under any of three scenarios: (unregulated) price discrimination, uniform pricing, or an
intermediate cap on the on-net/off-net differential. The reason why generically each outcome may be optimal under
some circumstances is that imposing a cap on the difference between on-net and off-net prices not only brings off-net prices down but also raises on-net prices to inefficient levels.
Sauer (2010) shows that price discrimination has a detrimental effect on total welfare when network operators
compete in multi-part tariffs and the total market size and termination rates are fixed. When market expansion
possibilities exist, however, price discrimination becomes socially desirable. Cherdron (2002), assuming
unbalanced calling patterns, shows that a ban on price discrimination makes networks choose efficient termination
rates, instead of distorting them upward to reduce competitive intensity. In his model uniform pricing achieves the
first-best outcome.9 Hoernig et al. (2010) show that, in a model with nonuniform calling patterns, the imposition of
uniform pricing is always welfare increasing when access charges are equal to cost. Through creating tariff-induced network externalities, a difference between the price of on-net calls and that of off-net calls also affects
the degree of competition: they also show that it is only when calling circles are sufficiently relevant that price
discrimination can, through intensifying competition, increase consumer surplus.
3.4. Regulation of Nonessential Facilities
We now turn the analysis to problems arising from the oligopolistic, and possibly collusive, nature of the mobile
industry. While it is recognized that mobile operators compete to some extent against each other in retail
markets,10 there are allegations that competition is more muted in wholesale markets. Two cases stand out for their
practical relevance: access to rivals who do not have a spectrum license, and (p. 152) access to foreign
operators when their customers roam abroad. We analyze both cases in turn.
3.4.1. Call Origination
Radio spectrum is a scarce resource that limits the number of mobile network operators (MNOs) that can use their
own radio spectrum to provide services. Without securing spectrum, facility-based entry cannot occur in this
industry. In many countries, however, a different type of entry has been observed. In particular, MNOs have
reached agreements to lease spare spectrum capacity to so-called mobile virtual network operators (MVNOs), that
is, operators that provide mobile communications services without their own radio spectrum. In other countries,
MVNOs have not yet emerged and only licensed MNOs compete against each other. Why do these differences
arise? Can any inference be made on the intensity of competition in markets with and without MVNOs?
As is well known, the monopolist owner of a bottleneck production factor who is also present in the downstream
retail market may have both the incentive and the ability to restrict access to the bottleneck production factor, in
order to reduce competition in the downstream retail market. An example of this would be a monopolist owner of a
public switched telephone network, who may want to restrict access to its local loop through price or non-price
means, in order to avoid competition on the markets of fixed telephony or broadband access. Even if he could reap
the monopoly profit by setting a sufficiently high unregulated wholesale price (as the Chicago School has pointed
out, but which may not be credible if wholesale prices are negotiated successively), the resulting retail prices
would also be high, the very outcome that market liberalization intends to avoid. Low regulated wholesale prices
will then strongly reduce the possibility of reaping monopoly rents and lead to stronger incentives to use non-price
means to exclude competitors.
In mobile telephony, however, there are reasons to believe that MNOs have different incentives than fixed
telephony incumbents with respect to giving access to their networks. First and most important, MNOs are not
monopolist providers of network access. The relevant case for mobile operators is where several incumbents,
without exclusive control over an essential input, may or may not supply potential competitors. Therefore, even if an MNO denies access to its network to an entrant, there is no guarantee that the entrant will be blocked, as it may obtain access elsewhere. Second, also because MNOs are not monopolists, an MNO that gives access to an MVNO will share with the other MNOs the retail profit loss caused by the entrant. Third, and especially when entry cannot be blocked, it may be better for each MNO to be the one that gives access to the entrant. This allows the host MNO to earn additional wholesale revenues that at least partially compensate for the loss in retail revenues (cannibalization)
caused by the entrant.
To address this problem in a structured way, imagine a stylized situation with two integrated oligopolists (the MNOs)
that consider whether or not to grant wholesale access to an entrant (the MVNO). Each incumbent decides first and
(p. 153) independently whether or not to grant access to the MVNO, taking as given the access decision of the
rival MNO. Then incumbents (and eventually the entrant if access has been granted) compete at the retail level.
The solution to the access game is found by analyzing the following payoff matrix, where we imagine that
incumbents are symmetric:
                           MNO2
                   Access          No access
MNO1   Access      (a, a)          (c, b)
       No access   (b, c)          (d, d)
Each MNO decides, independently from the other MNO, whether or not to grant access to the MVNO. The payoffs in
each cell are derived according to the possible market interactions under the various outcomes. For instance, (c,
b) in the top-right cell summarizes the payoffs that arise when access is granted by MNO1 alone. Thus, three firms
compete at the retail level, and firm MNO1 earns both at the retail and at the wholesale level (the total amounts to
payoff c) while MNO2 does not provide access and its payoff b arises when competing against both the other
incumbent and the entrant.
Clearly, the solution to the access game depends on the entirety of payoffs under the various possible outcomes.
These payoffs are affected by the characteristics of both the access providers and the access seeker and the
complex competitive interactions between them. From the simple matrix above we can derive these results:
• If d > c, then {No access, No access} is a noncooperative Nash equilibrium of the access game. While taking an MVNO on board creates wholesale revenues, its presence also cannibalizes retail profits and drives down the equilibrium price level. If the latter effects are stronger, then we have d > c.
• If a > b, then {Access, Access} is a noncooperative Nash equilibrium of the access game. This outcome occurs when having an MVNO on board is preferable to it being located on some other network.11 This would occur principally if retail cannibalization is weak as compared to the increase in access profits.
In other words, multiple equilibria arise when both a > b and d > c. This multiplicity is endemic in access games with
“nonessential” facilities. It simply derives from the fact that there may be market situations where MNO1 reasons
unilaterally along these lines: “If my rival gives access, I will too; but if my rival does not, I won’t either.” Once
again, all this happens under the maintained assumption of uncoordinated competitive behavior.
To establish the conditions that make a candidate noncooperative equilibrium more or less likely to happen, one
needs to analyze very carefully the characteristics of both the MNOs and the MVNO. By providing access, an MNO
may expand its market if the MVNO is able to appeal to previously unserved market segments. On (p. 154) the
downside, the MVNO may cannibalize the MNO's sales. In addition, the entry of the MVNO may also affect the
overall market interaction with the other incumbents. In general, the MVNO may make the incumbents softer or
tougher competitors according to whether the MVNO affects the incumbents’ profits symmetrically or differentially
(see Ordover and Shaffer, 2007; Brito and Pereira, 2010; Bourreau et al., 2011).
The literature typically assumes that MNOs compete against each other. However, there is also the possibility that
incumbent MNOs compete, perhaps imperfectly, at the retail level, but may co-ordinate their behavior at the
wholesale level, in particular by refusing access to entrant MVNOs. This scenario has much policy relevance as it
has been widely discussed by European NRAs, e.g., in Ireland, Italy, France, and Spain (in the latter case collusion
at the wholesale level was indeed found, despite the absence of (direct) collusion at the retail level). To analyze this case,
look again at the matrix above that portrays the general access game: There may be situations when {Access,
Access} is a noncooperative equilibrium, but incumbents would be better off under {No access, No access}. That
is, a sort of prisoner's dilemma could arise. If firms could coordinate (that is, collude at the wholesale level), they
would choose this latter outcome without access. For this situation to occur, the following requirements must be jointly satisfied (the sketch after this list illustrates them numerically):
1. Without coordinated behavior, the natural noncooperative outcome of the market would be one with access, that is, it must be that a > b.
2. Without coordinated behavior, a noncooperative outcome without access would not arise, that is, it must be that c > d.
3. Incumbents must be better off without access than with access, that is, d > a.
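The conditions above can be checked mechanically. The following is a minimal sketch, in Python, that encodes the 2×2 access game from the payoff matrix, reports which outcomes are noncooperative Nash equilibria, and checks whether the wholesale-coordination configuration just described arises. The numerical payoffs at the end are purely illustrative assumptions, not values from any model in the chapter.

# Payoff convention as in the matrix above: rows are MNO1, columns are MNO2,
# and each cell lists (MNO1 payoff, MNO2 payoff). Because incumbents are
# symmetric, an MNO's payoff depends only on its own and its rival's decision.
def analyze_access_game(a, b, c, d):
    # A = grant access to the MVNO, N = refuse access.
    payoff = {("A", "A"): a, ("A", "N"): c, ("N", "A"): b, ("N", "N"): d}

    def no_profitable_deviation(own, rival):
        other = "N" if own == "A" else "A"
        return payoff[(own, rival)] >= payoff[(other, rival)]

    # An outcome is a Nash equilibrium if neither MNO gains by unilaterally
    # switching its access decision.
    equilibria = {
        (x1, x2): no_profitable_deviation(x1, x2) and no_profitable_deviation(x2, x1)
        for x1 in ("A", "N") for x2 in ("A", "N")
    }

    # Wholesale coordination pays when {Access, Access} is the noncooperative
    # outcome (a > b and c > d) yet both incumbents prefer no access (d > a).
    collusion_pays = a > b and c > d and d > a
    return equilibria, collusion_pays

# Purely illustrative payoffs satisfying c > d > a > b.
print(analyze_access_game(a=5, b=4, c=7, d=6))

With these payoffs, {Access, Access} is the unique noncooperative equilibrium, yet both incumbents would be better off jointly refusing access, which is exactly the prisoner's dilemma configuration discussed above.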
3.4.2. International Roaming
International roaming, that is, the use of mobile phones on visited networks in foreign countries, is made possible
by compatible handset technologies and bilateral agreements between networks. It has been clear, though, that
multiple market failures exist both at the wholesale and at the retail level, and that retail prices for roaming calls,
SMS and data usage have been set above network costs.
A fundamental market failure originates from the transnational nature of roaming services. A roaming voice call is
either originated or terminated on a visited foreign network, for which the customer's home operator makes a
wholesale payment to the visited network. Then the customer pays a retail price to his home network. SMS and
mobile data also involve the corresponding wholesale and retail payments. Then, if a national regulator attempts to
protect customers in his country, he can only do this by affecting the retail prices charged by the home network—
but if the corresponding wholesale prices, over which this regulator has no authority, remain high, low retail prices
are not viable. Conversely, if a regulator intervenes to cut wholesale prices, this will go to the benefit of foreign
users alone.
(p. 155) For this reason, any feasible attempt to regulate roaming prices must involve the sectoral regulators of all
countries involved, so that retail and wholesale prices can be controlled jointly. This is just what the European
Union meant to achieve with the “Roaming Regulation” of 2007, which was amended in 2009.12 In its present form,
it involves several transnational transparency and price control measures, both at the retail and at the wholesale
level.
A second fundamental problem in this market is that roaming services are sold as part of national bundles and that
the large majority of mobile customers only spends a small fraction of their time traveling abroad. Therefore,
customers may fail to take roaming prices into account when choosing their home operator and their bundle of
calls, which implies that there is not much competition at the retail level for the provision of roaming calls.
A further problem, which has started to be mitigated by technological advances, was that the choice of the visited
foreign network used to be more or less random. Mobile phones were programmed to roam on the network with the
strongest signal.13 While this made sense at a time when network coverage was not full, it implied that roaming
phones were constantly and unpredictably switching the roaming supplier. Note that in this case customers
rationally perceived roaming prices to be the same across operators. At the pricing level, this means that neither home networks nor their customers could predict which network would be used, and therefore visited
networks could charge high wholesale prices without suffering a corresponding reduction in demand.
“Traffic management” technologies have made it possible to direct mobile phones to roam preferentially onto
specific networks, creating the opportunity for networks to select their foreign partners and enter into specific
roaming agreements. Salsas and Koboldt (2004) analyze the effect of cross-border mergers and traffic
management. They conclude that the latter may help create price competition at the wholesale level, but only if
nondiscrimination constraints for wholesale pricing were lifted. Lupi and Manenti (2009) conclude that even then
the market failure would persist if traffic could not be directed perfectly.
Bühler (2010) analyzes the competitive effects of roaming alliances under various institutional settings. He
develops a two-country model with two competing duopolists in each country. In a first stage, alliances are formed
if joint profits of partners of an alliance exceed joint profits of these firms if the alliance is not formed. An alliance
fixes the reciprocal wholesale price for roaming calls and each member of an alliance is committed to purchase the
wholesale service from the partner of the alliance. Then, in a second stage, mobile network operators
simultaneously set roaming prices that apply to operators who do not purchase the wholesale service from an
alliance. These operators choose which supplier to buy from. Finally, in a third and last stage, operators set twopart retail tariffs. Since higher roaming prices imply higher perceived marginal costs of firms, an alliance has the
means to relax competition. By forming an alliance, firms are able to commit to be less aggressive. While higher
roaming prices lead to lower retail profits and, thus, from a downstream perspective are not profitable, they lead to
higher (p. 156) wholesale profits. The overall profit change is always positive for a small increase of the roaming
price above marginal costs. Bühler also shows that retail prices would be even higher and social welfare lower
under a nondiscrimination constraint at the wholesale level. Such a measure may mistakenly be introduced by a
sectoral regulator in line with the idea that all firms should have access to wholesale products on equal terms,
independent of whether they belong to an alliance.
4. Conclusions
In this chapter we have considered the main competitive and regulatory issues that have occupied research about
mobile communications markets for the last decade. Clearly the most important of these is the setting of termination
rates. While the Gans and King “paradox” of networks wanting to set low termination rates is still partly unresolved,
a consensus has emerged that termination rates closer to or even below cost are best for consumers, especially
when the fixed network is also taken into account. A second issue is network effects that are created through retail
pricing strategies, which have an impact on competitive intensity and market structure. While regulation has been
applied to termination rates, and also international mobile roaming, no such intervention seems likely concerning
retail prices.
Looking forward, as mobile termination rates are brought down and convergence speeds up, with mobile offers
increasingly being integrated technologically and from a pricing perspective into “quadruple-play” bundle offers, it
seems possible that flat-rate (or bucket) offers will come to dominate the market, even in Europe. These tariffs may
involve charging for the reception of calls, which, if the rebalancing from calling charges is done correctly, should
improve efficiency. Mobile networks’ business model may change from mainly providing calls and text messages to
providing access to content to users. This will not be easy, though, as networks tend to own the “pipes” but not the
content.
Acknowledgment
We would like to thank Carlo Cambini and the Editors for very useful comments.
References
Ambjørnsen, T., Foros, O., Wasenden, O.-C., 2011. Customer Ignorance, Price Cap Regulation, and Rent-seeking in
Mobile Roaming. Information Economics and Policy 23, pp. 27–36.
Andersson, K., Hansen, B., 2009. Network Competition: Empirical Evidence on Mobile Termination Rates and
Profitability. Mimeo.
Armstrong, M., 1998. Network Interconnection in Telecommunications. Economic Journal 108, pp. 545–564.
Armstrong, M., 2002. The Theory of Access Pricing and Interconnection. In: Cave, M., S. Majumdar, I. Vogelsang
(Eds.), Handbook of Telecommunications Economics, North-Holland, Amsterdam, pp. 397–386.
Armstrong, M., Wright, J., 2007. Mobile Call Termination. Mimeo. (p. 158)
Armstrong, M., Wright, J., 2009. Mobile Call Termination. Economic Journal 119, pp. 270–307.
Berger, U., 2004. Access Charges in the Presence of Call Externalities. Contributions to Economic Analysis & Policy
3(1), Article 21.
Berger, U., 2005. Bill-and-keep vs. Cost-based Access Pricing Revisited. Economics Letters 86, pp. 107–112.
Binmore, K., Harbord, D., 2005. Bargaining over Fixed-to-mobile Termination Rates: Countervailing Buyer Power as
a Constraint on Monopoly Power. Journal of Competition Law and Economics 1(3), pp. 449–472.
Birke, D., 2009. The Economics of Networks: a Survey of the Empirical Literature. Journal of Economic Surveys
23(4), pp. 762–793.
Birke, D., Swann, G.M.P., 2006. Network Effects and the Choice of Mobile Phone Operator. Journal of Evolutionary
Economics 16(1–2), pp. 65–84.
Bourreau, M., Hombert, J., Pouyet, J., Schutz, N., 2011. Upstream Competition between Vertically Integrated Firms.
Journal of Industrial Economics 56(4), pp. 677–713.
Brito, D., Pereira, P., 2010. Access to Bottleneck Inputs under Oligopoly: A Prisoners’ Dilemma? Southern Economic
Journal 76, pp. 660–677.
Bühler, B., 2010. Do International Roaming Alliances Harm Consumers? mimeo.
Busse, M.R., 2000. Multimarket Contact and Price Coordination in the Cellular Telephone Industry. Journal of
Economics & Management Strategy 9(3), pp. 287–320.
Cabral, L., 2010. Dynamic Price Competition with Network Effects. Review of Economic Studies 78, pp. 83–111.
Calzada, J., Valletti, T., 2008. Network Competition and Entry Deterrence. Economic Journal 118, pp. 1223–1244.
Cambini, C., Valletti, T., 2003. Network Competition with Price Discrimination: “Bill-and-Keep” Is Not So Bad after All.
Economics Letters 81, pp. 205–213.
Cambini, C., Valletti, T., 2004. Access Charges and Quality Choice in Competing Networks. Information Economics
and Policy 16(3), pp. 411–437.
Cambini, C., Valletti, T., 2008. Information Exchange and Competition in Communications Networks. Journal of
Industrial Economics 56(4), pp. 707–728.
Carter, M., Wright, J., 1999. Interconnection in Network Industries. Review of Industrial Organization 14(1), pp. 1–25.
Carter, M., Wright, J., 2003. Asymmetric Network Interconnection. Review of Industrial Organization 22, pp. 27–46.
Cherdron, M., 2002. Interconnection, Termination-based Price Discrimination, and Network Competition in a Mature
Telecommunications Market. Mimeo.
Corrocher, N., Zirulia, L., 2009. Me and You and Everyone We Know: An Empirical Analysis of Local Network Effects
in Mobile Communications. Telecommunications Policy 33, pp. 68–79.
Cunningham, B.M., Alexander, P.J., Candeub, A., 2010. Network Growth: Theory and Evidence from the Mobile
Telephone Industry. Information Economics and Policy 22, pp. 91–102.
DeGraba, P., 2003. Efficient Intercarrier Compensation for Competing Networks when Customers Share the Value of
a Call. Journal of Economics & Management Strategy 12(2), pp. 207–230.
Dessein, W., 2003. Network Competition in Nonlinear Pricing. RAND Journal of Economics 34(4), pp. 593–611. (p.
159)
Dewenter, R., Haucap, J., 2005. The Effects of Regulating Mobile Termination Rates for Asymmetric Networks.
European Journal of Law and Economics 20, pp. 185–197.
Dewenter, R., Kruse, J., 2011. Calling Party Pays or Receiving Party Pays? The Diffusion of Mobile Telephony with
Endogenous Regulation. Information Economics & Policy 23, pp. 107–117.
Gabrielsen, T.S., Vagstad, S., 2008. Why is On-net Traffic Cheaper than Off-net Traffic? Access Markup as a
Collusive Device. European Economic Review 52, pp. 99–115.
Gans, J., King, S., 2000. Mobile Network Competition, Consumer Ignorance and Fixed-to-mobile Call Prices.
Information Economics and Policy 12, pp. 301–327.
Gans, J., King, S., 2001. Using “Bill and Keep” Interconnect Arrangements to Soften Network Competition.
Economics Letters 71, pp. 413–420.
Gans, J., King, S., Wright, J., 2005. Wireless Communications. In: M. Cave et al. (Eds.), Handbook of
Telecommunications Economics (Volume 2), North Holland, Amsterdam, pp. 243–288.
Genakos, C., Valletti, T., 2011a. Testing the Waterbed Effect in Mobile Telecommunications. Journal of the European
Economic Association 9(6), pp. 1114–1142.
Genakos, C., Valletti, T., 2011b. Seesaw in the Air: Interconnection Regulation and the Structure of Mobile Tariffs.
Information Economics and Policy 23(2), pp. 159–170.
Gruber, H., Valletti, T., 2003. Mobile Telecommunications and Regulatory Frameworks. In: G. Madden (Ed.), The
International Handbook of Telecommunications Economics (Volume 2), Edward Elgar, pp. 151–178.
Hahn, J.-H., 2004. Network Competition and Interconnection with Heterogeneous Subscribers. International Journal
of Industrial Organization 22, pp. 611–631.
Harbord, D., Hoernig, S., 2010. Welfare Analysis of Regulating Mobile Termination Rates in the UK (with an
Application to the Orange/T-Mobile Merger). CEPR Discussion Paper 7730.
Harbord, D., Pagnozzi, M., 2009. Network-Based Price Discrimination and ‘Bill-and-Keep’ vs. ‘Cost-Based’ Regulation of Mobile Termination Rates. Review of Network Economics 9(1), Article 1.
Hoernig, S., 2007. On-Net and Off-Net Pricing on Asymmetric Telecommunications Networks. Information Economics
and Policy 19(2), pp. 171–188.
Hoernig, S., 2008a. Tariff-Mediated Network Externalities: Is Regulatory Intervention Any Good? CEPR Discussion
Paper 6866.
Hoernig, S., 2008b. Market Penetration and Late Entry in Mobile Telephony. NET Working Paper 08–38.
Hoernig, S., 2010a. Competition Between Multiple Asymmetric Networks: Theory and Applications. CEPR Discussion
Paper 8060.
Hoernig, S., Inderst, R., Valletti, T., 2010. Calling Circles: Network Competition with Non-Uniform Calling Patterns.
CEPR Discussion Paper 8114.
Hurkens, S., Jeon, D.S., 2009. Mobile Termination and Mobile Penetration. Mimeo.
Hurkens, S., Lopez, A., 2010. Mobile Termination, Network Externalities, and Consumer Expectations. Mimeo.
Jeon, D.-S., Hurkens, S., 2008. A Retail Benchmarking Approach to Efficient Two-Way Access Pricing: No
Termination-based Price Discrimination. Rand Journal of Economics 39, pp. 822–849. (p. 160)
Jeon, D.-S., Laffont, J.-J., Tirole, J., 2004. On the ‘Receiver-pays Principle’. RAND Journal of Economics 35(1), pp.
85–110.
Jullien, B., Rey, P., Sand-Zantman, W., 2010. Mobile Call Termination Revisited. Mimeo.
Laffont, J.-J., Rey, P., Tirole, J., 1998a. Network Competition I: Overview and Nondiscriminatory Pricing. RAND Journal
of Economics 29(1), pp. 1–37.
Laffont, J.-J., Rey, P., Tirole, J., 1998b. Network Competition II: Price Discrimination. RAND Journal of Economics
29(1), pp. 38–56.
Laffont, J.-J., Tirole, J., 2000. Competition in Telecommunications, MIT Press, Cambridge, Mass.
Littlechild, S.C., 2006. Mobile Termination Rates: Calling Party Pays vs. Receiving Party Pays. Telecommunications
Policy 30, pp. 242–277.
Lopez, A., 2008. Using Future Access Charges to Soften Network Competition. Mimeo.
Lopez, A., 2011. Mobile Termination Rates and the Receiver-pays Regime. Information Economics and Policy 23(2),
pp. 171–181.
Lopez, A., Rey, P., 2009. Foreclosing Competition through Access Charges and Price Discrimination. Mimeo.
Lupi, P., Manenti, F., 2009. Traffic Management in Wholesale International Roaming: Towards a More Efficient
Market? Bulletin of Economic Research 61, pp. 379–407.
Majer, T., 2010. Bilateral Monopoly in Telecommunications: Bargaining over Fixed-to-mobile Termination Rates.
Mimeo.
Marcus, J. S., 2004. Call Termination Fees: the U.S. in Global Perspective. Mimeo.
Mason, R., Valletti, T., 2001. Competition in Communications Networks: Pricing and Regulation. Oxford Review of
Economic Policy 17(3), pp. 389–415.
Ordover, J., Shaffer, G., 2007. Wholesale Access in Multi-firm Markets: When is It Profitable to Supply a Competitor?
International Journal of Industrial Organization 25, pp. 1026–1045.
Parker, P.M., Röller, L.-H., 1997. Collusive Conduct in Duopolies: Multimarket Contact and Cross-Ownership in the
Mobile Telephone Industry. RAND Journal of Economics 28(2), pp. 304–322.
Peitz, M., 2005a. Asymmetric Access Price Regulation in Telecommunications Markets. European Economic Review
49, pp. 341–358.
Peitz, M., 2005b. Asymmetric Regulation of Access and Price Discrimination in Telecommunications. Journal of
Regulatory Economics 28(3), pp. 327–343.
Salsas, R., Koboldt, C., 2004. Roaming Free? Roaming Network Selection and Inter-operator Tariffs. Information
Economics and Policy 16, pp. 497–517.
Sauer, D., 2010. Welfare Implications of On-Net/Off-Net Price Discrimination. Mimeo.
Sutherland, E., 2010. The European Union Roaming Regulations. Mimeo.
Valletti, T., Cambini, C., 2005. Investments and Network Competition. Rand Journal of Economics 36(2), pp. 446–
467.
Valletti, T., Houpis, G., 2005. Mobile Termination: What is the “Right” Charge? Journal of Regulatory Economics 28,
pp. 235–258.
Vogelsang, I., 2003. Price Regulation of Access to Telecommunications Networks. Journal of Economic Literature
41, pp. 830–862.
Wright, J., 2002. Access Pricing under Competition: an Application to Cellular Networks. Journal of Industrial
Economics 50(3), pp. 289–315.
Notes:
(1.) See Birke (2009), Birke and Swann (2006), and Corrocher and Zirulia (2009) for empirical evidence on the
strength of these effects.
(2.) For a dissenting view, see Binmore and Harbord (2005). Indeed, the fixed network is also typically a large
operator that should be able to affect the termination rate it pays to terminate calls, despite being subject to
some regulatory oversight. This problem of bargaining over termination rates between two large operators, and in
the “shadow of regulation,” still has to be studied in full. A first attempt in this direction is Majer (2010).
(3.) LRTa and LRTb do study entry with marginal coverage. The work discussed in this section considers entrants
with full coverage but a small number of subscribers.
(4.) See also Armstrong and Wright (2007) for a model of multiple symmetric networks with call externalities.
(5.) Harbord and Hoernig (2010) apply this model to calibrate the effects of termination regulation and the 2010
merger between Orange and T-Mobile in the UK.
(6.) This aspect was particularly relevant in earlier days of mobile growth, but has diminished in importance given
the current high levels of mobile penetration.
(7.) See Marcus (2004) and Harbord and Pagnozzi (2009).
(8.) In the end the merger did not go ahead, so that this measure was never actually applied.
(9.) Cambini and Valletti (2008) instead show that when mobile operators can charge for both making and receiving
calls, they endogenously choose a termination rate that does not create differences in on-net and off-net prices at
equilibrium when calls made and received are complements in the subscriber's utility function.
(10.) Parker and Röller (1997) study the early (duopoly) days of the U.S. cellular industry, and find that prices were
significantly above competitive levels. Their findings are in line with a theory of retail collusion due to cross-ownership and multi-market contacts. Busse (2000) shows that these firms used identical price schedules set
across different markets as their strategic instruments to coordinate across markets.
(11.) Note that this case is not the opposite of the previous one since the actions taken by the other player are
different.
(12.) The term “Regulation” here has a specific legal meaning. It is a measure adopted jointly by the European
Commission and the Parliament, and has legal force in parallel to the existing Regulatory Framework. See
Sutherland (2010) for a more detailed description.
(13.) Ambjørnsen et al. (2011) argue that this system led mobile operators to compete on the “wrong” type of
strategic variable, namely by overinvesting in particular hotspots to attract visiting traffic with the strongest signal
rather than with the cheapest wholesale price.
Steffen Hoernig
Steffen Hoernig is Associate Professor with "Agregação" at the Nova School of Business and Economics in Lisbon.
Tommaso Valletti
Tommaso Valletti is Professor of Economics at the Business School at Imperial College, London.
Two-Sided B to B Platforms
Oxford Handbooks Online
Two-Sided B to B Platforms
Bruno Jullien
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Online Publication Date: Nov
2012
Subject: Economics and Finance, Economic Development
DOI: 10.1093/oxfordhb/9780195397840.013.0007
Abstract and Keywords
This article contributes to the understanding of business-to-business (B2B) platforms, drawing on insights from the
two-sided market literature. The concept of this market refers to a situation where one or several competing
platforms present services that help potential trading partners to interact. A platform that intermediates transactions
between buyers and sellers is addressed, where the service consists in identifying the profitable trade
opportunities. Two-sided markets provide a framework to investigate platforms' pricing strategies, and a convenient
and insightful framework for exploring welfare issues. Key determinants of the competitive process are whether
platforms obtain exclusivity from their clients or not, how differentiated they are, and what tariffs they can use.
Multihoming may enhance efficiency, but has the potential adverse effect of softening competition. It can be
concluded that while the literature has been concerned with antitrust implications, it has delivered few concrete
recommendations for policy intervention.
Keywords: B2B platforms, two-sided market, platforms pricing, welfare, multihoming, competition, tariffs
1. Introduction
The development of digital technologies has led to drastic changes in the intermediation process, which combined
structural separation of the physical and the informational dimensions of intermediation with major innovations in
the latter domains. As discussed by Lucking-Reiley and Spulber (2000) e-commerce may be the source of several
types of efficiency gains, including automation, better supply-chain management or disintermediation. This has led
the way to the emergence of a new sector of activity online, where “info-mediators,” building on traditional
Electronic Data Interchange, offer a wide range of electronic services helping buyers and sellers to find trading
partners and conduct trade online.1 As of 2005, the European e-Business Report (e-Business W@tch), which follows 10 sectors in the European Union, estimates that 19 percent of firms were using ICT solutions for e-procurement and 17 percent to support marketing or sales processes (in shares of employment). They also recently pointed to a
surge of e-business since 2005 after some period of stabilization and cost-cutting, as well as to the key role of
information technologies for innovating products. In the United States, the US Census Bureau2 estimates that e-commerce accounted in 2008 for 39 percent of manufacturers' shipments and 20.6 percent of merchant wholesalers' sales, while e-commerce sales are modest for retailers (3.6 percent of sales in 2008), although
increasing. For manufacturers, e-commerce is widespread among sectors, ranging from 20 to 54 percent of
shipments, the most active sectors being transportation equipment and beverage and tobacco products.3 A similar
pattern arises for (p. 162) wholesalers although there is more disparity across sectors (from 7 to 60 percent), the
most active sector being drugs (60 percent of sales), followed by motor vehicles and automotive equipment (47
percent).
BtoB intermediation platforms offer a wide and diverse range of services to potential buyers and sellers. One
category of services pertains to what can be referred to as matching services. This amounts to helping members of the platform identify opportunities to perform a profitable transaction (to find a match). The second category
concerns support functions that help traders to improve on the efficiency of trade. This may range from simple
billing and secured payment service up to integrated e-procurement solutions. Indeed the flexibility of electronic
services has created numerous possibilities for combining these services into e-business offers. While some sites
are specialized in guiding clients in finding the desired product with no intervention on the transactions4 others
offer a full supply chain management service.5
In what follows I will be concerned primarily with the first dimension of the activity, namely the matching service.
Moreover I will examine this activity from the particular angle of the two-sided market literature. The concept of a
two-sided market refers to a situation where one or several competing platforms provide services that help
potential trading partners to interact. It focuses on the fact that these activities involve particular forms of
externalities between agents, namely two-sided network externalities. The platform is used by two sides and the
benefits that a participant derives depend directly on who participates on the other side of the market.6 Most of the
literature on two-sided markets has focused on the determination of an optimal price-structure for the services of
the platforms in models where the mass of participants on the other side is a key driver of value for a participant,
as will be explained below.
BtoB platforms clearly have a two-sided dimension. The two sides are buyers and sellers, and a key determinant of
the value of the service is the number of potential trading partners that an agent can reach. Admittedly there are
other dimensions that matter as well. For instance eBay's innovation in dealing with issues of reliability and
credibility with an efficient rating system has been part of its success.7 Also potential traders may care about the
quality of the potential partners. The two-sided market perspective is partial but one that has proved to be
extremely useful in providing new and original insights on the business strategies of platforms, and that leads the
way for further inclusion of more sophisticated aspects in a consistent framework.
The objective of this chapter is to present the main insights of the literature in the context of electronic
intermediation and to open avenues for further developments. After an introduction to two-sided markets, I will
discuss the implications for optimal tariffs in the case of a monopoly platform, including the role of up-front
payments and of contingent transaction fees. Then the competitive case is discussed, with different degrees of
differentiation, the distinction between single-homing and multihoming, and different business models. The third
section (p. 163) is devoted to non-price issues such as tying, the design of the matching process and the
ownership structure. The last section concludes.
2. An Introduction to Two-Sided Markets
Apart from BtoB, examples of two-sided markets include payment card systems (Rochet and Tirole, 2001), video
games (Hagiu, 2006), music or video platforms (Peitz and Waelbroeck, 2006), media (Anderson and Coate, 2005) or health care (Bardey and Rochet, 2010).8 Telecommunication networks and the Internet are also instances of two-sided
markets that have been the object of many studies and the source of many insights.9
The literature on two-sided markets is concerned with the consequences for business strategies of the indirect
network externalities that generate a well-known chicken-and-egg problem: an agent on one side of the market is willing to participate in the platform activity only if he expects a sufficient participation level on the other side. Platforms’ strategies then aim at “bringing the two sides on board”10 and account for the demand externalities in the price structure. Along this line, Rochet and Tirole (2006) define a platform to be two-sided when profit and efficiency depend not only on the price level but also on the price structure.11
The most influential analysis of two-sided markets has been developed by Jean Tirole and Jean-Charles Rochet
(2003, 2006). Starting from the analysis of payment card systems, they developed a general theory of platform
mediated transactions highlighting the two-sided nature of these markets. Transactions take place on the platform
when supply meets demand, so that the total volume of transaction depends on the willingness to pay of the two
sides, the buyer side and the seller side. Let tb be the fee paid by a buyer per transaction and ts be the fee paid by
a seller per transaction. Then the maximal number of transactions that buyers will be willing to conduct is a function
Db(tb) that decreases with fee tb. One may define similarly a transaction demand by sellers Ds(ts). For a transaction
to take place, a match should be found between a buyer and a seller who are willing to conduct the transaction.
Thus the total number of transactions will increase with both Db and Ds. In the Rochet and Tirole (2003) formulation,
as in most of the two-sided markets literature, the total volume of transaction is (proportional to) the product of the
demands Db(tb)Ds(ts). This is the case for instance when each agent has a utility per transaction that is random,
Di(ti) is the probability that a randomly selected agent is willing to trade and all trade opportunities are exhausted.
This is also the case under random matching.
(p. 164) For every transaction the platform receives a total fee t = tb + ts and supports a cost c. Therefore the
profit of the platform is given by

π(tb, ts) = (tb + ts − c) Db(tb) Ds(ts) = (t − c) Db(tb) Ds(ts).

Maximizing the platform profit then yields the optimality conditions

Db(tb) + (t − c) Db′(tb) = 0 and Ds(ts) + (t − c) Ds′(ts) = 0,

or, writing ηi = −ti Di′(ti)/Di(ti) for the demand elasticity on side i = b, s,

t − c = tb/ηb = ts/ηs, which implies (t − c)/t = 1/(ηb + ηs).

Under reasonable assumptions, this yields a unique solution. Thus optimality results from a balancing of the charges imposed on the two sides. While the total margin depends on the aggregate elasticity ηb + ηs, the contribution of each side to the profit depends negatively on its own demand elasticity.
For our purpose, we can reinterpret the formula as a standard monopoly Lerner index formula with a correct
interpretation of the opportunity cost of increasing the fee on one side. For every agent on side i who is willing to
transact, the platform is gaining not only the fee ti but also the fee tj that the other side pays. Thus every
transaction of an agent on side i induces an effective cost c − tj along with the revenue ti. Thus we can rewrite the formula as

(ti − (c − tj)) / ti = 1/ηi for each side i, where j denotes the other side.
With this formulation we see that the fee on side i is the monopoly price for this side, but for an opportunity cost
that accounts for the revenue generated on the other side by the participation of the agent. As we shall see, this
generalizes to a more complex set-up.
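As a simple numerical illustration of these formulas, the following minimal sketch, in Python, computes the profit-maximizing fee structure of a monopoly platform and verifies the Lerner condition on the buyer side. The linear demands Db(t) = 1 − t, Ds(t) = 1 − t and the per-transaction cost c = 0.2 are illustrative assumptions, not values taken from the chapter.

def Db(t):
    return max(1.0 - t, 0.0)   # buyers' transaction demand

def Ds(t):
    return max(1.0 - t, 0.0)   # sellers' transaction demand

c = 0.2                        # platform's cost per transaction

def profit(tb, ts):
    # Profit = (total fee - cost) times the transaction volume Db(tb)Ds(ts).
    return (tb + ts - c) * Db(tb) * Ds(ts)

# Grid search for the profit-maximizing fee structure (tb, ts).
grid = [i / 1000.0 for i in range(1001)]
tb_star, ts_star = max(((tb, ts) for tb in grid for ts in grid),
                       key=lambda fees: profit(*fees))

eta_b = tb_star / (1.0 - tb_star)   # elasticity of Db at the optimal buyer fee
print("optimal fees:", tb_star, ts_star, "total margin:", tb_star + ts_star - c)
# Lerner check on the buyer side: the markup over the opportunity cost c - ts
# should equal 1/eta_b.
print("lhs:", (tb_star - (c - ts_star)) / tb_star, "rhs:", 1.0 / eta_b)

With these demands the search returns tb = ts = 0.4, so that the total margin t − c equals 0.6 and the Lerner condition holds on both sides with ηb = ηs = 2/3.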
The price theory of two-sided markets is thus one of balancing the contributions to profit of the two sides. While the
Rochet and Tirole (2003) model has been very influential in promoting the concept of two-sided market, its
applicability to the issue of BtoB intermediation is limited for two reasons. The first reason is that it assumes that all
potential buyers and sellers are participants to the platform and, in particular, that there is no fixed cost to
participating. The second is that transactions conducted through BtoB platforms involve a transfer between the
transacting parties. Whenever buyers and sellers bargain over the distribution of the surplus from transaction and
are fully aware of each other transaction fees tb and ts, they will undo the price structure so that the final
distribution of the surplus depends only on the total transaction fee t = tb + ts.12 For instance, in the case of a
monopoly seller, the total price (price plus buyer fee) paid by the buyer depends only on the total fee t and not on
the fee structure since the seller will adjust its price to any rebalancing of the fees between the two parties.
According to the definition of Rochet and Tirole (2006), the market would then not be a two-sided market.
(p. 165) Armstrong (2006), as well as Caillaud and Jullien (2001, 2003) developed alternative approaches based
on the idea of indirect network effects and membership externalities.13 In these models, agents are considering the
joint decision (possibly sequential) of participating in the platform and transacting with the members from the other side with whom they can interact on the platform. The models focus on one dimension of heterogeneity on each side, namely the costs of participating in the platform (time and effort devoted to the registration and the activity on the platform), and assume uniform (but endogenous) benefits from transactions within each side. Some progress toward more dimensions of heterogeneity is made by Rochet and Tirole (2006) and more recently by Weyl and
White (2010).
3. A Model for Commercial Intermediation
Let us consider a platform that intermediates transactions between buyers and sellers, where the service consists in identifying profitable trade opportunities. To access the platform, the two types of agents are required to register. Once this is done they can start to look for trading partners. Suppose that, prior to his participation in the identification process for profitable matches, an agent on one side considers all agents on the other side as equally likely to be a trading partner. In other words, a buyer has no ex-ante preference about the identity of the sellers who participate. Suppose also that there is no rivalry between members of the same side. In particular, sellers' goods are non-rival, sellers have no capacity constraints, and buyers have no financial constraints. In this set-up, the willingness of a buyer to participate depends solely on the prices set by the platform and on the anticipated total value of the transactions performed. Moreover, if the matching technology is neutral and the same for all, the total value of transactions is the product of the number of sellers and the expected value per seller (where the latter is itself the probability that the transaction occurs times the expected value of a successful transaction).
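In symbols (a restatement of the previous sentence; the probability q and conditional value w are illustrative notation rather than the chapter's), the anticipated value of transactions for a buyer is

\[ V_b \;=\; N_s \times q \times E[\,w \mid \text{transaction}\,], \]

where Ns is the number of participating sellers, q the probability that a given match results in a transaction, and E[w | transaction] the expected value of a successful transaction.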
In this context, we wish to discuss how the allocation resulting on the platform is affected by the level and the
structure of the prices set by the platform.
There is a variety of pricing instruments that an intermediary may rely upon for generating revenues from the
transactions occurring on its platform. Describing this diversity of situations is beyond the scope of this chapter.
For the purpose of discussing the two-sided nature of the intermediation activity, it is sufficient to distinguish
between two types of prices. The first category includes fees that are insensitive to the volume of activity and are paid up-front by any user who wishes to use the platform services. I will refer to these as registration fees (they are sometimes referred to as membership fees in the two-sided market literature). (p. 166) The registration fees will be denoted rb and rs for the buyers and the sellers respectively. Usually, participation in the platform can be monitored so that registration fees can be imposed on both sides, but they may be too costly to implement, in particular in BtoC or CtoC activities.
The second category of pricing instruments includes those that vary with the number and/or value of transactions. Transaction fees are commonly used in BtoB activities and can take complex forms. For example, eBay.com charges sellers an "insertion fee" plus a "final value fee," paid only if the item is sold, which depends on the closing value. Of course, this requires the ability to monitor transactions, which may depend on the nature of the transaction and the contractual agreements with the buyer or the seller. For instance, if the match between a buyer and a seller triggers multiple transactions, they may avoid paying fees to the platform by conducting the transactions outside the platform. In this case, implementing fees proportional to the value of transactions requires a commitment by one party to report the transactions, either contractually or through long-run relationships and reputation effects.
When feasible, the intermediary may thus charge fees tb and ts per transaction for the buyer and the seller respectively. As pointed out above, with commercial transactions the buyer and the seller negotiate a transfer, so that we can assume that only the total fee t = tb + ts matters for determining whether a transaction occurs and how the surplus is shared.
Notice that while registration fees and transaction fees are natural sources of revenue for BtoB platforms, such platforms also rely on alternative sources of revenue. First, advertising is a way to finance the activity without a direct contribution from the parties. Insofar as advertising is a way to induce interactions between potential buyers and advertisers, it can be analyzed with the toolkit of two-sided markets.14 Advertising will not be considered here, but it is worth pointing out that electronic intermediation has also dramatically affected advertising by allowing direct monitoring of the ads' impact through the click-through rate. Unlike with banner advertising, it is difficult to draw a clear line between BtoB intermediation and click-through advertising. Indeed, in both cases the platform adapts its response to the customer's requests and behavioral patterns so as to propose some trade to him, and is rewarded in proportion to its success in generating interactions.
Other revenue-generating strategies amount to bundling the service with some information goods. This is not only the case for most search engines that act as portals, but also for many support services that are offered by BtoB platforms, such as billing, accounting, or any other information services for BtoB. Bundling may be profitable because it reduces transaction costs or exploits some form of economies of scope.15 Bundling may also be part of an articulated two-sided market strategy, an aspect that we will briefly address. Notice that in some cases bundling
can be embedded into prices by defining prices net of the value of the good bundled.16 For instance, if a service increases the value of any transaction by a fixed amount z at a cost cz for the platform, we can redefine the net transaction (p. 167) fee as t̃ = t − z and the net cost per transaction as c̃ = c − z + cz; the profit per transaction is then t̃ − c̃ = t − c − cz.
3.1. Membership Externalities
In the simplest version of the model, there is no transaction fee. Let us normalize the expected value of transactions for a pair of buyer and seller to 1. We can then denote by αb the expected share of this surplus for a buyer and by αs = 1 − αb the expected profit of a seller per registered buyer. Notice that αs and αb do not depend on the mass of sellers or buyers, which reflects the assumptions made above and which was the case in most of the two-sided market literature until recently (see however the discussion of externalities within sides below). This is in particular the assumption made by Armstrong (2006) and Gaudeul and Jullien (2001).
Let us denote by Nb and Ns the number of buyers and the number of sellers on the platform. Then the expected utility from transactions that a buyer anticipates can be written as αbNs. As a result, the number of buyers registering is a function of αbNs − rb, which we denote Db(rb − αbNs), where Db is non-increasing. Notice that this demand function is similar to a demand where quality is endogenous and measured in monetary equivalent. The term αbNs can thus be interpreted as a measure of quality that depends on the other side's participation level. Following this interpretation, the participation of sellers can be viewed as an input in the production of the service offered to buyers. As noticed in Jullien (2011), from the platform's perspective agents have a dual nature: they are both clients buying the service and suppliers offering their contribution to the service delivered to the other side. This intuition makes clear that, since the price charged to one side has to balance these two effects, it will tend to be lower than in one-sided markets. It is then no surprise that the price on one side may even be negative.
Following a similar reasoning, the expected utility from transactions of a seller is αsNb, leading to a level of registration Ds(rs − αsNb). With these demand functions, we obtain what can be seen as a canonical model for registration to the platform. The participation levels for registration fees rb and rs solve the system of equations (1)

\[ N_b = D_b(r_b - \alpha_b N_s), \qquad N_s = D_s(r_s - \alpha_s N_b). \]
Under some reasonable assumptions, this demand system defines unique quantities Nb(rb, rs) and Ns(rb, rs) given the registration fees. Notice that the model involves a network externality between buyers: although buyers are not (p. 168) directly affected by other buyers, in equilibrium each buyer creates a positive externality on the others through his impact on the sellers' participation. This type of externality is what the concept of indirect network effects refers to.
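As an illustration of how the system (1) pins down participation, take linear participation demands Di(x) = 1 − x (a functional form chosen here purely for exposition, not used in the chapter) and assume αbαs < 1:

\[ N_b = 1 - r_b + \alpha_b N_s, \quad N_s = 1 - r_s + \alpha_s N_b \;\;\Longrightarrow\;\; N_b = \frac{(1 - r_b) + \alpha_b (1 - r_s)}{1 - \alpha_b \alpha_s}, \qquad N_s = \frac{(1 - r_s) + \alpha_s (1 - r_b)}{1 - \alpha_b \alpha_s}. \]

Each side's participation falls with its own registration fee and also with the other side's fee, the latter channel being the indirect network effect just described.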
Remark 1
When the demand is not unique, the price structure may matter for selecting a unique demand. Suppose for instance that Di(x) is zero for x non-negative but positive for x negative. Then zero participation is an equilibrium at all positive prices. However, if one price is negative (say rb) and the other positive, the demands must be positive since Db(rb) > 0.
The platform profit is then given by (for clarity we set the cost per transaction to zero from now on, c = 0)

\[ \pi = (r_b - c_b)\,N_b + (r_s - c_s)\,N_s, \]

where cb and cs are the registration costs on the buyer side and the seller side respectively. Thus the two-sided platform's profit and the monopoly prices are similar to those of a multi-product seller, where the products are the participation levels on each side. An intuitive derivation of these prices can be obtained as follows.
Consider the following thought experiment: the platform reduces marginally the fee rb and raises the fee rs by an amount such that the participation of sellers remains unchanged. This is the case when the change of the fee rs is proportional to the change in buyers' participation: drs/dNb = αs. Then, since the sellers' participation remains unaffected, the change in buyers' participation is dNb = D′b(rb − αbNs) drb, which is positive since Db is non-increasing and drb < 0. Following this logic, the effect on profit of this "neutralized" change in price is

\[ d\pi = N_b\,dr_b + (r_b - c_b)\,dN_b + \alpha_s N_s\,dN_b. \]

At optimal prices the change should have zero first-order effect, implying that the optimal fee rb solves (the symmetric formula holds for the sellers' side): (2)

\[ r_b - (c_b - \alpha_s N_s) \;=\; \frac{D_b(r_b - \alpha_b N_s)}{-D_b'(r_b - \alpha_b N_s)}. \]
The interpretation is then that the registration fee is the monopoly price for the participation demand, but for an opportunity cost cb − αsNs. The opportunity cost is the cost cb net of the extra revenue that the platform can derive on the other side from the participation of the agent, which corresponds here to an increase in the registration fee by αs for each of the Ns sellers.17
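Continuing with the linear specification Db(x) = 1 − x used in the illustration above (again only for exposition), and holding the sellers' participation Ns fixed, formula (2) reads rb − (cb − αsNs) = Nb = 1 − rb + αbNs, so that

\[ r_b \;=\; \frac{1 + c_b + (\alpha_b - \alpha_s)\,N_s}{2}. \]

For a given Ns, the buyers' fee is lower the more value buyers create for sellers (a higher αs) and higher the more value buyers derive from sellers (a higher αb), in line with the opportunity-cost reading of (2).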
Monopoly prices exhibit several interesting properties. First, the presence of two-sided externalities (αb positive) tends to generate a lower price than without externalities, very much as in the case of direct network effects. The size of the effect on buyers' registration fees depends, however, on the level of participation on the other side. The side that values the other side's participation the least should face (everything else being equal) lower prices than the other side. This is because its participation is the most value-enhancing for the platform, which can leverage it on (p. 169) the other side. Of course, this effect has to be combined with standard cost and demand elasticity considerations.
A striking conclusion is that the optimal price can be negative on one side if the cost is low. In many cases negative prices are hard to implement, in which case the registration fee will be zero, possibly complemented with gifts and freebies. The theory thus provides a rational set-up to analyze the behavior of free services as the result of profit maximization. For this reason it is a natural tool in the economics of the Internet and media, where free services are widespread.
3.2. Transaction Fees
The canonical model abstracts from transaction fees, and it is important to understand the role that such fees can play. While registration fees are neutral to transactions (apart from their effect on participation in the platform), transaction fees may have a direct effect on transactions. This is always the case if the benefits from trade vary across transactions. Indeed, under efficient bargaining, a transaction occurs only when its total value is larger than the transaction fee.
To start with, suppose that transaction fees are non-distortionary, so that the volume of transactions is not affected. Assuming a constant sharing rule of the transaction surplus between the seller and the buyer, a transaction fee t leads to expected benefits αb(1 − t) for a buyer and αs(1 − t) for a seller. The expected revenue from transactions of the platform is then tNsNb, proportional to the number of transactions. In the case of distortionary transaction fees, we can similarly write the respective expected benefits per member of the other side as αbv(t) and αsv(t) with v(0) = 1. Then v(t) is the net surplus of a pair of buyer and seller from a transaction and the probability that a pair of buyer and seller transact is v'(t).18
In the case of distortionary transaction fees, the total expected surplus for a pair of buyer and seller is v(t) + v'(t)t. A classical argument for two-part tariffs shows that when the platform can combine a transaction fee with registration fees, the optimal transaction fee maximizes this total surplus (Rochet and Tirole, 2006; Jullien, 2007). The general intuition is that the platform can always rebalance the prices so as to increase the surplus from transactions and maintain participation levels, implying that the welfare gains generated are captured by the
platform. To see this, notice that the demand functions are given by (3)

\[ N_b = D_b\bigl(r_b - \alpha_b v(t) N_s\bigr), \qquad N_s = D_s\bigl(r_s - \alpha_s v(t) N_b\bigr). \]

Using the corresponding inverse demands, rb = D_b^{-1}(Nb) + αb v(t)Ns and rs = D_s^{-1}(Ns) + αs v(t)Nb, we can write the profit in terms of quantities as

\[ \pi = \bigl(D_b^{-1}(N_b) - c_b\bigr)N_b + \bigl(D_s^{-1}(N_s) - c_s\bigr)N_s + \bigl(v(t) + v'(t)\,t\bigr)N_b N_s. \]
(p. 170) With this formulation we see that the profit can be decomposed into two parts. The first part is a standard profit function for selling quantity Nb on the buyer side and Ns on the seller side. The second part is a term capturing the strategic dimension of the externality. From the profit formulation it is immediate that, for any participation levels on each side, the platform should set the fee at a level that maximizes the total expected surplus v(t) + v'(t)t.19
As mentioned above, the conclusion that transaction fees should aim at maximizing the surplus generated by the activity of agents on the platform is reminiscent of similar results for two-part tariffs. From this we know that it has to be qualified when applied to a more general context. For one thing, the conclusion relies on the assumption that agents are risk neutral. If a participant faces some uncertainty about future transactions, then transferring some of the platform revenue from fixed payments (registration fees) to transaction fees is a way to provide some insurance. The risk on the final utility of the agent is reduced, which may raise efficiency when the agent is risk averse.20 Second, the conclusion relies on the assumption that the expected benefits from transactions are uniform across buyers or across sellers (see below).
Hagiu (2004) points to the fact that the level of transaction fees also matters for the platform's incentives to foster transactions. In his model there is insufficient trade, so that total surplus is maximized by subsidizing trade. Running a deficit on transactions fosters more efficient trades by members of the platform; however, it may hinder the platform's incentives to attract new participants in a dynamic setting. Indeed, once a significant level of participation is reached, any new participant raises the volume of transactions of already registered members and thus the deficit that the platform incurs on these transactions. Conversely, positive transaction fees raise the gain from attracting a new member. The ability of the platform to commit to its future strategy then matters for the conclusion. A lack of commitment may lead the platform to opt for positive transaction fees.
3.3. Welfare
Two-sided markets provide not only a framework to analyze platforms' pricing strategies but also a convenient and insightful framework for analyzing welfare issues. Socially optimal pricing rules are derived in Gaudeul and Jullien (2001) and Armstrong (2006). These prices follow from standard arguments in welfare economics applied to networks. To discuss this, consider the case without transaction fees. Recall that the participation of an additional buyer generates an average value αs for every seller on the platform. As for any other externality (pollution or congestion, for instance), since the buyer makes his decision based solely on his own benefits, his participation decision is efficient when he faces a price that internalizes all the costs and benefits of other agents in society. In the case of a two-sided market, the net cost for society is the physical cost diminished by the value of the (p. 171) externality created on the other side, cb − αsNs. A similar argument on the other side shows that the allocation is efficient only when both sides face (total) prices pb = cb − αsNs and ps = cs − αbNb.
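A sketch of the argument behind these prices (the aggregate cost notation Ci(·) is introduced here for convenience): let Ci(Ni) denote the total participation cost of the Ni agents on side i with the lowest participation costs. Total surplus is

\[ W = N_b N_s - c_b N_b - c_s N_s - C_b(N_b) - C_s(N_s), \]

since each buyer-seller pair generates an expected surplus normalized to 1. Efficiency requires C′b(Nb) = Ns − cb: the marginal buyer should join as long as his participation cost is below the value he creates for the whole market net of the physical cost. A buyer facing a price pb joins whenever αbNs − pb exceeds his participation cost, so the marginal buyer's cost is αbNs − pb; equating the two expressions gives pb = cb − (1 − αb)Ns = cb − αsNs, and symmetrically on the seller side.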
Thus socially efficient prices are below marginal cost, a conclusion that is in line with the conclusions for direct network effects. One may then interpret these prices as "social opportunity costs." Notice that they coincide with the platform's opportunity costs. This is due to the fact that the expected benefit from transactions is uniform within each side. If this were not the case, the two opportunity costs would differ by a factor reflecting the traditional Spence distortion: the platform would care about the externality on the marginal member while social welfare would require caring about the externality on the average member (see the discussion in Weyl, 2010).
This being understood, one can apply standard intuition to these opportunity costs. For instance, optimal tariffs under budget-balance conditions can be derived as Ramsey prices for the two segments that constitute the two sides, whereby the mark-up over the opportunity cost is proportional to the inverse elasticity of participation (Gaudeul and Jullien, 2001).
Similar conclusions can be derived when transaction fees can be used (see Jullien, 2007). Clearly, social welfare is
maximized when the transaction fee maximizes the expected trade surplus. Therefore, subject to the caveats
discussed above, there is no divergence between privately optimal (monopoly) and socially optimal transaction
fees. Given the transaction fees, the analysis of socially optimal registration fees is the same as above but
accounting for the fact that the externality induced by the participation of an agent on one side includes not only
the value created for members on the other side but also the profit created for the platform.
3.4. Usage Heterogeneity
There are few contributions on optimal tariffs when both the benefits from transactions and the costs of participation are heterogeneous.
A notable exception is Weyl (2010), which provides a remarkable analysis of tariffs with heterogeneity in the transaction values αs and αb. Participation demand on side i = s, b in the case with no transaction fees then takes a non-linear form Di(pi, Nj) that depends on the joint distribution of the participation cost and the benefit αi. The intuition developed above concerning the opportunity cost of adding one more member to the platform extends. In particular, Weyl characterizes the increase ᾶi in the revenue per member of side j that a platform can obtain with one more member on side i, keeping the participation Nj constant on side j. The opportunity cost of selling to side i is then, as above, ci − ᾶiNj. Weyl then shows that monopoly pricing follows standard Lerner index formulas for these opportunity costs:

\[ \frac{p_i - (c_i - \tilde{\alpha}_i N_j)}{p_i} \;=\; \frac{1}{\varepsilon_i}, \]

(p. 172) where the elasticity term is with respect to the price.
He also extends the welfare analysis by showing that socially optimal registration fees are equal to a social opportunity cost derived as follows. For one more member on side i, let ᾱi be the mean increase in the utility of members who are registered on the other side. Then the social opportunity cost can be defined as ci − ᾱiNj.
Weyl identifies two distortions associated with the exercise of market power in two-sided markets. One is the standard monopoly mark-up over marginal cost. The second is reflected in the fact that ᾶi may differ from ᾱi, which corresponds to the standard distortion in the provision of quality by a monopoly. Indeed, as already pointed out, participation on one side can be viewed as a quality dimension on the other side. When deciding on the price on one side, the monopolist accounts for the effect on the marginal member on the other side and ignores inframarginal members. By contrast, a social welfare perspective requires accounting for the average member.
4. Competing Platforms
Competition between platforms is shaped by the chicken-and-egg nature of the activity, as the driver of success on one side is success on the other side. As pointed out in Jullien (2011), each user of a platform is both a consumer of the service and an input for the service offered to the other side. Platforms' pricing strategies then reflect the competition to sell the service, but also the competition to buy the input. As we shall see, this dual nature of competition may generate complex strategies using cross-subsidies and departures of prices from marginal costs. Key determinants of the competitive process discussed below are whether platforms obtain exclusivity from their clients or not, how differentiated they are, and what tariffs they can use. The situation where agents can register with several platforms is usually referred to as multihoming. In discussing the competitive outcome, I devote the first part to the case of exclusive dealing by buyers and sellers (single-homing), and then I discuss multihoming.
4.1. Overcoming the Multiplicity Issue
One issue that is faced when dealing with platform competition, that is akin to competition with network effects, is
the issue of multiplicity. There may be different allocations of buyers and sellers compatible with given prices,
which complicates the equilibrium analysis. There are at least three ways to address this issue. Caillaud and Jullien
(2003) develop a methodology to characterize the full (p. 173) set of equilibria, based on imposing specific
conditions on the demand faced by a platform deviating from the equilibrium (using the notion of pessimistic beliefs
discussed below). Another approach consists in focusing on situations where multiplicity is not an issue. Along this
line, Armstrong (2006) assumes enough differentiation between the platforms for the demand to be well defined.
Armstrong's model has proven to be very flexible and is widely used by researchers on two-sided markets.21
Ongoing work by Weyl and White (2010) relies on a particular type of non-linear tariffs introduced in Weyl (2010),
named “insulating tariffs.” These tariffs are contingent on participation levels of the two sides and designed in such
a way that participation on one side becomes insensitive to the other side's participation, thereby endogenously
removing the source of multiplicity.
The last approach is to choose one particular selection among the possible demands. For instance, Ambrus and Argenziano (2009) impose a game-theoretic selection criterion (coalitional rationalizability) that can be interpreted as some limited level of coordination between sides. Jullien (2011) and Hagiu (2006) introduce the concept of pessimistic beliefs, which assumes that agents always coordinate on the least favorable demand for one predetermined platform. Jullien (2011) notices that, since which demand emerges depends on agents' expectations, one platform will be at a disadvantage if agents view the competing platform as focal. The concept of pessimistic beliefs captures this idea in a formal way and makes it possible to study the effect on competition and entry of the advantage derived from favorable expectations.
Beyond the technical difficulty, this multiplicity issue is a reminder of the importance of agents' expectations for the outcome of competition between platforms. There has been little work on the role of reputation in the dynamics of BtoB marketplaces, an issue that deserves more attention (Jullien (2011) and Caillaud and Jullien (2003) can be interpreted as addressing these issues).
4.2. Competition with Exclusive Participation
The main contributions on competition between platforms are Armstrong (2006) and Caillaud and Jullien (2001, 2003).22 Exclusivity refers to the fact that agents can be active on only one platform. Whether this applies to BtoB platforms depends on the concrete context. For example, an industrial buyer that relies on the platform for its supply-chain management would typically have an exclusive relationship with the platform, while its suppliers may not. More generally, e-procurement may require the buyer to deal exclusively with the platform.
One lesson from the literature is that the nature of competition may be drastically affected by factors such as
complementary services offered by the platform and whether the platforms are perceived to be differentiated or
not. To illustrate this we can contrast several contributions in the context of competition with registration fees only.
(p. 174) Armstrong (2006) assumes that the platforms provide enough services for the demand to be always positive (Di(0) is positive in our model) and that these services are differentiated across platforms. More precisely,
Armstrong's model extends the framework of Section 2 by assuming that two symmetric platforms are differentiated
à la Hotelling on each side, with large enough transportation costs and full coverage of the market. With
differentiated platforms, the residual demand of one platform is well defined and the intuition on opportunity costs
applies. Suppose that each platform covers half of the market on each side, and one platform decides to attract
one more buyer away from its competing platform. This raises its value for sellers by α s but at the same time this
reduces the value of the competing platform for sellers by α s since the buyer is moving away from the competitor.
Therefore the platform can raise its registration fee rs by 2α s with unchanged sellers’ demand. Given that it serves
half of the market, the platform increases its profit on the seller side by 2α s × 1/2 when it attracts one more buyer.
The opportunity cost of attracting a buyer is then cb − α s, and by the same reasoning it is cs − α b for a seller.
Armstrong shows that the equilibrium registration fees are the Hotelling equilibrium prices for these opportunity
costs: ri = ci − α j + σi where σi is the equilibrium Hotelling mark-up (the transportation cost). The conclusion is thus
that prices are reduced on both sides when the markets are two-sided compared to the one-sided case (α s = α b =
0) and that the strongest effect is on the market that induces relatively more externality for the other side, a
conclusion in line with the lessons from the monopoly case.
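A numerical illustration of Armstrong's formula (the figures are chosen here purely for exposition): take symmetric transportation costs σb = σs = 1, registration costs cb = cs = 0, and surplus shares αs = 0.8 and αb = 0.2. Then

\[ r_b = c_b - \alpha_s + \sigma_b = 0.2, \qquad r_s = c_s - \alpha_b + \sigma_s = 0.8. \]

Buyers, whose presence is worth the most to the other side, are courted with a low fee, while sellers pay nearly the full Hotelling mark-up; with αs = αb the two fees would coincide at the one-sided Hotelling level c + σ.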
Caillaud and Jullien (2001), by contrast, analyze the case of non-differentiated platforms that provide pure intermediation services to homogeneous buyers and sellers. Suppose there is a mass 1 of agents on each side and that, when a mass n of members of side j register with platform k = A, B, a member of side i registered with the same platform obtains a utility αi n − ri, with no other value attached to the platform's services. In this context, demands are not well defined. If the absolute value of the price differential on each side is smaller than the average value αi of transactions, then there are two possible equilibria: one where all buyers and sellers join platform A and one where they all join platform B. To fix ideas,
assume that platform B faces "pessimistic beliefs," which here imposes that in the above situation all agents register with platform A. Then the most efficient competitive strategy for platform B takes the form of a divide-and-conquer strategy. When platform A offers prices ri ≤ αi, a strategy for platform B consists in setting fees rs − αs − ε < 0 for sellers (divide) and αb + inf{0, rb} for buyers (conquer). The platform subsidizes the sellers at a level that compensates them for being alone on the platform, thereby securing their participation. This creates a bandwagon effect on the other side of the market. Hence buyers are willing to pay αb to join the platform if rb > 0 (since platform A has no value without sellers) or αb + rb if rb < 0. A divide-and-conquer strategy thus subsidizes participation on one side and recoups the subsidy by charging a positive margin on the other side. The mirror strategy subsidizing buyers can also be considered. Caillaud and Jullien (2001) then show that one platform serves the whole market and obtains a positive profit equal to the difference between the values of transactions, |αs − αb|, and Caillaud (p. 175) and Jullien (2003) show that, more generally, when no restriction is imposed on the demand system (except some monotonicity condition), the set of equilibria consists in one platform serving the whole market and making a profit between 0 and |αs − αb|.
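To unpack the arithmetic of a divide-and-conquer deviation, consider an illustrative configuration (the numbers are chosen here for exposition and are not from the chapter): αs = 0.7, αb = 0.3, and platform A charging rs = 0.7 and rb = 0.1. Platform B can set

\[ r_s^B = r_s - \alpha_s - \varepsilon = -\varepsilon \quad (\text{divide}), \qquad r_b^B = \alpha_b + \inf\{0, r_b\} = 0.3 \quad (\text{conquer}). \]

A seller who expects all buyers to stay with A earns αs − rs = 0 there but −r_s^B = ε > 0 when alone on B, so all sellers register with B; buyers then face a platform A without sellers and with a positive fee, so they are willing to pay up to αb = 0.3 to join B. Against these particular prices the deviation yields platform B roughly αb = 0.3 per unit mass of agents, which illustrates why equilibrium prices must be set so that such strategies are unprofitable.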
Armstrong (2006) and Caillaud and Jullien (2003) thus lead to very different conclusions and equilibrium strategies, where the difference lies mostly in the intensity of the impact of indirect network effects on demand. In Armstrong's model, the demand is smooth and externalities raise the elasticity of the residual demand, as in the Rochet and Tirole (2003) canonical model. In Caillaud and Jullien (2001, 2003), externalities generate strong bandwagon effects and non-convexities.
The model of Caillaud and Jullien also has the particularity that all the value is derived from pure intermediation. Surprisingly, it is this feature that allows the active platform to generate a positive profit. Jullien (2011) considers the same model but assumes that there is an intrinsic value to participating in a platform even if no member of the other side is present. If this "stand-alone" value is large enough, then the analysis of divide-and-conquer strategies is the same except that the "conquer" price is αb + rb instead of αb + inf{0, rb} (the reason is that buyers would pay rb > 0 even if there were no seller on the platform). This simple twist has a drastic implication since it implies that
with homogeneous competing platforms there does not exist a pure-strategy equilibrium. The reason is that for any profit rb + rs that a platform can obtain in equilibrium, its competitor can obtain rb + rs + |αs − αb| with an adequate divide-and-conquer strategy.
What can we retain from this? First, as is often the case with network effects, the existence of indirect network effects in BtoB marketplaces tends to intensify competition, at least when intermediation is only one part of the activity of platforms. A difference with the monopoly case is that the opportunity cost of attracting a member to the platform has another component beyond the value created on the other side: it is the "competitive hedge" due to the fact that attracting a member from the other platform deprives the latter of that very same value. Thus in a competitive context the effect of network externalities is doubled, which is what explains the non-existence result of Jullien (2011).
Second, where the business models are fundamentally based on pure intermediation activities, it may be very difficult to compete with well-established platforms that benefit from reputation effects. This conclusion, for instance, sheds some light on the high degree of concentration that characterizes the market for search engines (though this should be weighed against the multihoming considerations discussed below).23 New entrants may then rely on alternative strategies involving horizontal differentiation and multihoming by some agents, as discussed below.
Caillaud and Jullien (2001, 2003) also illustrate the fact that the pattern of prices in two-sided markets may exhibit implicit cross-subsidies. As a result, even two sides that have similar characteristics may face very different prices, a feature that is referred to as price skewness by Rochet and Tirole (2003).24 Unlike the case of a one-sided market, where competition tends to align prices with marginal costs, competition between two-sided platforms tends to exacerbate the skewness (p. 176) of prices and leads to permanent departures of prices from marginal costs (or even from opportunity costs as defined before). The reason is that competition raises the incentives to gain a competitive advantage by courting one side.
Finally, it is worth mentioning that there is much to learn about the case where the value of transactions is heterogeneous within each side.25 Ambrus and Argenziano (2009) provide the insight that when agents differ in their usage value, size on one side may act as a screening device on the other side. This opens the possibility of endogenous differentiation whereby multiple platforms with very different participation patterns coexist, the equilibrium allocation being supported by endogenous screening.
4.3. Multihoming
The previous discussion assumes that agents register with only one platform, which appears unrealistic for many BtoB platforms. For instance, in e-tendering, buyers typically single-home but sellers may be active on several platforms simultaneously. More generally, platforms offering supply-chain management may have only one side single-homing, depending on which end of the supply chain is targeted by the platform's process.
When one side can register with both platforms and the other cannot, a particular form of competition emerges that is referred to as a "competitive bottleneck" by Armstrong (2006), where the term is borrowed from the debate over termination charges in mobile telecommunications. Indeed, platforms do not really compete to attract multihoming agents, as the "products" offered to them are non-rival. To see this, suppose sellers are multihoming and buyers are not. Then a platform can be viewed as providing exclusive access to its registered buyers: from the perspective of sellers it is a bottleneck. Thus it can charge sellers the full value of accessing its population of registered buyers. There is no rivalry on the sellers' side, as access to one platform is not substitutable with access to another platform.26 The profit made on the seller side is, however, competed away on single-homers. Indeed, following the logic developed in the previous sections, the opportunity cost of attracting a buyer discounts the profit that the buyer allows the platform to generate on the seller side. The equilibrium prices on the buyer side will thus be lowered to an extent that depends on the rate at which costs are passed on to buyers in equilibrium. This effect, according to which higher revenue per user on one side translates into lower prices on the other side, is what is referred to as the waterbed effect in the telecommunications literature and sometimes as the seesaw effect in the literature on two-sided markets.27
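Schematically, and in the chapter's notation (this merely restates the bottleneck logic rather than adding a result): with sellers multihoming and buyers single-homing, a platform with Nb registered buyers can extract from each seller the full value of access, while competition for buyers pushes the buyer-side price toward the corresponding opportunity cost:

\[ r_s = \alpha_s N_b \quad \text{(full extraction on the bottleneck side)}, \qquad \text{opportunity cost of a buyer} = c_b - \alpha_s N_s. \]

The rents collected on the multihoming side are thus passed through, at least partly, as lower fees on the single-homing side, which is the waterbed (or seesaw) effect just described.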
Notice that while, in comparison to single-homing, multihoming leads to higher prices for the multihoming side, there is no clear comparison for the single-homing side. The reason is that while the revenue generated by an additional single-homing agent is higher, the additional "competitive hedge" effect discussed in the previous section disappears with vanishing competition on the multihoming side. (p. 177) Both Armstrong (2006) and Caillaud and Jullien (2003), however, conclude that profits are higher when there is multihoming on one side. Armstrong and Wright (2007) show that a competitive bottleneck may emerge endogenously when differentiation is low on one side, but its sustainability is undermined by the possibility of offering exclusive contracts to multihomers.
4.4. Transaction Fees
The results discussed so far concern competition with registration fees, which applies to some marketplaces but not all. Caillaud and Jullien (2003) analyze the outcome of Bertrand-type competition with transaction fees in a context where transaction fees are non-distortionary. Much remains to be done for situations where transaction fees have distortionary effects.
One main insight follows from the remark that transaction fees act as a form of risk sharing between the platform and the agents, because the payment is made only in case of a transaction, while the platform would bear the full cost if no transaction occurs. Therefore they are natural tools for competing in situations involving the chicken-and-egg problem that characterizes two-sided markets. It is easier to attract a member facing uncertainty about the other side's participation if this member's payment is contingent on the other side's participation.28
In the context of Bertrand competition with single-homing, platforms would charge maximal transaction fees and
subsidize participation. The competitive effect is then sufficient for the equilibrium to be efficient with zero profit.29
In the context of multihoming, the conclusions are more complex. Typically, multihoming modifies the analysis of divide-and-conquer strategies. Indeed, it is easier to convince an agent on one side to join, since he does not need to de-register from the other platform. But there is a weaker bandwagon effect, and it becomes more difficult to convince the other side to de-register from the competitor. In this context transaction fees play two roles. First, they raise the subsidy that can be paid upfront in order to attract an agent, by deferring the revenue to the transaction stage. Second, they are competitive tools, since agents may try to shop for the lowest transaction fees. As a consequence, two alternative strategies can be used by platforms to conquer the market, depending on the prices of the competing platform. One strategy aims at gaining exclusivity through low registration fees or subsidies and generating revenue with high transaction fees. The other strategy consists in inducing multihoming but attracting transactions with low transaction fees.
To summarize, the fact that multihoming agents try to concentrate their activity on the low-transaction-fee platform creates two levels of competition. Intermediaries compete to attract registrations, and in a second stage they compete to attract transactions by multihomers. This competition tends to reduce transaction fees. One should thus expect platforms to charge lower transaction fees if there is a large extent of multihoming. In the context of the Bertrand competition analyzed by Caillaud and Jullien (2003), the consequence is that an efficient equilibrium exists (p. 178) and involves zero profit if the intermediation technologies are identical.30 With different and imperfect technologies, however, profits will be positive, unlike in the single-homing case.31 Moreover, this efficient equilibrium may coexist with inefficient competitive bottleneck equilibria.
5. Design and Other Issues
While the two-sided market literature has to a large extent focused on pricing issues with indirect network effects, there are clearly other important dimensions for BtoB platforms. The literature is still in its infancy, but some contributions address other issues.
5.1. Sellers’ Rivalry
Concerning the pricing strategies of a platform, the first obvious issue is that most of the literature abstracts from externalities between members of the same side. In the context of BtoB marketplaces the assumption is questionable, as sellers of substitutable products will compete on the market, which generates negative pecuniary externalities. An analysis of two-sided markets with negative network effects within sides and positive externalities between sides is provided by Belleflamme and Toulemonde (2009), who show that, when they are neither too large nor too small, negative externalities within sides impede the ability of divide-and-conquer strategies to overcome the chicken-and-egg problem. This suggests that it may be more difficult for a potential entrant to find its way to successful entry. Baye and Morgan (2001) focus more explicitly on BtoB platforms and show that when sellers offer substitutable goods and are the source of revenue of the platform, the platform will restrict entry of sellers so as to reduce competition on the platform and preserve a positive margin for sellers (see also White, 2010).32 Hagiu (2009) shows that, as a result of reduced sellers' competition, an increase in consumers' preference for variety raises the relative contribution of sellers to a monopoly platform's revenue.33 The paper by Nocke, Peitz and Stahl (2007) discussed below also allows for sellers' rivalry.
5.2. Tying
Traditional business strategies need to be reconsidered in the context of two-sided markets. For instance, tying has attracted some attention, in part as a consequence of the debates over the recent antitrust procedures surrounding Microsoft's tying (p. 179) strategies.34 First, the traditional analysis of tying as an exclusionary practice (Whinston, 1990) needs to be reconsidered, as the underlying strategic effects are complex (Farhi and Hagiu, 2008; Weyl, 2008). Second, tying may have beneficial effects specific to two-sided markets.35 For instance, Choi (2010) points to the fact that, with multihoming, tying may raise the level of participation by inducing more agents to multihome, raising the global volume of transactions in the market.36 Amelio and Jullien (2007) view tying as a substitute for monetary subsidies when the latter are not feasible, thereby helping the platform to coordinate the two sides and to be more efficient in a competitive set-up, and analyze the strategic implications.37
5.3. Designing Intermediation Services
Tying is one instance of strategic decisions by platforms that relate to the design of the platform architecture as well as pricing strategies. The development of BtoB marketplaces has led to a burgeoning of innovation in design, and clearly design is an integral part of the business strategies adopted by platforms. Despite a large literature on matching design, there are few contributions on the interaction between design and pricing.38 Still, this seems to be an important avenue for future research, as the incentives of a platform to improve the efficiency of the intermediation process will depend on its ability to generate revenue from the improvement. Hence pricing models interact with technical choices in a non-trivial way. For instance, Hagiu and Jullien (2011) provide several rationales for an intermediary offering directed matching services to direct buyers toward sub-optimal matches, and analyze the impact of pricing models on this phenomenon. Eliaz and Spiegler (2010) and de Cornière (2010) identify a mechanism in a sequential search model whereby a degradation of the quality of the pool of listed sellers leads to higher prices charged by sellers, more clicks, and more revenue for the platform. For search engines, White (2009) analyzes the optimal mix of paying and non-paying listed sellers when there are pecuniary externalities between sellers. All these contributions provide micro-foundations for design choices by platforms that affect the perceived quality of the intermediation service on each side in different ways. At a more theoretical level, Bakos and Katsamakas (2008) point to the effect of vertical integration on the incentives of a platform to bias the design in favor of one side or the other.
5.4. Vertical Integration and Ownership
Vertical integration is a common feature of BtoB platforms and one that is important for two reasons.39 First, vertical integration is one way to reach a critical size on one side, thereby gaining enough credibility to convince other agents to join the (p. 180) platform. Second, vertical integration leads to internalization of the surplus of the integrated buyers or sellers. It thus affects the pricing strategy of the platform and may lead it to be more aggressive in attracting new members.40 Moreover, entry by integrated platforms may be more difficult to deter than entry by independent platforms, as shown for instance by Sülzle (2009). Another issue relates to the distribution of ownership, which is discussed by Nocke, Peitz and Stahl (2007), where it is shown that, for strong network effects, independent concentrated ownership dominates, in terms of social welfare, both dispersed ownership and vertical integration with a small club of sellers.
5.5. Sellers’ Investment
While most contributions assume that the products are exogenous, some discuss the link between the platform's strategy and sellers' quality choices. Belleflamme and Peitz (2010) analyze sellers' pre-affiliation investment with two competing platforms. They compare open-access platforms with for-profit pay platforms and conclude that investment incentives are higher in the latter case whenever sellers' investment raises consumer surplus to a sufficient extent (the precise meaning of "sufficient" depends on the single-homing/multihoming nature of participation on both sides). Hagiu (2009) shows that charging transaction fees may help a platform foster sellers' pre-affiliation investment incentives (the argument follows from Hagiu (2004), discussed in Section 3.2). Hagiu (2007) argues that a two-sided platform may outperform traditional buy-and-resell intermediation when sellers must invest to raise consumers' utility after the affiliation/wholesale decisions are made.
6. Conclusion
Online intermediaries can be seen as platforms where trading partners meet and interact. The literature on two-sided markets provides a useful perspective on business strategies by focusing on two-sided network externalities and their implications for tariffs and competition. Beyond the overall price level, it is the entire price structure that matters, and the literature helps in understanding how prices are affected by indirect network effects. A key lesson is that prices should and will involve some form of cross-subsidy. Typically, the platform should court the low-externality side more than the other. Moreover, unlike in one-sided activities, competition exacerbates the tendency toward cross-subsidization. Multihoming may improve efficiency, but has the potential adverse effect of softening competition.
Much remains to be understood about competition, in particular due to the lack of a tractable, well-articulated model of dynamic competition. In particular, the (p. 181) literature so far does not provide a clear view on barriers to entry. While the analysis of divide-and-conquer strategies suggests that there are opportunities for new entrants, these strategies may be excessively risky and not sustainable. Moreover, issues of reputation and coordination point to the existence of barriers to entry akin to those encountered in network competition.
As pointed out by Jullien (2011), the intensification of competition generated by indirect network effects suggests that there are particularly strong incentives for platforms to escape competition through differentiation and excessive market segmentation, although little is known about the determinants and the nature of platforms' product design. Among the various topics for future research mentioned in the text, the most exciting and novel concerns the linkage between design and business models.
To conclude, while the literature has been concerned with antitrust implications, it has delivered few concrete recommendations for policy intervention. One of the challenges for the coming years will then be to develop models helping policy makers deal with mergers and other antitrust issues in two-sided markets.
References
Ambrus A. and Argenziano R. (2009): “Network Markets and Consumer Coordination,” American Economic Journal:
Microeconomics 1(1):17–52.
Amelio A. and Jullien B. (2007): “Tying and Freebies in Two-Sided Markets,” IDEI Working Paper, forthcoming in
International Journal of Industrial Organization.
Anderson S. and Coate S. (2005): “Market Provision of Broadcasting: A Welfare Analysis,” Review of Economic
Studies, 72(4): 947–972.
Anderson S. and Gabszewicz J. (2006): “The Media and Advertising: A Tale of Two-Sided Markets,” in V. A.
Ginsburg and D. Throsby (eds.), Handbook on the Economics of Art and Culture, vol. 1: 567–614.
Armstrong M. (2002): “The Theory of Access Pricing and Interconnection,” in M. Cave, S. Majumdar and I.
Vogelsang (eds.), Handbook of Telecommunications Economics, North-Holland, 295–384.
Armstrong M. (2006): “Competition in Two-Sided Markets,” Rand Journal of Economics 37: 668–691.
Armstrong M. and Wright J. (2007): “Two-Sided Markets, Competitive Bottlenecks and Exclusive Contracts,”
Economic Theory 32: 353–380.
Bakos Y. and Brynjolfsson E. (1999): “Bundling Information Goods,” Management Science 45(12):1613–1630.
Bakos Y. and Katsamakas E. (2008): “Design and Ownership of Two-Sided Networks: Implications for Internet
Platforms,” Journal of Management Information Systems 25(2):171–202.
Bardey D. and Rochet J.C. (2010): “Competition Among Health Plans: A Two-Sided Market Approach,” Journal of
Economics & Management Strategy 19(2):435–451.
Baye M.R. and Morgan J. (2001): “Information Gatekeepers on the Internet and the Competitiveness of
Homogeneous Product Markets,” American Economic Review 91(3):454–474.
Belleflamme P. and Peitz M. (2010): “Platform Competition and Sellers Investment Incentives,” European Economic
Review 54(8):1059–1076.
Belleflamme P. and Toulemonde E. (2009): “Negative Intra-group Externalities in Two-sided Markets,” International
Economic Review 50(1):245–272.
Caillaud B. and Jullien B. (2001): "Competing Cybermediaries," European Economic Review (Papers & Proceedings) 45: 797–808.
Caillaud B. and Jullien B. (2003): “Chicken & Egg: Competition among Intermediation Service Providers,” Rand
Journal of Economics 34: 309–328.
Choi J.P. (2010): “Tying in Two-Sided Markets with Multi-homing,” The Journal of Industrial Economics 58(3):607–
626.
Crampes C., Haritchabalet C. and Jullien B. (2009): “Competition with Advertising Resources,” Journal of Industrial
Economics 57(1):7–31.
Damiano E. and Li H. (2007): “Price Discrimination and Efficient Matching,” Economic Theory 30: 243–263. (p.
184)
de Cornière A. (2010): “Targeting with Consumer Search: An Economic Analysis of Keyword Advertising,” mimeo,
Paris School of Economics.
Farhi E. and Hagiu A. (2008): “Strategic Interactions in Two-Sided Market Oligopolies,” mimeo, Harvard Business
School.
Gaudeul A. and Jullien B. (2001): "E-commerce: Quelques éléments d'économie industrielle," Revue Economique 52: 97–117; English version, "E-Commerce, Two-Sided Markets and Info-mediation," in E. Brousseau and N. Curien (eds.), Internet and Digital Economics, Cambridge University Press, 2007.
Hagiu A. (2006): “Optimal Pricing and Commitment in Two-Sided Markets,” The RAND Journal of Economics
37(3):720–737.
Hagiu A. (2007): “Merchant or Two-Sided Platforms,” Review of Network Economics 6: 115–133.
Hagiu A. (2009): “Two-Sided Platforms: Product Variety and Pricing Structures,” Journal of Economics and
Management Strategy 18: 1011–1043.
Hagiu A. and Jullien B. (2011): “Why Do Intermediares Divert Search?” Rand Journal of Economics 42(2): 337–362.
Innes R. and Sexton R. (1993): “Customer Coalitions, Monopoly Price Discrimination and Generic Entry
Deterrence,” European Economic Review 37: 1569–1597.
Jullien B. (2011): “Competition in Multi-Sided Markets: Divide and Conquer,” American Economic Journal:
Microeconomics 3: 1–35.
Jullien B. (2006): “Pricing and Other Business Strategies for e-Procurement Platforms,” in N. Dimitri, G. Spiga and G.
Spagnolo (eds.), Handbook of Procurement, Cambridge University Press.
Jullien B. (2007): “Two-Sided Markets and Electronic Intermediation,” in G. Illing and M. Peitz (eds.), Industrial
Organization and the Digital Economy, MIT Press.
Katz M. and Shapiro C. (1985): "Network Externalities, Competition and Compatibility," American Economic Review 75: 424–440.
Katz M. and Shapiro C. (1986): "Technology Adoption in the Presence of Network Externalities," Journal of Political Economy 94: 822–841.
Kaplan S. and Sawhney M. (2000): "B2B E-Commerce Hubs: Toward a Taxonomy of Business Models," Harvard Business Review 78(3): 97–103.
Lucking-Reiley D. and Spulber D. (2001): "Business-to-Business Electronic Commerce," Journal of Economic Perspectives 15: 55–68.
Nocke V., Peitz M., and Stahl K. (2007): “Platform Ownership,” Journal of the European Economic Association
5(6):1130–1160.
Peitz M. and Waelbroeck P. (2006): "Digital Music," in G. Illing and M. Peitz (eds.), Industrial Organization and the Digital Economy, MIT Press.
Rochet J.C. and Tirole J. (2003): “Platform Competition in Two-Sided Markets,” Journal of the European Economic
Association 1: 990–1029.
Rochet J.C. and Tirole J. (2006): “Two-Sided Markets: a Progress Report,” The RAND Journal of Economics 37: 645–
667.
Rochet J.C. and Tirole J. (2008): “Tying in Two-Sided Market and the Honor-all-Cards Rule,” International Journal of
Industrial Organization 26(6):1333–1347.
Rysman M. (2009): "The Economics of Two-Sided Markets," Journal of Economic Perspectives 23(3): 125–143.
Sülzle K. (2009): “Duopolistic Competition between Independent and Collaborative Business-to-Business
Marketplaces,” International Journal of Industrial Organization 27: 615–624.
Weyl E.G. (2008): “The Price Theory of Two-Sided Markets,” Harvard University. (p. 185)
Weyl E.G. (2010): “A Price Theory of Multi-sided Platforms.” American Economic Review 100(4): 1642–72.
Weyl G. and White A. (2010): “Imperfect Platform Competition: A General Framework” mimeo, Harvard University.
Whinston M.D. (1990): "Tying, Foreclosure, and Exclusion," The American Economic Review 80(4): 837–859.
Wright J. (2003): “Optimal Card Payment System,” European Economic Review 47, 587–617.
Yoo B., Choudhary V., and Mukhopadhyay T. (2007): "Electronic B2B Marketplaces with Different Ownership Structures," Management Science 53(6): 952–961. (p. 186)
Notes:
(1.) See the survey on electronic commerce in The Economist (February 2000).
(2.) 2008 E-commerce Multi-sector “E-Stats” Report, available at http://www.census.gov/econ/estats/.
(3.) The value is concentrated among six sectors: Transportation Equipment, Petroleum and Coal Products,
Chemical Products, Food Products, Computer and Electronic Products and Machinery Products.
(4.) Nextag and Kaboodle are examples of search engines proposing a comparison of products and prices across suppliers.
(5.) Examples are Sciquest, which provides procurement and supplier management processes for life sciences,
and BravoSolution, a general provider of supply management.
(6.) General presentations are Rochet and Tirole (2005), Jullien (2006), and Rysman (2009)
(7.) See the chapter “Reputation on the Internet” by L. Cabral in this handbook.
(8.) For detailed discussions, see the chapters in this handbook "Digitization of Retail Payments" by W. Bolt and S.
Chakravorti, “Home Videogame Platforms” by R. Lee, “Software Platforms” by A. Hagiu and “Advertising on the
Internet” by S. Anderson.
(9.) See the survey by Armstrong (2002) and the chapter “Mobile Telephony” by S. Hoernig and T. Valletti in this
handbook.
(10.) The alliance between Cornerbrand and Kazaa described by Peitz and Waelbroeck (2006) is a good example
of such a strategy, as each side benefits from it.
(11.) Since there has been notorious difficulty in providing a consensus on a formal general definition of a two-sided market, I shall not try to do so here.
(12.) The statement is valid for a large class of bargaining models, including models with asymmetric information
(see Rochet and Tirole, 2006).
(13.) See also Gaudeul and Jullien (2001) for a monopoly model along the same lines.
(14.) See Anderson and Gabszewicz (2006), Anderson and Coate (2005), Crampes et al. (2004), and the chapter
14 by S. Anderson in this handbook.
(15.) A particular form of economy of scope for electronic goods is a reduction in demand uncertainty (see Bakos
and Brynjolfsson, 1999).
(16.) See chapter 11 by J.P. Choi in this handbook.
(17.) With a cost per transaction, the opportunity cost is cb + cNs − αsNs, which may be larger or smaller than cb.
(18.) To see that let u be the value of a transaction and F(u) its cdf. Then
which derivative is 1 − F(u) the probability that the transaction occurs.
(19.) This conclusion extends to duopoly with single-homing.
(20.) As an illustration, while in 2005 the leading automotive e-procurement portal, Covisint, was relying on membership fees, its smaller competitor Partsforindustry was relying most extensively on volume-related payments
(Jullien, 2006).
(21.) The Rochet and Tirole (2003) model also has a unique equilibrium in duopoly.
(22.) The literature on competition between mobile telecommunication networks has introduced many of the
relevant concepts (see Armstrong (2002) for a review). See also the chapter “Mobile Telephony” by S.
Hoernig and T. Valletti in this handbook.
(23.) See chapter 9 by J. Moraga and M. Wildenbeest in this handbook.
(24.) This conclusion is corroborated by Jullien's treatment of a general competitive game between multisided
platforms (Jullien, 2000).
(25.) See the ongoing work by Weyl and White (2010).
(26.) This assumes away any resource constraint or other diminishing returns for sellers.
(27.) In the context of the Hotelling model discussed by Armstrong (2006), there is a one-to-one waterbed effect so
all profits from sellers are competed away on buyers.
(28.) The same insight holds for Weyl's concept of an insulating tariff, which may help lend credibility to this
concept in competitive situations.
(29.) With distortionary transaction fees and Bertrand competition, efficiency would not be achieved but there
would still be zero profits (see Jullien, 2007).
(30.) By this it is meant that if one platform fails to generate a particular transaction, the other will fail as well.
(31.) With an imperfect intermediation technology, a third strategy always generates positive profits: focusing on
agents who failed to complete a transaction on the competing platform and charging them a high transaction
fee, thereby exploiting the platform's last-resort position.
(32.) See chapter 9 by J. Moraga and M. Wildenbeest in this handbook.
(33.) Results are more ambiguous for competing platforms, for reasons similar to Belleflamme and Toulemonde
(2009).
(34.) See chapter 11 by J.P. Choi in this handbook.
(35.) See Rochet and Tirole (2004b) for a similar view on tying between two payment card systems.
(36.) An interesting example of efficient bundling is the ability to solve the issue of buyers’ moral hazard by
bundling a well-designed payment system with the matching service (see for example the BtoC website
PriceMinister).
(37.) Bundling may of course serve more traditional price discrimination purposes. The reader is referred to Jullien
(2006) for an informal discussion of bundling in the context of e-procurement.
(38.) A noticeable exception is Damiano and Li (2008).
(39.) These are usually referred to as biased marketplaces; see Kaplan and Sawhney (2000). As an example, Covisint
is jointly owned by the car manufacturers Daimler, General Motors, Ford, and Renault-Nissan.
(40.) See Yoo et al. (2007).
Bruno Jullien
Bruno Jullien is a Member of the Toulouse School of Economics and Research Director at CNRS (National Center for Scientific
Research).
Online versus Offline Competition
Oxford Handbooks Online
Online versus Offline Competition
Ethan Lieber and Chad Syverson
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Online Publication Date: Nov 2012
Subject: Economics and Finance, Economic Development
DOI: 10.1093/oxfordhb/9780195397840.013.0008
Abstract and Keywords
This article describes the nature of competition between online and offline retailing. It first introduces some basic
empirical facts that reflect the present state of online and offline competition, and then discusses the interplay of
online and offline markets. Internet users are higher income, more educated, and younger than non-Internet users.
Education is a sizeable determinant of who is online, even controlling for income. E-commerce enables new
distribution technologies that can decrease supply chain costs, improve service, or both. The Internet has
influenced the catalog of products available to consumers. The changes in demand- and supply-side fundamentals
that e-commerce brings can foment substantial shifts in market outcomes from their offline-only equilibrium. These
include prices, market shares, profitability, and the types of firm operating in the market. Online channels have yet
to fully establish themselves in some markets and are typically growing faster than bricks-and-mortar channels.
Keywords: competition, online retailing, offline retailing, e-commerce, Internet, prices, market shares, profitability
1. Introduction
Amazon is arguably one of the most successful online firms. As of this writing, its market value is more than $80
billion, 60 percent higher than the combined value of two large and successful offline retailers, Target and Kohl’s,
who have almost 2,900 stores between them.
Jeff Bezos conceived of Amazon as a business model with many potential advantages relative to a physical
operation. It held out the potential of lower inventory and distribution costs and reduced overhead. Consumers
could find the books (and later other products) they were looking for more easily, and a broader variety could be
offered for sale in the first place. It could accept and fulfill orders from almost any domestic location with equal
ease. And most purchases made on its site would be exempt from sales tax.
On the other hand, Bezos no doubt understood some limitations of online operations. Customers would have to wait
for their orders to be received, processed, and shipped. Because they couldn’t physically inspect a product before
ordering, Amazon would have to make its returns and redress processes transparent and reliable, and offer other
ways for consumers to learn as much about the product as possible before buying.
(p. 190) Amazon's entry into the bookselling market posed strategic questions for brick-and-mortar sellers like
Barnes & Noble. How should they respond to this new online channel? Should they change prices, product
offerings, or capacity? Start their own online operation? If so, how much would this cannibalize their offline sales?
How closely would their customers see ordering from the upstart in Seattle as a substitute for visiting their stores?1
The choices made by these firms and consumers’ responses to them—actions driven by the changes in market
fundamentals wrought by the diffusion of e-commerce technologies into bookselling—changed the structure of the
market. As we now know, Amazon is the largest single bookseller (and sells many other products as well). Barnes &
Noble, while still large, has seen its market share diminish considerably. Borders is out of business. There are also
many fewer bricks-and-mortar specialty bookshops in the industry and their prices are lower.
In this chapter, we discuss the nature of competition between a market's online and offline segments. We take a
broad view rather than focus on a specific case study, but many of the elements that drove the evolution of retail
bookselling as just described are present more generally.
We organize our discussion as follows. The next section lays out some basic facts about the online sales channel:
its size relative to offline sales; its growth rate; the heterogeneity in online sales intensity across different sectors,
industries, and firms; and the characteristics of consumers who buy online. Section 3 discusses how markets’
online channels are economically different due to e-commerce's effects on market demand and supply
fundamentals. Section 4 explores how changes in these fundamentals due to the introduction of an online sales
channel might be expected to change equilibrium market outcomes. Section 5 investigates various strategic
implications of dual-channeled markets for firms. A short concluding section follows.
2. Some Facts
Before discussing the interplay of online and offline markets, we lay out some basic empirical facts that reflect the
present state of online and offline competition.
2.1. How Large Are Online Sales Relative to Offline Sales?
To take the broadest possible look at the data, it is useful to start with the comprehensive e-commerce information
collected by the U.S. Census Bureau.2 The Census separately tracks online- and offline-related sales activity in
four major sectors: (p. 191) manufacturing, wholesale, retail, and a select set of services. The data are
summarized in Table 8.1. In 2008, total e-commerce-related sales in these sectors were $3.7 trillion. Offline sales
were $18.7 trillion. Therefore transactions using some sort of online channel accounted for just over 16 percent of
all sales. Not surprisingly, the online channel is growing faster: nominal e-commerce sales grew by more than 120
percent between 2002 and 2008, while nominal offline sales grew by only 30 percent. As a greater fraction of the
population goes online—and uses the Internet more intensively while doing so—e-commerce's share will almost
surely rise.
The relative contribution of online-based sales activity varies considerably across sectors, however. Looking again
at 2008, e-commerce accounted for 39 percent of sales in manufacturing and 21 percent in wholesale trade, but
only 3.6 percent in retail and 2.1 percent in services. If we make a simple but broadly accurate classification of
deeming manufacturing and wholesale sales as business-to-business (B2B), and retail and services as business-to-consumer (B2C), online sales are considerably more salient in relative terms in B2B sales than in B2C markets.
Because total B2B and B2C sales (thus classified) are roughly equal in size, the vast majority of online sales, 92
percent, are B2B related.3 That said, B2C e-commerce is growing faster: it rose by 174 percent in nominal terms
between 2002 and 2008, compared to the 118 percent growth seen in B2B sectors. In terms of shares, e-commerce-related sales in B2B sectors grew by about half (from 19 to 29 percent) from (p. 192) 2002 to 2008,
while more than doubling (from 1.3 to 2.7 percent) in B2C sectors over the same period.4
Table 8.1 Dollar Value of Commerce by Sector and Type ($ billions)

Sector          Type                       2002        2008    Percent gain, 2002–2008
Manufacturing   E-commerce               751.99     2154.48    186.5
                Offline                 3168.65     3331.78      5.2
                Fraction e-commerce       0.192       0.393
Wholesale       E-commerce               806.59     1262.37     56.5
                Offline                 3345.01     4853.79     45.1
                Fraction e-commerce       0.194       0.206
Retail          E-commerce                44.93      141.89    215.8
                Offline                 3089.40     3817.27     23.6
                Fraction e-commerce       0.014       0.036
Service         E-commerce                59.97      146.49    144.3
                Offline                 4841.03     6700.97     38.4
                Fraction e-commerce       0.012       0.021
Total           E-commerce             1,663.47    3,705.23    122.7
                Offline               14,444.09   18,703.81     29.5
                Fraction e-commerce       0.103       0.165
Notes: This table shows the composition of sector sales by e-commerce status. Data are from the U.S. Census
E-commerce Reports (available at 〈http://www.census.gov/econ/estats/〉). See text for definition of e-commerce
sales.
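The headline shares and growth rates quoted in the text follow directly from the Table 8.1 figures. The following minimal sketch (ours, not part of the chapter or the Census program) reproduces them:

```python
# A minimal sketch (ours, not the authors') reproducing the headline shares
# and growth rates in the text from the Table 8.1 figures ($ billions).
sales = {
    # sector: (e-commerce 2002, e-commerce 2008, offline 2002, offline 2008)
    "Manufacturing": (751.99, 2154.48, 3168.65, 3331.78),
    "Wholesale":     (806.59, 1262.37, 3345.01, 4853.79),
    "Retail":        (44.93,   141.89, 3089.40, 3817.27),
    "Service":       (59.97,   146.49, 4841.03, 6700.97),
}

ecom02 = sum(v[0] for v in sales.values())
ecom08 = sum(v[1] for v in sales.values())
off02  = sum(v[2] for v in sales.values())
off08  = sum(v[3] for v in sales.values())

share08 = ecom08 / (ecom08 + off08)   # e-commerce share of all 2008 sales
growth_ecom = ecom08 / ecom02 - 1     # nominal online growth, 2002-2008
growth_off  = off08 / off02 - 1       # nominal offline growth, 2002-2008

print(f"{share08:.3f}")       # ~0.165, "just over 16 percent"
print(f"{growth_ecom:.1%}")   # ~122.7%
print(f"{growth_off:.1%}")    # ~29.5%
```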
When considering the predominance of B2B e-commerce, it is helpful to keep in mind that the data classify as e-commerce activity not just transactions conducted over open markets like the Internet but also sales mediated via
proprietary networks. Within many B2B sectors, the use of Electronic Data Interchange as a means to
conduct business was already common before the expansion of the Internet as a sales channel during the mid
1990s. While some research has looked at the use of less open networks (e.g. Mukhopadhyay, Kekre, and
Kalathur, 1995), the academic literature has focused on open-network commerce much more extensively. We
believe that much of the economics of the more B2C-oriented literature discussed in this paper applies equally or
nearly as well to B2B settings. Still, it is useful to keep in mind the somewhat distinct focal points of the data and the
literature.
2.2. Who Sells Online?
In addition to the variation in online sales intensity across broad sectors, there is also considerable heterogeneity
within sectors. Within manufacturing, the share of online-related sales ranges from 21 percent in Leather and Allied
Products to 54 percent in Transportation Equipment. In retail, less than one third of one percent of sales at Food
and Beverage stores are online; on the other hand, online sales account for 47 percent of all sales in the
“Electronic Shopping and Mail-Order Houses” industry (separately classified in the NAICS taxonomy as a 4-digit
industry). Similar diversity holds across industries in the wholesale and service sectors.
Differences in the relative size of online sales across more narrowly defined industries can arise from multiple
sources. Certain personal and business services (e.g. plumbing, dentistry, copier machine repair) are inherently
unsuited for online sales, though some logistical aspects of these businesses, such as advertising and billing, can
be conducted online. Likewise, consumer goods that are typically consumed immediately after production or
otherwise difficult to deliver with a delay (e.g., food at restaurants or gasoline) are also rarely sold online.
In an attempt to explain the heterogeneity in the online channel's share of sales across manufacturing industries,
we compared an industry's e-commerce sales share to a number of plausible drivers of this share. These include
the dollar value per ton of weight of the industry's output (a measure of the transportability of the product; we use
its logarithm), R&D expenditures as a fraction of sales (a proxy for how “high-tech” the industry is), logged total
industry sales (to capture industry size), and an index of physical product differentiation within the industry. (All
variables were measured at the 3-digit NAICS level.5) We did not find clear connections of industries’ e-commerce
sales shares to these potential drivers in either raw pairwise correlations or in a regression framework, though our
small sample (p. 193) size makes inference difficult. The tightest link was between e-commerce intensity and the
logged value per ton of the industry's output. A one-standard-deviation increase in the latter was related to a
roughly half-standard-deviation increase in e-commerce's sales share. The statistical significance of this
connection was marginal (the p-value is 0.101), however.
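As a hedged illustration of this exercise, a sketch along the following lines could be used; this is not the authors' code, and the file and column names (ecom_share, value_per_ton, rd_share, sales, diff_index) are placeholders for the 3-digit NAICS variables described above.

```python
# Hedged sketch of the industry-level exercise described in the text; file and
# column names are placeholders rather than the chapter's actual data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

ind = pd.read_csv("naics3_manufacturing.csv")  # hypothetical industry file
ind["log_value_per_ton"] = np.log(ind["value_per_ton"])
ind["log_sales"] = np.log(ind["sales"])

# Raw pairwise correlations with the e-commerce sales share.
print(ind[["ecom_share", "log_value_per_ton", "rd_share",
           "log_sales", "diff_index"]].corr()["ecom_share"])

# Regression framework; with roughly 20 three-digit manufacturing industries,
# precision is necessarily limited, as the text notes.
fit = smf.ols(
    "ecom_share ~ log_value_per_ton + rd_share + log_sales + diff_index",
    data=ind,
).fit()
print(fit.summary())
```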
Forman et al. (2003) study sources of differences in online sales activity across firms by investigating commercial
firms’ investments in e-commerce capabilities. They do so using the Harte Hanks Market Intelligence CI Technology
database from June 1998 through December of 2000, which contains information on technology use for more than
300,000 establishments. The authors use this data to classify investments in e-commerce capabilities into two
categories: participation and enhancement. The former involves developing basic communications capabilities like
email, maintaining an active website, and allowing passive document sharing. Enhancement involves adopting
technologies that alter internal operations or lead to new services. They found that most firms, around 90 percent,
made some sort of technology investment. However, only a small fraction (12 percent) adopted Internet
technologies that fell into the enhancement category. So while most firms adopted Internet technologies, only a few
made investments that would fundamentally change their business.6
2.3. Who Buys Online?
We use data from the 2005 Forrester Research Technographics survey, a representative survey of North
Americans that asks about respondents’ attitudes toward and use of technology, to form an image of what online
shoppers look like.
We first look at who uses the Internet in any regular capacity (not necessarily to shop). We run a probit regression
of an indicator for Internet use by the respondent during the previous year on a number of demographic variables.
The estimated marginal effects are in Table 8.2, column 1. By the time of the 2005 survey, more than 75 percent of
the sample reported being online, so the results do not simply reflect the attributes of a small number of
technologically savvy early adopters.
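A minimal sketch of such a probit follows, assuming a respondent-level extract with the demographic indicators described in this section; the file and variable names are placeholders, not Forrester's or the authors'.

```python
# Minimal sketch (assumed file and variable names) of the Table 8.2, column 1
# exercise: a probit of an indicator for any Internet use in the past year on
# household demographics, reported as average marginal effects.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("technographics_2005.csv")  # hypothetical survey extract

formula = (
    "use_internet ~ black + asian + other_race + hispanic + male"
    " + inc_lt20k + inc_20_30k + inc_30_50k + inc_50_70k + inc_70_90k + inc_90_125k"
    " + fhead_lt_hs + fhead_college + mhead_lt_hs + mhead_college"
    " + age + age2_over_1000"
)

probit_res = smf.probit(formula, data=df).fit()

# Marginal effects comparable to the entries reported in Table 8.2, column 1.
print(probit_res.get_margeff().summary())
```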
Internet users are higher-income, more educated, and younger than non-Internet users. The coefficients on the
indicators for the survey's household income categories imply that having annual income below $20,000 is
associated with a 22 percentage point smaller probability of being online than being in a household with an income
over $125,000, the excluded group in the regression. Internet use increases monotonically with income until the
$70,000–90,000 range. Additional income seems to have little role in explaining Internet use after that threshold.
Education is a sizeable determinant of who is online, even controlling for income. Relative to having a high school
degree (the excluded category), not having graduated from high school reduces the probability of using the
Internet by 8 to 9 percentage points (we include categorical education variables for both the (p. 194) (p. 195)
female and male household heads), while having a college degree raises it by 6 to 8 points.
Table 8.2 Demographics and Probability of Using Internet and Purchasing Online

                                                                 Use Internet          Purchase Online
Respondent's race is Black                                       −0.039 (0.007)***     −0.106 (0.009)***
Respondent's race is Asian                                        0.029 (0.014)**       0.010 (0.018)
Respondent's race is other                                        0.005 (0.015)        −0.002 (0.020)
Respondent is Hispanic                                            0.005 (0.009)        −0.002 (0.012)
Respondent is Male                                               −0.005 (0.005)        −0.009 (0.006)
Household income < $20K                                          −0.217 (0.015)***     −0.309 (0.011)***
$20K < household income ≤ $30K                                   −0.134 (0.013)***     −0.207 (0.012)***
$30K < household income ≤ $50K                                   −0.085 (0.011)***     −0.133 (0.011)***
$50K < household income ≤ $70K                                   −0.043 (0.011)***     −0.085 (0.011)***
$70K < household income ≤ $90K                                   −0.004 (0.010)        −0.038 (0.011)***
$90K < household income ≤ $125K                                  −0.017 (0.010)        −0.043 (0.011)***
Female head of household's education is less than high school    −0.081 (0.009)***     −0.109 (0.012)***
Female head of household's education is college                   0.063 (0.005)***      0.083 (0.006)***
Male head of household's education is less than high school      −0.091 (0.008)***     −0.134 (0.010)***
Male head of household's education is college                     0.084 (0.004)***      0.109 (0.006)***
Age                                                              −0.004 (0.001)***     −0.003 (0.001)**
Age²/1000                                                        −0.025 (0.007)***     −0.085 (0.011)***
Additional income and family structure controls                   X                     X
Fraction of sample responding yes                                  0.763                 0.509
N                                                                  54,320                54,320
Pseudo-R2                                                          0.24                  0.196

Notes: This table shows the estimates from probit regressions of indicators for households using the Internet
and making purchases online on household demographics. The sample includes U.S. households in the 2005
Forrester Research Technographics survey. Standard errors in parentheses.
*** p < 0.01, ** p < 0.05, * p < 0.10.
Not surprisingly, the propensity to be online declines with age. The coefficient on the square of age is negative and
significant, so the marginal effect grows slightly with age. For example, a 35-year-old is 5.5 percentage points less
likely to be online than a 25-year-old, while a 60-year-old is 6.8 percentage points less likely than a 50-year-old to
use the Internet.
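These comparisons follow directly from the Table 8.2 marginal effects on age (−0.004) and age²/1000 (−0.025), applied linearly as an approximation:

\[
\Delta_{25\rightarrow 35} \approx -0.004\,(35-25) - 0.025\,\frac{35^{2}-25^{2}}{1000} = -0.040 - 0.015 = -0.055,
\]
\[
\Delta_{50\rightarrow 60} \approx -0.004\,(60-50) - 0.025\,\frac{60^{2}-50^{2}}{1000} = -0.040 - 0.0275 \approx -0.068,
\]

that is, the 5.5 and 6.8 percentage point gaps cited above.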
Race also explains some variation in Internet use, though the size of the marginal effect is modest. Blacks are
about 4 percentage points less likely to be online than Whites, while Asians are three percentage points more
likely. Hispanics are online at the same rate as Whites.
Gender does not seem to be a factor in explaining Internet use.
The results in column 2 of Table 8.2 look at online purchasing behavior per se. The column shows the marginal
effects of a probit regression on whether the survey respondent reported making an online purchase within the last
year. The qualitative patterns estimated are similar to those for the probit on Internet use, though many marginal
effects have larger magnitudes. So while a low income person (household income less than $20,000 per year) is
about 22 percentage points less likely to be online than someone from a household making $125,000 or more, they
are 31 percentage points less likely to actually buy something online. Similarly, not having a high school diploma
reduces the probability of online purchases by 11 to 13 percentage points relative to having a diploma (as
opposed to an 8 to 9 percentage point effect on Internet use), and having a college degree now raises it by 8 to 11
percentage points (as opposed to 6 to 8 points for use). Age effects are also larger, now being in the 8 to 13
percentage point range per 10 years, depending on the ages being compared, as the magnitude of the age effect
is still convex. While Blacks were 4 percentage points less likely to be online, they are about 11 percentage points
less likely to make purchases once online. On the other hand, while Asians were more likely to be online than
Whites and Hispanics, they are not significantly more likely to report having bought goods or services online.
(p. 196) Though not shown, we also ran regressions of purchases conditional on Internet use. The results are
very similar to the coefficients from the second column. This indicates that selection based on who uses the
Internet is not driving the patterns of who purchases products online.
These results are informative and largely in line with what we suspect are many readers’ priors. But they reflect
overall online purchasing likelihoods, not the determinants of whether consumers, when buying a particular
product, choose to do so via online or offline channels. However, the Technographics survey collects additional
information on the method of purchase for specific types of products. We can make such comparisons in this case.
We investigate consumers’ behavior regarding a set of financial products: auto loans, credit cards, mortgages and
home equity loans, auto and life insurance, and checking accounts. The survey asks both whether each of these
products were researched online or offline prior to purchase, and whether any purchase was made online or
offline. Table 8.3 reports the results. Column 1 of Table 8.3 simply reprints, for the sake of comparison, the results
from column 2 of Table 8.2 regarding whether the respondent made any purchase online within the past 12
months. Columns 2 and 3 of Table 8.3 report analogous results for probits on whether the respondent bought any
of the particular financial products listed above online within the past year. The results in column 2 are not
conditional on the respondent having reported that they researched such financial products online; those in
column 3 use the subsample of respondents reporting having researched those product types online. The results
from both samples are similar.
Many of the qualitative patterns seen for online purchases in general are observed for financial products in
particular, but there are some interesting differences. The effect of age is still negative, but is now concave in
magnitude rather than convex. And while having a college degree is associated with a significantly higher
probability of buying something online, it has a much smaller and insignificant (and in the case of the female head
of household, negative) role in financial products. Most striking are the results on race. While Blacks are 11
percentage points less likely to purchase products online than Whites, they are 1.5 percentage points more likely
to buy financial products online. Not only is this effect in the opposite direction of the overall results, it is almost as
large in magnitude in relative terms.7 Asian and Hispanic respondents are similarly more likely (economically and
statistically) to buy financial products online than Whites, even though they did not exhibit statistically different
patterns for overall online purchases. We speculate this differential racial pattern for financial products may reflect
minorities’ concerns about discrimination in financial product markets, but in the absence of additional evidence,
we cannot really know.
Finally, we look at changes in consumers’ propensity to buy specific products online in Table 8.4. The second
column of the table lists, for a number of product categories that we can follow in the Forrester Technographics
survey from 2002 (p. 197) (p. 198) to 2007, the five-year growth rate in consumers’ reported frequency of
buying the product online. The third column shows for reference the fraction of consumers reporting having bought
the product online in the past year. Auto insurance, one of the financial products we just discussed, saw the fastest
growth in online purchases, nearly tripling between 2002 and 2007 (though from an initially small level). Many of
the “traditional” online products (if there is such a thing after only about 15 years of existence of e-commerce)—
books, computer hardware, airline tickets, and so on—saw more modest but still substantial growth.8 While the
growth rate of online purchases for a product is negatively correlated with its 2002 level, the correlation is modest
(ρ = -0.13) and not significantly different from zero. Thus it is not the case that the fastest growing products were
those that had the slowest start.
Table 8.3 Probability of Purchasing Financial Products Online

                                                                                        Financial Products
                                                                 Any product           Unconditional         Conditional on research
Respondent's race is Black                                       −0.106 (0.009)***      0.015 (0.004)***      0.058 (0.013)***
Respondent's race is Asian                                        0.010 (0.018)         0.035 (0.009)***      0.107 (0.023)***
Respondent's race is other                                       −0.002 (0.020)         0.001 (0.008)         0.019 (0.023)
Respondent is Hispanic                                           −0.002 (0.012)         0.015 (0.006)**       0.054 (0.016)***
Respondent is Male                                               −0.009 (0.006)         0.015 (0.003)***      0.033 (0.008)***
Household income < $20K                                          −0.309 (0.011)***     −0.048 (0.004)***      0.107 (0.014)***
$20K < household income ≤ $30K                                   −0.207 (0.012)***     −0.026 (0.005)***      0.062 (0.015)***
$30K < household income ≤ $50K                                   −0.133 (0.011)***     −0.017 (0.005)***      0.049 (0.014)***
$50K < household income ≤ $70K                                   −0.085 (0.011)***     −0.012 (0.005)***      0.035 (0.013)***
$70K < household income ≤ $90K                                   −0.038 (0.011)***     −0.003 (0.005)         0.008 (0.013)
$90K < household income ≤ $125K                                  −0.043 (0.011)***     −0.006 (0.005)         0.016 (0.013)
Female head of household's education is less than high school    −0.109 (0.012)***     −0.011 (0.005)**       0.014 (0.016)
Female head of household's education is college                   0.083 (0.006)***     −0.005 (0.003)*        0.013 (0.008)
Male head of household's education is less than high school      −0.134 (0.010)***     −0.021 (0.004)***      0.051 (0.012)***
Male head of household's education is college                     0.109 (0.006)***      0.003 (0.003)         0.008 (0.008)
Age                                                              −0.003 (0.001)**      −0.005 (0.001)***      0.006 (0.001)***
Age²/1000                                                        −0.085 (0.011)***      0.015 (0.005)***     −0.006 (0.014)
Fraction of sample responding yes                                  1                    0                      0.265
N                                                                  54,320               59,173                 21,474
Pseudo-R2                                                          0.196                0.097                  0.086

Notes: Estimates from probit regressions of household purchase indicators on demographics. Column 1 reprints
for comparison column 2 of Table 8.2. Columns 2 and 3 use an indicator for whether the household purchased
one or more of a set of financial products (see text for list) online in the previous year. Column 2 uses the entire
sample; column 3 conditions on the subsample that reports having researched financial products online. The
sample includes U.S. households in the 2005 Forrester Research Technographics survey. Standard errors in
parentheses.
*** p < 0.01, ** p < 0.05, * p < 0.10.
3. How is the Online Channel Different from the Offline Channel?
E-commerce technology can affect both demand and supply fundamentals of markets. On the demand side, e-commerce precludes potential customers from inspecting goods prior to purchase. Further, online sellers tend to
be newer firms and may have less brand or reputation capital to signal or bond quality. These factors can create
information asymmetries between buyers and sellers not present in offline purchases. Online sales also often
involve a delay between purchase and (p. 199) (p. 200) consumption when a product must be physically
delivered. At the same time, however, e-commerce technologies reduce consumer search costs, making it easier
to (virtually) compare different producers’ products and prices. On the supply side, e-commerce enables new
distribution technologies that can reduce supply chain costs, improve service, or both. Both the reduction in
consumer search costs and the new distribution technologies combine to change the geography of markets; space
can matter less online. Finally, and further combining both sides of the market, online sales face different tax
treatment than offline sales. We discuss each of these factors in turn in this section.
Table 8.4 Changes in Consumers’ Propensity to Buy Products Online, 2002–2007

Product category          Pct. growth in online purchase frequency, 2002–2007    Fraction buying product online, 2007
Car insurance                              183.7                                              0.076
Major appliances                           139.6                                              0.014
Consumer electronics                       125.7                                              0.092
Video games                                117.3                                              0.070
Sporting goods                             100.8                                              0.068
Footwear                                    89.8                                              0.116
Credit card                                 77.2                                              0.102
Apparel                                     73.6                                              0.253
Auto parts                                  64.3                                              0.039
Books                                       60.3                                              0.278
DVDs                                        58.6                                              0.148
Event tickets                               53.2                                              0.121
Music                                       48.3                                              0.156
Computer hardware                           43.0                                              0.076
Life insurance                              42.2                                              0.019
Toys                                        41.2                                              0.124
Hotel reservations                          31.1                                              0.151
Clothing accessories                        23.6                                              0.089
Airline tickets                             22.2                                              0.172
Tools/hardware                              21.0                                              0.045
Office supplies                             19.1                                              0.077
Software                                    12.7                                              0.113
Flowers                                     11.0                                              0.097
Car loans                                    6.3                                              0.024
Car rentals                                  6.2                                              0.077
Food/beverages                               1.1                                              0.041
Home equity loans                           −3.5                                              0.018
Mortgages                                  −25.4                                              0.025
Small appliances                           −32.8                                              0.022
Notes: The table reports both levels of and changes in the fraction of households reporting purchasing specific
goods and services online. The sample includes U.S. households in the 2002 and 2007 Forrester Research
Technographics surveys.
3.1. Asymmetric Information
Information asymmetries are larger when purchasing online for a few reasons. The most obvious is that the
consumer does not have the opportunity to physically examine the good at the point of purchase. This presents a
potential “lemons problem” where unobservably inferior varieties are selected into the online market. Another is
that because online retailing is relatively new, retailers have less brand capital than established traditional retailers.
A related factor involves some consumers’ concerns about the security of online transactions.
Because information asymmetries can lead to market inefficiencies, both buyers and sellers (particularly sellers of
high quality goods) have incentives to structure transactions and form market institutions to alleviate “lemons”
problems. Many examples of such efforts on the part of online sellers exist. Firms such as Zappos offer free
shipping on purchases and returns, which moves closer to making purchases conditional upon inspection.
However, the delay between ordering and consumption inherent to online commerce (more on this below) still
creates a wedge.
An alternative approach is to convey prior to purchase the information that would be gleaned by inspecting the
product. Garicano and Kaplan (2001) examine used cars sold via an online auction, Autodaq, and physical
auctions. They find little evidence of adverse selection or other informational asymmetries. They attribute this to
actions that Autodaq has taken in order to reduce information asymmetries. Besides offering extensive information
on each car's attributes and condition, something that the tools of e-commerce actually make easier, Autodaq
brokers arrangements between potential buyers and third-party inspection services. Jin and Kato (2007) examine
the market for collectable baseball cards and describe how the use of third-party certification has alleviated
information asymmetries. They find a large increase in the use of professional grading services when eBay began
being used for buying and selling baseball cards. Another form of disclosure is highlighted in Lewis (2009). Using
data from eBay Motors, he finds a positive correlation between the number of pictures that the seller posts and the
winning price of the auction. However, he does not find evidence that information voluntarily disclosed by the seller
affects the probability that the auction listing results in a sale.
(p. 201) Instead of telling consumers about the product itself, firms can try to establish a reputation for quality or
some other brand capital. Smith and Brynjolfsson (2001) use data from an online price comparison site to study the
online book market. They find that brand has a significant effect on consumer demand. Consumers are willing to
pay an extra $1.72 (the typical item price in the sample is about $50) to purchase from one of the big three online
book retailers: Amazon, Barnes & Noble, or Borders. There is evidence that the premium is due to perceived
reliability of the quality of bundled services, and shipping times in particular. In online auction markets, rating
systems allow even small sellers to build reputations, although Bajari and Hortaçsu (2004) conclude that the
evidence about whether a premium accrues to sellers with high ratings is ambiguous. Perhaps a cleaner metric of
the effect of reputation in such markets comes from the field experiment conducted by Resnick et al. (2006).
There, an experienced eBay seller with a very good feedback rating sold matched lots of postcards. A randomized
subset of the lots was sold by the experienced eBay seller, using its own identity. The other subset was sold by the
same seller, but using a new eBay identity without any buyer feedback history. The lots sold using the experienced
seller identity received winning bids that were approximately eight percent higher. More recently, Adams, Hosken,
and Newberry (2011) evaluate whether seller ratings affect how much buyers are willing to pay for Corvettes on
eBay Motors. Most of the previous research had dealt with items of small value where the role of reputation might
have a relatively modest influence. Collectable sports cars, however, are clearly high value items. In that market,
Adams et al. find very little (even negative) effect from seller ratings.
In another recent paper, Cabral and Hortaçsu (2010) use a different approach and find an important role for eBay's
seller reputation mechanism. They first run cross-sectional regressions of prices on seller ratings and obtain results
similar to Resnick et al. (2006). Next, using a panel of sellers to examine reputation effects over time, they find that
sellers’ first negative feedback drops their average sales growth rates from +5 percent to –8 percent. Further,
subsequent negative feedback arrives more quickly, and the seller becomes more likely to exit as her rating falls.
Outside of online auction markets, Waldfogel and Chen (2006) look at the interaction of branding online and
information about the company from a third party. They find that the rise of information intermediaries such as
BizRate leads to lower market shares for major branded online sellers like Amazon. Thus other sources of online
information may be a useful substitute for branding in some markets.
3.2. Delay Between Purchase and Consumption
While a lot of digital media that is purchased online can be used/consumed almost immediately after purchase
(assuming download times are not a factor), online purchases of physical goods typically involve delivery lags that
can range from hours to days and occasionally longer. Furthermore, these delayed-consumption (p. 202) items
are the kind of product most likely to be available in both online and brick-and-mortar stores, so the role of this lag
can be particularly salient when considering the interaction between a market's online and offline channels.
The traditional view of a delay between choice and consumption is as a waiting cost. This may be modeled as a
simple discounted future utility flow or as a discrete cost (e.g., Loginova 2009). In either case, this reduces the
expected utility from purchasing the good's online version. However, more behavioral explanations hold out the
possibility that, for some goods at least, the delay actually confers benefits to the buyer in the form of anticipation
of a pleasant consumption experience (e.g., Loewenstein 1987). This holds out the possibility that the impact of
delay on the relative advantage of online channels is ambiguous. One might think, though, that if delay conferred a
consistent benefit, offline sellers could rather easily offer their customers the option to delay consumption after
purchase. This, to say the least, is rarely seen in practice.
3.3. Reduced Consumer Search Costs
It is generally accepted that search costs online are lower than in offline markets. The rise of consumer information
sites, from price aggregation and comparison sites (aka shopbots) to product review and discussion forums, has
led to large decreases in consumers’ costs of gathering information. This has important implications for market
outcomes like prices, market shares, and profitability, as discussed in detail in Section 4.
Online search isn’t completely free; several papers have estimated positive but modest costs. Bajari and Hortaçsu
(2003), for example, find the implied price of entering an eBay auction to be $3.20. Brynjolfsson, Dick, and Smith
(2010) estimate that the maximum cost of viewing additional pages of search results on a books shopbot is $6.45.
Hong and Shum (2006) estimate the median consumer search cost for textbooks to be less than $3.00.
Nevertheless, while positive, these costs are less for most consumers than the value of the time it would take them
to travel to just one offline seller.
3.4. Lower Distribution Costs
E-commerce affects how goods get from producers to consumers. In some industries, the Internet has caused
disintermediation, a diminishment or sometimes the entire removal of links of the supply chain. For example,
between 1997 and 2007, the number of travel agency offices fell by about half, from 29,500 to 15,700. This was
accompanied by a large increase in consumers’ propensity to directly make travel arrangements—and buy airline
tickets in particular—using online technologies.9
E-commerce technologies have also brought changes in how sellers fulfill orders. Firms can quickly assess the
state of demand for their products and turn this (p. 203) information into orders sent to upstream wholesalers and
manufacturers. This has reduced the need for inventory holding. Retail inventory-to-sales ratios have dropped
from around 1.65 in 1992 to 1.34 in late 2010, and from 1.55 to 1.25 over the same period for “total business,” a
sum of the manufacturing, wholesale, and retail sectors.10
An example of how increased speed of communication along the supply chain affects distribution costs is a
practice referred to as “drop-shipping.” In drop-shipping, retailers transfer orders to wholesalers who then ship
directly to the consumer, bypassing the need for a retailer to physically handle the goods. This reduces distribution
costs. Online-only retailers in particular can have a minimal physical footprint when using drop-shipping; they only
need a virtual storefront to inform customers and take orders.11
Randall, Netessine, and Rudi (2006) study the determinants of supply chain choice. Markets where retailers are
more likely to adopt drop-shipping have greater product variety, a higher ratio of retailers to wholesalers, and
products that are large or heavy relative to their value. Product variety creates a motive for drop-shipping because
unexpected idiosyncrasies in variety-specific demand make it costly to maintain the correct inventory mix at the
retail level. It is easier to allow a wholesaler with a larger inventory to assume and diversify over some of this
inventory risk.12 Similar reasoning shows that drop-shipping is more advantageous when there is a high retailer to
wholesaler ratio. Relatively large or heavy products are more likely to be drop-shipped because the higher costs of
physically distributing such goods raises the savings from skipping the extra step of shipping from wholesaler to
retailer.
The Internet has also affected the catalog of products available to consumers. Bricks-and-mortar operations are
limited in the number of varieties they offer for sale at one time, as margins from low-volume varieties cannot cover
the fixed costs of storing them before sale. Online sellers, however, can aggregate demand for these low-volume
varieties over a larger geographic market (this will be discussed in Section 3.5 below). At the same time, they
typically have a lower fixed cost structure. The combination of these technological changes lets them offer a
greater variety of products for sale. (E-commerce's consumer search tools can also make it easier for consumers
of niche products to find sellers.) This “long tail” phenomenon has been studied by Brynjolfsson, Hu, and Smith
(2003) and others. Brynjolfsson et al. find that online book retailers offer 23 times as many titles as a typical
bricks-and-mortar firm like Barnes & Noble. They estimate that this greater product variety generates consumer
welfare gains that are 7 to 10 times larger than the gains from increased competition.
3.5. The Geography of Markets
E-commerce allows buyers to browse across potential online sellers more easily than is possible across offline
outlets. This fading of markets’ geographic boundaries is tied to the reduction in search costs in online channels.
Further, e-commerce (p. 204) technologies can reduce the costs of distributing products across wide
geographies. The practice of drop-shipping discussed above is an example; not having to ship to retailers can
make it easier for supply chains to service greater geographic markets.
There is some empirical support for this “death of distance” notion (Cairncross, 1997). Kolko (2000) finds that
people in more isolated cities are more likely to use the Internet. Similarly, Sinai and Waldfogel (2004) show that
conditional on the amount of local online content, people in smaller cities are more likely to connect to the Internet
than those in larger cities. Forman, Goldfarb, and Greenstein (2005) document that on the margin, businesses in
rural areas are more likely to adopt technologies that aid communication across establishments.
Despite this, several studies suggest spatial factors still matter. Hortaçsu et al. (2009) look at data from two Internet
auction websites, eBay and MercadoLibre. They find that the volume of exchanges decreases with distance.
Buyers and sellers that live in the same city have particular preference for trading with one another instead of
someone outside the metropolitan area. Hortaçsu et al. surmise that cultural factors, together with a lower cost of
enforcing the contract should a breach occur, explain this result. Blum and Goldfarb (2006) find geography matters online
even for purely digital goods like downloadable music, pictures, and movies, where transport and other similar
trade costs are nil. They attribute this to culturally correlated tastes among producers and consumers living in
relative proximity. Sinai and Waldfogel (2004) find patterns consistent with broader complementarities between the
Internet and cities. They find in Media Metrix and Current Population Survey data that larger cities have
substantially more local content online than smaller cities, and this content leads people to connect to the
Internet.13
We test whether geography matters online more generally by comparing the locations of pure-play online retailers
to where people who purchase products online live. If e-commerce makes geography irrelevant, we would expect
the two to be uncorrelated. On the other hand, if online sellers are physically located near customers, this suggests
that geography still plays a role in these markets. Unfortunately we cannot distinguish with such data whether the
relevant channel is shipping costs, contract enforceability, or something else.
We measure the number of online-only businesses in geographic markets using County Business Patterns data on
the number of establishments in NAICS industry 45411, “Electronic Shopping and Mail-Order Houses.” This industry
classification excludes retailers with any physical presence, even if they are a hybrid operation with an online
component. Hence these businesses sell exclusively at a distance. (Though they may not necessarily be online,
as they could be exclusively a mail order operation. We consider the implications of this below.) We use the
Technographics survey discussed above to compute the fraction of respondents in a geographic market who
report making online purchases in the previous year. Our geographic market definition is based on the Component
Economic Areas (CEAs) constructed by the U.S. Bureau of Economic Analysis. CEAs are groups of economically
connected counties; in many cases, they are a metro area plus some additional outlying counties. (p. 205) There
are approximately 350 CEAs in the U.S. (Goldmanis et al. (2010) use the same variable to measure the intensity of
local online shopping.) We combine these data sets into an annual panel spanning 1998 to 2007.
Table 8.5 shows the results from regressing the number of pure-play online sellers on the fraction of consumers in
the local market that purchase products online. We include market fixed effects in the regression because
unobserved factors might cause certain markets to be amenable to both online sellers and online buyers. For
example, Silicon Valley's human capital is both desired by online retailers and makes Valley consumers apt to shop
online.14 We further include logged total employment in the CEA in the regression to control for overall economic
growth in the market, and we add year fixed effects to remove aggregate trends.
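A hedged sketch of this specification follows, with placeholder file and variable names rather than the authors' code:

```python
# Hedged sketch (assumed file and variable names) of the Table 8.5
# specification: NAICS 45411 establishment counts in a CEA-year panel
# regressed on the local fraction buying online, logged CEA employment,
# and CEA and year fixed effects, with standard errors clustered by CEA.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("cea_year_panel.csv")  # hypothetical CEA-by-year panel
panel["log_emp"] = np.log(panel["employment"])

fit = smf.ols(
    "n_online_only ~ frac_buy_online + log_emp + C(cea) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["cea"]})

# Compare with the 22.29 estimate in column [1] of Table 8.5 (the chapter
# notes that establishment counts are scaled by 10).
print(fit.params["frac_buy_online"])
```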
The estimate in the first numerical column of Table 8.5 indicates that as the fraction of consumers purchasing
products online in a market increases by ten percentage points, on average another 2.2 electronic shopping and
mail-order businesses open in the local area. (Establishment counts have been scaled by 10 for better resolution of
some of the correlations by business size category.) While NAICS 45411 can include mail-order businesses that do
not sell online, it seems likely (p. 206) that growth in pure mail-order operations within a market would either be
uncorrelated or perhaps even negatively correlated with the growth of online shopping in the market. Hence it is
likely that the estimated coefficient reflects growth in the number of pure-play online retailers in response to greater
use of e-commerce by local consumers.
Table 8.5 Relationship between Fraction Purchasing Products Online and Number of Online Firms within Local
Markets

                                          Total online only    Online only businesses of given size
                                          businesses           1–4         5–9         10–19       20–49       50–99       100+
                                          [1]                  [2]         [3]         [4]         [5]         [6]         [7]
Fraction purchasing online in market      22.29***             14.87**     3.714**     2.136**     1.318*      0.211       0.049
                                          (6.190)              (4.535)     (1.234)     (0.719)     (0.669)     (0.315)     (0.345)
Year FEs                                  x                    x           x           x           x           x           x
Market FEs                                x                    x           x           x           x           x           x
Mean of dependent variable                39.16                23.31       6.73        4.24        2.65        0.94        1.29
R2                                        0.963                0.947       0.942       0.941       0.914       0.856       0.920
N                                         3378                 3378        3378        3378        3378        3378        3378

Notes: The table shows the results from regressing the number of online sales businesses (NAICS 45411,
Electronic Shopping and Mail-Order Houses) in a geographic market area (see text for definition) on the fraction
of consumers in that area that report making online purchases. The sample is an annual panel constructed
using establishment counts from U.S. County Business Patterns data and online purchase data from Forrester
Research's Technographics Survey. All specifications also include logged total market employment in the year
and market fixed effects. Standard errors clustered at the CEA level are given in parentheses.
*** p < 0.01, ** p < 0.05, * p < 0.10.
The next six columns of Table 8.5 report results from similar regressions that use as the dependent variable counts
of NAICS 45411 establishments in various employment size categories. A given increase in online shopping is tied
to a larger increase in the number of smaller establishments than bigger ones. If we instead use the natural log of
the number of establishments as the dependent variable, the estimated effects are much more uniform across the
size distribution of firms. So in percentage terms, increasing the fraction of consumers who shop online in an area
proportionally increases the number of firms of all sizes. (Some of the growth of the number of larger
establishments may well reflect existing businesses becoming larger rather than de novo entry.)
3.6. Tax Treatment
One advantage that many online transactions enjoy over transactions in a physical store is the absence of sales
tax. Legally, U.S. citizens are obligated to pay their state's sales or use taxes on their online purchases. This rarely
happens in practice, as reporting and payment are left completely to the consumer. Only when the online seller “has
nexus” in the consumer's state is the sales tax automatically added to the transaction price by the firm.15 This
unevenness of the application of sales taxes could lead to a strong advantage for online retail purchases. For
example, consumers in Chicago buying online at the beginning of 2012 would avoid the applicable sales tax of
9.50 percent, a considerable savings.
Goolsbee (2000) provides the empirical evidence on this subject. He uses the Forrester Technographics survey to
estimate that the elasticity of the probability of consumers buying products on the Internet with respect to the local
tax rate is about 3.5. This estimate implies substantial sensitivity of online purchases to tax treatment. If the
average sales tax in his data (6.6 percent) were applied to all online transactions, the number of people
purchasing products online would fall by 24 percent.
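As a rough back-of-the-envelope check (not Goolsbee's actual estimator, which exploits variation in local tax rates), applying the elasticity to the average tax rate gives

\[
\frac{\Delta \Pr(\text{online purchase})}{\Pr(\text{online purchase})} \approx -\varepsilon \,\tau = -3.5 \times 0.066 \approx -0.23,
\]

roughly the one-quarter decline in the number of online buyers reported in the text.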
While Goolsbee (2000) estimates the effect of sales tax on the extensive margin (whether a person buys anything
online), Ellison and Ellison (2009b) estimate the effect of taxes on a measure of total sales that includes both the
extensive and intensive margins. Their findings are similar to Goolsbee’s, further bolstering the case that applying
sales taxes to Internet purchases could reduce online retail sales by one-quarter.
On the supply side, tax structure can distort firm location decisions. Suppose a firm bases its operations in
Delaware to take advantage of the state's lax tax laws. If the firm were to create a distribution center in the Midwest
to decrease the time (p. 207) it takes to fulfill orders from the Midwest, then it might choose to open the
distribution center in the state with relatively few purchasers. A case study of Barnes & Noble (Ghemawat and
Baird, 2004) illustrates this point nicely. When Barnes & Noble first created an online business, the online division
was almost entirely separate from the brick-and-mortar store. The one shared resource among the online and
offline divisions was the company's book buyers. Even though the two divisions shared buyers, the books to be
sold on BarnesandNoble.com were sent to a distribution center in Jamesburg, New Jersey. Books bound for
traditional brick-and-mortar stores were sent to different warehousing facilities to make it clear which books would
not be subject to sales tax. Further, when BarnesandNoble.com went online in May of 1997, the company initially
refused to install kiosks with access to the website in stores. They also avoided delivering books ordered online to
their physical stores for pick up by customers. It wasn’t until October 2000 that Barnes & Noble, after struggling to
compete with Amazon, decided to forego the sales tax benefits it had enjoyed and integrate its online and offline
businesses (Ghemawat and Baird, 2006).
4. How E-Commerce Affects Market Outcomes
The changes in demand- and supply-side fundamentals that e-commerce brings can foment substantial shifts in
market outcomes from their offline-only equilibrium. These include prices, market shares, profitability, and the type
of firms operating in the market.
4.1. Prices
Perhaps no market outcome has been studied more intensively in the context of online sales activity than prices.
Much of the conventional wisdom and some theoretical work (e.g., Bakos, 1997) have focused on the potential for
e-commerce to reduce prices. Both reduced consumer search costs and lower distribution costs—two of the
fundamental mechanisms described in the previous section—can act to reduce prices in online markets. Lower
search costs make firms’ residual demand curves more elastic, reducing their profit-maximizing prices. Reduced
distribution costs directly impact profit-maximizing prices if they reflect changes in marginal costs.16
A body of empirical work has supported these predictions about lower prices. For example, Brynjolfsson and Smith
(2000) and Clay, Krishnan, and Wolff (2001) find that prices drop due to the introduction of online book markets.
Scott Morton, (p. 208) Zettelmeyer, and Silva-Risso (2001) document that consumers who used an online service
to help them search for and purchase a car paid on average two percent less than other consumers. Brown and
Goolsbee (2002) estimate that price comparison websites led to drops of 8–15 percent in the prices of term life
insurance policies. Sengupta and Wiggins (2006) document price reductions in airline tickets driven by online
sales.
Many of the price reductions documented in these studies and others result from e-commerce technologies making
markets more competitive, in the sense that firms’ cross-price elasticities rise. We will discuss below how this can
be beneficial for firms with cost advantages over their competitors. However, these same competitive forces can
also give strong incentives to firms with cost disadvantages to limit the impact of price differentials. These firms
would like to take actions that reduce the propensity of consumers, now with enhanced abilities to shop around, to
shift their purchases toward lower-cost sellers.
Certainly, some barriers to substitution exist online. E-commerce markets are not the utterly frictionless commodity-type markets sometimes speculated about early in the Internet's commercial life. Often, more than just the product
upon which the transaction is centered is being sold. Goods are usually bundled with ancillary services, and the
provision of these services might vary across sellers without being explicitly priced. Sellers’ brands and
reputations might serve as a proxy or signal for the quality of such service provision. Smith and Brynjolfsson's
(2001) aforementioned study on brand effects in online book sales is an example. Waldfogel and Chen (2006),
while finding that price comparison websites weaken brand effects, also find that brand still matters for sellers in a
number of product markets. And the work by Jin and Kato (2006), Resnick et al. (2006), and Cabral and Hortaçsu
(2010) on seller reputation on online auction sites further bolsters the importance of such ancillary services. Given
these results, it is not surprising that firms that operate online—especially those with higher costs than their
competitors—try to emphasize brand and bundled services rather than the raw price of the good itself.
Ancillary service provision and branding efforts aren’t the only tools firms use to soften price competition. Ellison
and Ellison (2009a) document active efforts by online sellers of computer CPUs and memory cards to obfuscate
their true prices in order to defeat the price-comparison abilities of e-commerce technologies. In this market, both
products and sellers are viewed by consumers as homogeneous, so many sellers focus their efforts on “bait-and-switch”-type tactics where a bare-bones model of the product (often missing key parts most users would find
necessary for installation) is priced low to grab top rankings on shopbots, while the additional necessary parts are
sold at considerable markups. Ellison and Ellison describe a constant battle between sellers trying to find new
ways to hide true prices from the shopbots (by making posted prices look very low) and shopbot firms adjusting
their information gathering algorithms to better decipher goods’ actual prices.
However, Baye and Morgan (2001) make an interesting point about shopbots and other product comparison
websites. Building a perfect shopbot—one that (p. 209) reports all information relevant to consumers’ purchasing
decisions, allowing them to find their highest-utility options almost costlessly—may not be an equilibrium strategy
when products are differentiated primarily by price or other vertical attributes. A product comparison site that
works too well will destroy the very dispersion in price or other attributes it was created to address, obviating the
need for its services. Baye and Morgan show that product comparison websites should provide enough information
to be useful for searching customers (on whom the sites rely for revenues, either through subscriptions as in the
model or, more often in practice, through advertising revenues), but not so useful as to eliminate their raison d'être.
These active efforts by e-commerce firms are reasons why, as documented by Baye, Morgan, and Scholten (2007)
and the studies cited therein, substantial price dispersion remains in most online markets. See chapter 9 in this
Handbook for extensive discussion of price comparison sites.
4.2. Other Market Outcomes
The advent of online sales in a product market is likely to affect more than just prices. Reduced consumer search
costs or differential changes in distribution costs across producers can lead to a wave of creative destruction that
shifts the fundamental structure of an industry.
Because e-commerce technologies make it easier for consumers to find lower-price sellers, lower-cost firms (or
those able to deliver higher quality at the same cost) will grab larger shares of business away from their higher-cost competitors. Even if, as discussed above, the more competitive landscape created by lower search costs
reduces prices and margins, this market structure response could be large enough that low-cost firms actually
become more profitable as e-commerce spreads. High-cost firms, on the other hand, are doubly hit. Not only does
their pricing power fall, their market share falls too, as customers who were once captive—either because of
ignorance or lack of alternatives—flee to better options elsewhere. Some of these firms will be forced out of
business altogether.
Conventional wisdom suggests that market structure impacts could be large; the rapid growth of online travel sites
at the expense of local travel agencies is one oft-cited example. But while many academic studies of the effect of e-commerce on prices exist, only a small set of studies has investigated which businesses most benefit and most
suffer from e-commerce.
Goldmanis et al. (2010) flesh out how such shifts could happen in a model of industry equilibrium where
heterogeneous firms sell to a set of consumers who differ in their search costs. Firm heterogeneity arises from
differences in marginal costs, though the model can be easily modified to allow variation in product quality levels
instead. Industry consumers search sequentially when deciding from whom to buy. Firms set prices given
consumers’ optimal search behavior as well as their (p. 210) own and their rivals’ production costs. Firms that
cannot cover their fixed costs exit the industry, and initial entry into the industry is governed by an entry cost.
Interpreting the advent and diffusion of e-commerce as a leftward shift in the consumer search cost distribution,
Goldmanis et al. show that, consistent with previous literature, opening the market to online sales reduces the
average price in the market. The more novel implications regard the equilibrium distribution of firm types, however.
Here the model predicts that introducing e-commerce should shrink and sometimes force the exit of low-type (i.e.,
high-cost) firms and shift market share to high-type (low-cost) firms. Further, new entrants will on average have
lower costs than the average incumbent, including those forced out of the market.
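The share-shifting mechanism can be illustrated with a deliberately stylized numerical sketch, not the Goldmanis et al. model itself (which solves for equilibrium prices, entry, and exit). In the sketch below, prices are held fixed at cost plus a common markup, the rule linking search costs to the number of quotes gathered is crude, and all parameter values are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def market_shares(costs, mean_search_cost, n_consumers=20_000, markup=1.0):
    """Stylized mechanism only: consumers with lower search costs gather more
    price quotes and buy from the cheapest seller sampled, which shifts
    purchases toward low-cost (hence low-price) firms."""
    n_firms = len(costs)
    prices = costs + markup  # fixed markup; no equilibrium pricing or exit here
    search_costs = np.maximum(rng.exponential(mean_search_cost, n_consumers), 1e-6)
    # Crude search rule: the number of quotes gathered falls with the search cost.
    n_quotes = np.clip((0.5 / search_costs).astype(int) + 1, 1, n_firms)
    shares = np.zeros(n_firms)
    for n in np.unique(n_quotes):
        buyers = int(np.sum(n_quotes == n))
        sampled = np.array([rng.choice(n_firms, size=n, replace=False) for _ in range(buyers)])
        cheapest = sampled[np.arange(buyers), np.argmin(prices[sampled], axis=1)]
        np.add.at(shares, cheapest, 1)
    return shares / n_consumers

costs = rng.uniform(1.0, 2.0, 20)                   # heterogeneous marginal costs
low_cost = costs < np.median(costs)
pre = market_shares(costs, mean_search_cost=0.50)   # costly search (pre-e-commerce)
post = market_shares(costs, mean_search_cost=0.05)  # cheap search (e-commerce diffusion)
print("low-cost firms' combined share, costly search:", round(pre[low_cost].sum(), 3))
print("low-cost firms' combined share, cheap search: ", round(post[low_cost].sum(), 3))
```

Running the sketch, the combined share of the low-cost half of firms rises as search becomes cheaper, which is the qualitative pattern the theory predicts; the quantitative output has no empirical content.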
Testing the model's predictions in three industries perceived to have been considerably impacted by e-commerce
—travel agencies, bookstores, and new auto dealers—Goldmanis et al. find support for these predictions. While
they cannot measure costs directly in their data, they use size to proxy for firms’ costs. (A considerable body of
research has documented that higher cost firms in an industry tend to be smaller. See, e.g., Bartelsman and Doms,
2000.) They find that growth in consumers’ online shopping is linked to drops in the number of small (and
presumably high-cost) establishments, but has either no significant impact or even positive impact on the number
of the industries’ large establishments. In addition to these industry-wide shifts, e-commerce's effects varied by
local markets among bookstores and new car dealers. Cities where consumers’ Internet use grew faster in a
particular year saw larger drops (gains) in the number of small (large) bookstores and car dealers over the same
year. This also informs the discussion above about whether online sales truly eliminate spatial boundaries in
markets.17 The effects among car dealers are particularly noteworthy in that auto manufacturers and dealers in the
U.S. are legally prohibited from selling cars online. Therefore any effects of e-commerce must be channeled
through consumers’ abilities to comparison shop and find the best local outlet at which to buy their car, not through
changes in the technology of car distribution. While this technology-based channel is important in some industries,
the consumer-side search channel is the one posited in their model, and therefore the new car dealer industry provides
the closest correspondence to the theory from which they derive their predictions.
We add to Goldmanis et al.'s original data and specifications here. Figure 8.1 shows how the composition of
employment in the same three industries changed between 1994 and 2007. Each panel shows the estimated
fraction of employment in the industry that is accounted for by establishments of three employment size classes:
those having 1–9 employees, those with 10–49, and those with 50 or more. In addition to the three industries
studied in Goldmanis et al., the figure also shows for the sake of comparison the same breakdown for total
employment in the entire County Business Patterns coverage frame (essentially all establishments in the private
nonfarm business sector with at least one employee).18
Figure 8.1 Estimated Share of Industry Employment by Establishment Size.
Notes: These figures show the fraction of industry or sector employment accounted for by establishments
of differing employment levels. Values are taken from the U.S. County Business Patterns data base
(available at http://www.census.gov/econ/cbp/index.html). Industry definitions are as follows: travel
agencies, SIC 4724/NAICS 561510; bookstores, SIC 5942/NAICS 451211; and new auto dealers, SIC
5510/NAICS 441110.
Panel A shows the breakdown for travel agencies. It is clear that during the early half of the sample period, which
saw the introduction and initial diffusion of e-commerce, the share of industry employment accounted for by travel
agency offices with (p. 211) (p. 212) fewer than 10 employees shrank considerably. This lost share was almost
completely taken up by establishments with 50 or more employees. After 2001, the share losses of the smallest
offices stabilized, but the 10–49 employee category began to lose share to the largest establishments. These
patterns are consistent with the predictions of the theory—the largest offices in the industry benefit at the cost of
the smaller offices.
Panel B shows the same results for bookstores. Here, the pattern is qualitatively similar, but even more stark
quantitatively. While the fraction of employment at stores with 10–49 employees is roughly stable over the entire
period, the largest bookstores gained considerable share at the expense of the smallest.
Panel C has the numbers for new car dealers. In this industry, establishments with fewer than 10 employees
account for a trivial share of employment, so the interest is in the comparison between the 10–49 employee
dealers and those with more than 50. Again, we see that the large establishments accounted for a greater fraction
of industry employment over time, with the largest establishments gaining about 10 percentage points of market
share at the cost of those with 10–49 employees.
Finally, panel D does the same analysis for all establishments in the private nonfarm business sector. It is apparent
that the shifts toward larger establishments seen in the three industries of focus were not simply reflecting a
broader aggregate phenomenon. Employment shares of establishments in each of the three size categories were
stable throughout the period.
These predictions about the market share and entry and exit effects of introducing an online sales channel in an
industry are based on the assumption that firms behave non-cooperatively. If e-commerce technologies instead
make it easier for firms to collude in certain markets, e-commerce technologies might actually make those markets
less competitive. Campbell, Ray, and Muhanna (2005) use a dynamic version of Stahl (1989) to show theoretically
that if search costs are high enough initially, e-commerce-driven reductions in search costs can actually make it
easier for collusion to be sustained in equilibrium, as they increase the profit difference between the industry's
collusive and punishment (static Nash equilibrium) states.
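To fix ideas, the mechanism can be stated with the textbook grim-trigger incentive constraint; this is a generic sketch, not Campbell, Ray, and Muhanna's exact formulation. Let \(\pi^{C}\), \(\pi^{D}\), and \(\pi^{N}\) denote a firm's per-period profit under collusion, from an optimal one-period deviation, and in the punishment (static Nash) phase, and let \(\delta\) be the discount factor.

```latex
% Generic grim-trigger sustainability condition (sketch, not the paper's model):
\frac{\pi^{C}}{1-\delta} \;\ge\; \pi^{D} + \frac{\delta\,\pi^{N}}{1-\delta}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \frac{\pi^{D}-\pi^{C}}{\pi^{D}-\pi^{N}}
\;=\; 1 - \frac{\pi^{C}-\pi^{N}}{\pi^{D}-\pi^{N}}.
```

If a fall in search costs widens the gap between collusive and punishment profits, \(\pi^{C}-\pi^{N}\), proportionally more than it raises the deviation gain \(\pi^{D}-\pi^{N}\), the critical discount factor falls and collusion becomes easier to sustain, which is the channel described above.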
A more direct mechanism through which online sales channels support collusion is that the very transparency that
makes it easier for consumers to compare products can also make it easier for colluding firms to monitor each
other's behavior. This makes cheating harder. Albæk, Møllgaard, and Overgaard (1997) document an interesting
example of this, albeit one that doesn’t directly involve online channels, in the Danish ready-mixed concrete
industry. In 1993, the Danish antitrust authority began requiring concrete firms to regularly publish and circulate
their transaction prices. Within a year of the institution of this policy, prices increased 15–20 percent in the absence
of any notable increases in raw materials costs or downstream construction activity. The policy—one that,
ironically, was implemented with hopes of increasing competition—facilitated collusion by making it easier for
industry firms to coordinate on anticompetitive prices and monitor collusive activities. Online markets are often
characterized by easy access to firms’ prices. If it is hard for firms to offer secret discounts because of market
convention, technological constraints, or legal strictures, this easy access fosters a powerful monitoring device for
colluders.
(p. 213) 5. Implications of Online Commerce for Firm Strategy
The fundamental effects of opening a concurrent online sales channel in an industry that we discussed in Section 3
can have implications for firms’ competitive strategies. These strategy choices can in turn induce and interact with
the equilibrium changes we discussed in Section 4. This section reviews some of these strategic factors.
A key factor—perhaps the key factor—influencing firms’ joint strategies toward offline and online markets is the
degree of connectedness between online and offline markets for the same product. This connectedness can be
multidimensional. It can involve the demand side: how closely consumers view the two channels as substitutes. It
can involve the supply side: whether online and offline distribution technologies are complementary. And it can
involve firms’ available strategy spaces: how much leeway firms have in conducting separate strategic trajectories
across channels, which is particularly salient as it regards how synchronized a firm's pricing must be across offline and online channels.
At one extreme would be a market where the offline and online channels are totally separated. Specifically,
consumers view the product as completely different depending upon the channel through which it is sold (perhaps
there are even separate online and offline customer bases); there are no technological complementarities between
the two channels; and firms can freely vary positioning, advertising, and pricing of the same product across the
channels. In this case, each channel can be thought of as an independent market. The firm's choices in each
channel can be analyzed independently, as there is no scope for strategic behavior that relies upon the interplay
between the two channels.
Of more interest to us here—and where the research literature has had to break new ground—are cases where
there are nontrivial interactions between online and offline channels selling the same products. We’ll discuss some
of the work done in this area below, categorizing it by the device through which the online and offline channels are
linked: consumer demand (e.g., substitutability), technological complementarities, or strategic restrictions.
5.1. Online and Offline Channels Linked Through Consumer Demand
One way the online and offline sales channels can be connected is in the substitutability that buyers perceive
between the channels. The extent of such substitutability determines two related effects of opening an online
channel in a market: the potential for new entrants into an online channel to steal away business from incumbents,
and the amount of cannibalization offline incumbents will suffer upon opening an online segment. Not all consumers
in a market will necessarily (p. 214) view this substitutability symmetrically. Distinct segments can react
differently to the presence of online purchase options. The observed substitutability simply reflects the aggregate
impact of these segments’ individual responses.
These factors have been discussed in several guises in the literature investigating the strategic implications of
operating in a market with both online and offline channels. Dinlersoz and Pereira (2007), Koças and Bohlmann
(2008), and Loginova (2009) construct models where heterogeneity in consumers’ views toward the substitutability
of products sold in the two segments affects firms’ optimal strategies.
Dinlersoz and Pereira (2007) and Koças and Bohlmann (2008) build models where some customers have loyalty for
particular firms and others buy from the lowest-price firm they encounter. Offline firms with large loyal segments
(“Loyals”) stand to lose more revenue by lowering their prices to compete in the online market for price-sensitive
“Switchers.” Hence the willingness of incumbents from the offline segment to enter new online markets depends in
part on the ratio of Loyals to Switchers. This also means the success of pure-play online firms is tied to the
number of Switchers. In some circumstances, opening an online channel can lead to higher prices in the offline
market, as the only remaining consumers are Loyals who do not perceive the online option as a substitute.
Depending on the relative valuations and sizes of the Loyals and Switchers segments, it is even possible that the
quantity-weighted average price in the market increases. In effect, the online channel becomes a price
discrimination device.
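A hypothetical numerical illustration of this last point (all figures invented): suppose a market of 100 consumers, 60 Loyals and 40 Switchers, initially buys offline at a price of 10; after an online channel opens, Switchers buy online at 8 while the firm raises its offline price to 12 for the remaining Loyals.

```python
# Hypothetical illustration: the online channel as a price discrimination device.
loyals, switchers = 60, 40
p_offline_before = 10.0
p_offline_after, p_online = 12.0, 8.0   # invented prices for illustration

avg_before = p_offline_before
avg_after = (loyals * p_offline_after + switchers * p_online) / (loyals + switchers)
print(avg_before, avg_after)   # 10.0 vs 10.4: the quantity-weighted average price rises
```

Whether the weighted average price rises or falls depends, as the models emphasize, on the relative sizes and valuations of the two segments; the numbers above merely show that a rise is possible.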
Direct tests of these models are difficult, but they do imply that if we compare two firms, the one with the higher
price will have more loyal consumers than the other. We can conduct a rough test of this in the bookselling
industry using the Forrester Technographics data. In it, consumers are asked whether they have shopped either
online or offline at Amazon, Barnes & Noble, or Borders in the previous thirty days. Clay et al. (2002) found that
Amazon set prices higher than Barnes & Noble, which in turn set prices higher than Borders. Thus the models
predict that Amazon's customers will be more loyal than Barnes & Noble’s, who are themselves more loyal than
Borders’. In our test, this implies that of the customers of these sellers, Amazon will have the highest fraction of
exclusive shoppers, followed by Barnes & Noble and Borders.
The results are in Table 8.6. In the first row, the first column reports the fraction of consumers who purchased a
book in the past three months and shopped only at Amazon. The second column gives the fraction of customers
who purchased a product from Amazon as well as from Barnes & Noble or Borders. If we take the first column as a
crude measure of the fraction of Amazon's loyal customers and the second column as a measure of those willing to
shop around, Amazon's customer base is roughly split between Loyals and Switchers. While the models would
predict that, given the observed price difference, Barnes & Noble's Loyals-to-Switchers ratio should be lower,
this is not the case in the data, as reflected in the second row. However, Borders’ low ratio of Loyals to Switchers is
consistent with them having the lowest prices. A caveat to these results, however, is that they could be
confounded by Internet use. The models’ predictions regard the loyalty of (p. 215) a firm's online customers. If
many of Barnes & Noble's loyal customers are offline, our measure might overstate the loyalty of Barnes & Noble's
online consumers. We address this in the second panel of Table 8.6 by recalculating the fractions after
conditioning on the consumer having purchased a book online. Now the evidence is exactly in line with the
predictions of Dinlersoz and Pereira (2007) and Koças and Bohlmann (2008): the rank ordering of the firms’ prices
is the same as the ordering of the Loyals-to-Switchers ratio.
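A sketch of how such a classification might be computed from survey microdata follows. The data frame layout, column names, and the restriction to online purchasers are hypothetical; the Forrester Technographics files are proprietary and their actual structure may differ.

```python
import pandas as pd

# Hypothetical survey extract (layout invented): one row per respondent, with
# indicators for having bought books from each retailer and for buying online.
df = pd.DataFrame({
    "bought_amazon":  [1, 1, 0, 0, 1, 0, 1],
    "bought_bn":      [0, 1, 1, 0, 1, 1, 0],
    "bought_borders": [0, 0, 1, 0, 1, 0, 0],
    "bought_online":  [1, 1, 0, 0, 1, 1, 0],
})

def loyals_switchers(data, retailers=("amazon", "bn", "borders")):
    """Fractions of the sample that bought from a given seller only (Loyals) or
    from that seller plus at least one rival (Switchers), as in Table 8.6."""
    cols = [f"bought_{r}" for r in retailers]
    n_sellers = data[cols].sum(axis=1)
    rows = {}
    for r, col in zip(retailers, cols):
        loyal = ((data[col] == 1) & (n_sellers == 1)).mean()
        switch = ((data[col] == 1) & (n_sellers > 1)).mean()
        rows[r] = {"Loyals": loyal, "Switchers": switch,
                   "Loyals/Switchers": loyal / switch if switch > 0 else float("nan")}
    return pd.DataFrame(rows).T

book_buyers = df[df[[c for c in df.columns if c != "bought_online"]].sum(axis=1) > 0]
print(loyals_switchers(book_buyers))                                     # analogue of the top panel
print(loyals_switchers(book_buyers[book_buyers["bought_online"] == 1]))  # bottom panel analogue
```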
In Loginova (2009), consumers’ ignorance of their valuations for a good shapes the nature of the link between
online and offline markets. Consumers in her model differ in their valuations for the market good, but do not realize
their valuations until they either (a) visit an offline retailer and inspect the good, or (b) purchase the good from an
online retailer (no returns are allowed). Under certain parameter restrictions, there is an equilibrium where both
channels are active and all consumers go to offline retailers and learn their valuations. Upon realizing their utility
from the good, they decide either to immediately purchase the good from the offline retailer or to go home and
purchase the product from an online retailer while incurring a waiting cost. This creates an equilibrium market
segmentation where consumers with low valuations buy from online stores and high-valuation consumers buy
immediately at the offline outlet they visited. The segmentation lets offline retailers raise their prices above what
they would be in a market without an online segment. The imperfect substitutability between online and offline
goods segments the market and allows firms to avoid head-on competition.
These papers focus on the extent to which goods sold on online and offline channels are substitutes, but it is
possible in certain settings that they may be complements. Empirical evidence on this issue is relatively sparse.
Gentzkow (2007) (p. 216) estimates whether the online edition of the Washington Post is a substitute or
complement for the print edition. The most basic patterns in the data suggest they are complements: consumers
who visited the paper's website within the last five days are more likely to have also read the print version.
However, this cross-sectional pattern is confounded by variation in individuals’ valuations from consuming news. It
could be that some individuals like to read a lot of media, and they often happen to read the online and offline
versions of the paper within a few days of one another. But conditioning on having read one version, that specific
individual may be less likely to read the other version. This is borne out in a more careful look at the data;
instrumenting for whether the consumer has recently visited the paper's website using shifters of the consumer's
costs of reading online, Gentzkow finds the two channels’ versions are rather strong substitutes. Using a different
methodology, Biyalogorsky and Naik (2003) look at whether Tower Records’ introduction of an online channel lifted
or cannibalized its offline sales. They find cannibalization, though it was modest, on the order of 3 percent of the
firm's offline sales. Given that brick-and-mortar record stores have clearly suffered from online competition since
this study, their result suggests that much of the devastation was sourced in across-firm substitution rather than
within-firm cannibalization.
Table 8.6 “Switchers” and “Loyals” in the Book Industry

Consumers Who Purchased Books in Past Three Months
                     Loyals     Switchers     Loyals/Switchers
Amazon               0.201      0.203         0.990
Barnes & Noble       0.279      0.278         1.004
Borders              0.087      0.153         0.569

Consumers Who Purchased Books Online in Past Three Months
                     Loyals     Switchers     Loyals/Switchers
Amazon               0.343      0.262         1.309
Barnes & Noble       0.179      0.274         0.653
Borders              0.034      0.095         0.358
Notes: Entries under “Loyals” are the fraction of customers who purchased from only one of the three firms
listed, while “Switchers” are the fraction purchasing from more than one of the three firms. The third column
gives the ratio of Loyals to Switchers for each firm. The top panel includes all consumers who purchased books,
whether online or offline, while the lower panel only includes consumers who purchased books online. Data are
from Forrester Research's Technographics Survey.
5.2. Online and Offline Channels Linked Through Technological Complementarities
Wang (2007) ties the online and offline channels with a general complementarity in the profit function that he
interprets as a technological complementarity. His model treats the introduction of e-commerce into an industry as
the opening of a new market segment with lower entry costs. The model's dynamic predictions are as follows.
Taking advantage of the new, lower entry costs, pure-play online sellers enter first to compete with the brick-and-mortar incumbents. But the complementarity between the online sales and distribution technology and the offline
technology gives offline incumbents incentive to expand into the online channel. It also gives these firms an
inherent advantage in the online market, as they are able to leverage their offline assets to their gain. As a result,
many of the original online-only entrants are pushed out of the industry. Thus a hump-shaped pattern is predicted
in the number of pure-play online firms in a product market, and a steady diffusion of former offline firms into the
online channel.
This is a reasonably accurate sketch of the trajectory of the online sector of many retail and service markets. The
online leaders were often pure-play sellers: Amazon, E-Trade, Hotmail, pets.com, and boo.com, for example. But
many of these online leaders either eventually exited the market or were subsumed by what were once offline
incumbents. Some pure-play firms still exist, and a few are fabulously successful franchises, but at the same time,
many former brick-and-mortar sellers now dominate the online channels of their product markets.
(p. 217) Jones (2010) explores a different potential complementarity. The notion is that the online technology is
not just a way to sell product, but it can also be an information gathering tool. Specifically, the wealth of data
generated from online sales could help firms market certain products to individuals much more efficiently and lead
to increased sales in both channels.
5.3. Online and Offline Channels Linked Through Restrictions on Strategy Space
Liu, Gupta, and Zhang (2006) and Viswanathan (2005) investigate cases where the online and offline channels are
tied together by restrictions on firms’ strategy spaces—specifically, that their prices in the two channels must be a
constant multiple of one another. In the former study, this multiple is one: the firm must price the same whether
selling online or offline. Viswanathan (2005) requires only that the two prices be a constant multiple of one another, with the
multiple not necessarily equal to one. While it might seem unusual that these pricing constraints are exogenously imposed instead of
arising as equilibrium outcomes, it is true that certain retailers have faced public relations and sometimes even
legal problems due to differences in the prices they charge on their websites and in their stores. Liu, Gupta, and
Zhang remark that many multichannel firms report in surveys that they price consistently across their offline and
online channels.
Liu, Gupta, and Zhang (2006) show that when the equal pricing restriction holds, an incumbent offline seller can
deter the entry of a pure-play online retailer by not entering the online market itself. This seemingly counterintuitive
result comes from the uniform price requirement across channels. An incumbent moving into the online channel is
restricted in its ability to compete on price, because any competition-driven price decrease in the online market
reduces what the incumbent earns on its inframarginal offline units. This limit to its strategy space can actually
weaken the incumbent's competitive response so much that a pure-play online retailer would be more profitable if
the incumbent enters the online segment (and therefore has to compete head-to-head with one hand tied behind its
back) than if the incumbent stays exclusively offline. Realizing this, the incumbent can sometimes deter entry by
the pure-play online firm by staying out of the online channel in the first place. The link across the online and offline
channels in this model creates an interesting situation in which the offline firm does not gain an advantage by being
the first mover into the online channel. Instead, it may want to abstain from the online market altogether.
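The inframarginal-loss logic can be made concrete with a hypothetical example; all figures are invented and are not taken from Liu, Gupta, and Zhang. Suppose an incumbent sells 1,000 units offline at a $10 margin and considers cutting its single, channel-uniform price by $1 in order to win 100 online sales at the resulting $9 margin.

```python
# Hypothetical illustration of why uniform pricing blunts the incumbent's online aggression.
offline_units, offline_margin = 1_000, 10.0
price_cut, online_units_won = 1.0, 100

loss_offline = offline_units * price_cut                       # margin given up on inframarginal offline sales
gain_online = online_units_won * (offline_margin - price_cut)  # margin earned on the new online sales
print(gain_online - loss_offline)   # -100: the cut is unprofitable for the multichannel incumbent,
                                    # though a standalone online seller would happily make it
```

A pure-play entrant facing such an incumbent therefore meets a soft price competitor, which is the sense in which the incumbent competes "with one hand tied behind its back."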
Viswanathan (2005) models the online and offline channels as adjacent spatial markets. Consumers in one market
cannot buy from a firm in the other market. However, one firm at the junction of the two markets is allowed to
operate as a dual-channel supplier, but it must maintain an exogenously given price ratio of k between the two
markets. Viswanathan shows that in this setup, the price charged (p. 218) by the two-channel firm will be lower
than the offline-only firms’ prices but higher than the pure-play online sellers’ prices.
6. Conclusions
The emergence of online channels in a market can bring substantial changes to the market's economic
fundamentals and, through these changes, affect outcomes at both the market level and for individual firms. The
potential for such shifts has implications in turn for firms’ competitive strategies. Incumbent offline sellers and new
pure-play online entrants alike must account for the many ways a market's offline and online channels interact
when making pricing, investment, entry, and other critical decisions.
We have explored several facets of these interactions in this chapter. We stress that this is only a cursory
overview, however. Research investigating these offline-online connections is already substantial and is still
growing. This is rightly so, in our opinion; we expect the insights drawn from this literature to only become more
salient in the future. Online channels have yet to fully establish themselves in some markets and, in those where
they have been developed, are typically growing faster than bricks-and-mortar channels. This growing salience is
especially likely in the retail and services sectors, where online sales appear to still have substantial room for
growth.
Acknowledgment
We thank Martin Peitz and Joel Waldfogel for comments. Syverson thanks the NSF and the Stigler Center and Centel
Foundation/Robert P. Reuss Faculty Research Fund at the University of Chicago Booth School of Business for
financial support.
References
Adams, C.P., Hosken, L., Newberry, P., 2011. Vettes and Lemons on eBay. Quantitative Marketing and Economics
9(2), pp. 109–127.
Albæk, S., Møllgaard, P., Overgaard, P.B., 1997. Government-Assisted Oligopoly Coordination? A Concrete Case.
Journal of Industrial Economics 45(4), pp. 429–443.
Bajari, P., Hortaçsu, A., 2003. The Winner's Curse, Reserve Prices, and Endogenous Entry: Empirical Insights from
eBay Auctions. RAND Journal of Economics 34(2), pp. 329–355.
Bajari, P., Hortaçsu, A., 2004. Economic Insights from Internet Auctions. Journal of Economic Literature 42(2), pp.
257–286.
Bakos, J.Y., 1997. Reducing Buyer Search Costs: Implications for Electronic Marketplaces. Management Science
43(12), pp. 1676–1692.
Bartelsman, E.J., Doms, M., 2000. Understanding Productivity: Lessons from Longitudinal Microdata. Journal of
Economic Literature 38(3), pp. 569–594.
Baye, M. R., Morgan, J., 2001. Information Gatekeepers on the Internet and the Competitiveness of Homogeneous
Product Markets. American Economic Review 91(3), pp. 454–474. (p. 221)
Baye, M. R., Morgan, J., Scholten, P., 2007. Information, Search, and Price Dispersion. Handbooks in Economics and
Information Systems, vol. 1, (T. Hendershott, Ed.), Amsterdam and Boston: Elsevier. pp. 323–376.
Biyalogorsky, E., Naik, P., 2003. Clicks and Mortar: The Effect of On-line Activities on Of-line Sales. Marketing Letters
14(1), pp. 21–32.
Blum, B.S., Goldfarb, A., 2006. Does the Internet Defy the Law of Gravity? Journal of International Economics, 70(2),
pp. 384–405.
Brown, J.R., Goolsbee, A., 2002. Does the Internet Make Markets More Competitive? Evidence from the Life
Insurance Industry. Journal of Political Economy 110(3), pp. 481–507.
Brynjolfsson, E., Dick, A.A., Smith, M.D., 2010. A Nearly Perfect Market? Differentiation vs. Price in Consumer
Choice. Quantitative Marketing and Economics 8(1), pp. 1–33.
Brynjolfsson, E., Smith, M.D., 2000. Frictionless Commerce? A Comparison of Internet and Conventional Retailers.
Management Science 46(4), pp. 563–585.
Brynjolfsson, E., Hu, Y., Smith, M.D., 2003. Consumer Surplus in the Digital Economy: Estimating the Value of
Increased Product Variety at Online Booksellers. Management Science 49(11), pp. 1580–1596.
Cabral, L., Hortaçsu, A., 2010. The Dynamics of Seller Reputation: Evidence from eBay. Journal of Industrial
Economics 58(1), pp. 54–78.
Cairncross, F., 1997. The Death of Distance: How the Communication Revolution Will Change Our Lives. Harvard
Business School Press.
Campbell, C., Ray, G., Muhanna, W.A., 2005. Search and Collusion in Electronic Markets. Management Science
51(3), pp. 497–507.
Clay, K., Krishnan, R., Wolff, E., 2001. Prices and Price Dispersion on the Web: Evidence from the Online Book
Industry. Journal of Industrial Economics 49(4), pp. 521–539.
Clay, K., Krishnan, R., Wolff, E., 2002. Retail Strategies on the Web: Price and Non-price Competition in the Online
Book Industry. Journal of Industrial Economics 50(3), pp. 351–367.
Dinlersoz, E.M., Pereira, P., 2007. On the Diffusion of Electronic Commerce. International Journal of Industrial
Organization 25(3), pp. 541–574.
Ellison, G., Ellison, S.F., 2009a. Search, Obfuscation, and Price Elasticities on the Internet. Econometrica 77(2), pp.
427–452.
Ellison, G., Ellison, S.F., 2009b. Tax Sensitivity and Home State Preferences in Internet Purchasing. American
Economic Journal: Economic Policy 1(2), pp. 53–71.
Forman, C., Goldfarb, A., Greenstein, S., 2003. Which Industries Use the Internet? Organizing the New Industrial
Economy, vol. 12 (Advances in Applied Microeconomics), (M. Baye, Ed.), Elsevier. pp. 47–72.
Forman, C., Goldfarb, A., Greenstein, S., 2005. How did Location Affect Adoption of the Commercial Internet? Global
Village vs. Urban Leadership. Journal of Urban Economics 58, pp. 389–420.
Garicano, L., Kaplan, S.N., 2001. The Effects of Business-to-Business E-Commerce on Transaction Costs. Journal of
Industrial Economics 49(4), pp. 463–485.
Gentzkow, M., 2007. Valuing New Goods in a Model with Complementarity: Online Newspapers. American Economic
Review 97(3), pp. 713–744.
Ghemawat, P., Baird, B., 2004. Leadership Online (A): Barnes & Noble vs. Amazon.com. Boston, Mass: Harvard
Business School Publishing. (p. 222)
Ghemawat, P., Baird, B., 2006. Leadership Online (B): Barnes & Noble vs. Amazon.com in 2005. Boston, Mass:
Harvard Business School Publishing.
Goldmanis, M., Hortaçsu, A., Syverson, C., Emre, O., 2010. E-commerce and the Market Structure of Retail
Industries. Economic Journal 120(545), pp. 651–682.
Gollop, F.M., Monahan, J.L., 1991. A Generalized Index of Diversification: Trends in U.S. Manufacturing. Review of
Economics and Statistics 73(2), pp. 318–330.
Goolsbee, A., 2000. In a World without Borders: The Impact of Taxes on Internet Commerce. Quarterly Journal of
Economics 115(2), pp. 561–576.
Hong, H., Shum, M., 2006. Using Price Distributions to Estimate Search Costs. The RAND Journal of Economics
37(2), pp. 257–275.
Hortaçsu, A., Martinez-Jerez, F.A., Douglas, J., 2009. The Geography of Trade in Online Transactions: Evidence
from eBay and MercadoLibre. American Economic Journal: Microeconomics 1(1), pp. 53–74.
Jin, G. Z., Kato, A., 2006. Price, Quality, and Reputation: Evidence from an Online Field Experiment. RAND Journal of
Economics 37(4), pp. 983–1005.
Jin, G. Z., Kato, A., 2007. Dividing Online and Offline: A Case Study. Review of Economic Studies 74(3), pp. 981–
1004.
Jones, S.M., 2010. Internet Poised to Become Bigger Force in Retail. Chicago Tribune. Accessed January 9, 2010 at
www.chicagotribune.com/business/chi-tc-biz-outlook-retail-0105-jan06,0,6844636.story.
Koças, C., Bohlmann, J.D., 2008. Segmented Switchers and Retailer Pricing Strategies. Journal of Marketing 72(3),
pp. 124–142.
Kolko, J., 2000. The Death of Cities? The Death of Distance? Evidence from the Geography of Commercial Internet
Usage. The Internet Upheaval (Ingo Vogelsang and Benjamin M. Compaine, eds.), Cambridge: MIT Press. pp. 73–98.
Krishnan, K., Rao, V., 1965. Inventory Control in N Warehouses. Journal of Industrial Engineering 16, pp. 212–215.
Lewis, G., 2009. Asymmetric Information, Adverse Selection and Online Disclosure: The Case of eBay Motors.
American Economic Review 101(4), pp. 1535–1546.
Liu, Y., Gupta, S., Zhang, Z.J., 2006. Note on Self-Restraint as an Online Entry-Deterrence Strategy. Management
Science 52(11), pp. 1799–1809.
Loewenstein, G., 1987. Anticipation and the Valuation of Delayed Consumption. Economic Journal 97(387), pp.
666–684.
Loginova, O., 2009. Real and Virtual Competition. Journal of Industrial Economics 57(2), pp. 319–342.
Mesenbourg, T., 2001. Measuring Electronic Business: Definitions, Underlying Concepts, and Measurement Plans.
〈www.census.gov/epcd/www/ebusines.htm〉
Morton, F.S., Zettelmeyer, F., Silva-Risso, J., 2001. Internet Car Retailing. Journal of Industrial Economics 49(4),
pp. 501–519.
Mukhopadhyay, T., Kekre, S., Kalathur, S., 1995. Business Value of Information Technology: A Study of Electronic
Data Interchange. MIS Quarterly 19(2), pp. 137–156.
Netessine, S., Rudi, N., 2006. Supply Chain Choice on the Internet. Management Science 52(6), pp. 844–864.
Randall, T., Netessine, S., Rudi, N., 2006. An Empirical Examination of the Decision to Invest in Fulfillment
Capabilities: A Study of Internet Retailers. Management Science 52(4), pp. 567–580.
Resnick, P., Zeckhauser, R., Swanson, J., Lockwood, K., 2006. The Value of Reputation on eBay: A Controlled
Experiment. Experimental Economics 9(2), pp. 79–101. (p. 223)
Saloner, G, Spence, A.M., 2002. Creating and Capturing Value—Perspectives and Cases on Electronic Commerce,
Crawfordsville: John Wiley & Sons, Inc.
Sengupta, A., Wiggins, S.N., 2006. Airline Pricing, Price Dispersion and Ticket Characteristics On and Off the
Internet. NET Institute Working Paper, No. 06–07.
Sinai, T., Waldfogel, J., 2004. Geography and the Internet: Is the Internet a Substitute or a Complement for Cities?
Journal of Urban Economics, 56(1), pp. 1–24.
Smith, M.D., Brynjolfsson, E., 2001. Consumer Decision-Making at an Internet Shopbot: Brand Still Matters. Journal of
Industrial Economics 49(4), pp. 541–558.
Stahl, D.O., II, 1989. Oligopolistic Pricing with Sequential Consumer Search. American Economic Review 79(4), pp.
700–712.
US Census Bureau, 2010. 2008 E-commerce Multi-Sector Report. 〈www.census.gov/estats〉.
Viswanathan, S., 2005. Competing across Technology-Differentiated Channels: The Impact of Network Externalities
and Switching Costs. Management Science 51(3), pp. 483–496.
Waldfogel, J., Chen, L., 2006. Does Information Undermine Brand? Information Intermediary Use and Preference for
Branded Web Retailers. Journal of Industrial Economics 54(4), pp. 425–449.
Wang, Z., 2007. Technological Innovation and Market Turbulence: The Dot-Com Experience. Review of Economic
Dynamics 10(1), pp. 78–105.
Notes:
(1.) Ghemawat and Baird (2004, 2006) offer a detailed exploration of the nature of competition between Amazon
and Barnes & Noble.
(2.) The Census Bureau defines e-commerce as “any transaction completed over a computer-mediated network
that involves the transfer of ownership or rights to use goods or services.” A “network” can include open networks
like the internet or proprietary networks that facilitate data exchange among firms. For a review of how the Census
Bureau collects data on e-commerce and the challenges posed in quantifying e-commerce, see Mesenbourg
(2001).
(3.) The Census Bureau defines the B2B and B2C distinction similarly to the sector-level definition here. It is worth
noting, however, that because the Bureau does not generally collect transaction-level information on the identity of
the purchaser, these classifications are only approximate. Also, the wholesale sector includes establishments that
the Census classifies as manufacturing sales branches and offices. These are locations separate from production
facilities through which manufacturers sell their products directly rather than through independent wholesalers.
(4.) The Census Bureau tracks retail trade e-commerce numbers at a higher frequency. As of this writing, the latest
data available are for the fourth quarter of 2011, when e-commerce-related sales accounted for a seasonally adjusted 4.8 percent of total retail sales.
(5.) The R&D data is aggregated across some of the 3-digit industries, so when comparing online sales shares to
R&D, we aggregate the sales channel data to this level as well. This leaves us 17 industries to compare.
Additionally, the product differentiation index (taken from Gollop and Monahan 1991) is compiled using the older
SIC system, so we can only match 14 industries in this case.
(6.) The two-digit NAICS industry with the highest enhancement category investment rate (28 percent) was
Management of Companies and Enterprises (NAICS 55). The lowest adoption rate (6.2 percent) was in Educational
Services (NAICS 61).
(7.) Note that when comparing the magnitudes of the coefficient estimates across columns in Table 8.3, one should
be mindful of the average probability of purchase in the sample, pbar, displayed at the bottom of the table. Because
the average probability of purchasing one of the financial products online (9.6 percent) is roughly one-fifth the
probability that any product is purchased (50.9 percent), the estimated marginal effects in the financial products’
case are five times as large in relative terms. Thus the 1.5-percentage-point marginal effect for Black respondents and
financial products in column 2 corresponds to a roughly 7.5-percentage-point marginal effect in column 1.
(8.) Two products saw substantial declines in online purchase likelihoods: mortgages and small appliances. The
former is almost surely driven by the decline in demand for mortgages through any channel. We are at a loss to
explain the decline in small appliance purchases.
(9.) An interesting case where the Internet brought about increased intermediation is in auto sales. There, at least
in the United States, legal restrictions require that all sales go through a physical dealer who cannot be owned by a
manufacturer. Given these restrictions, online technologies in the industry were devoted to creating referral
services like Autobytel.com. Consumers shop for and select their desired vehicle on the referral service's website,
and then the service finds a dealer with that car and has the dealer contact the consumer with a price quote
(Saloner and Spence, 2002).
(10.) http://www.census.gov/mtis/www/data/text/mtis-ratios.txt, retrieved 1/26/11.
(11.) The practice has been adopted by many but not all online-only retailers. Netessine and Rudi (2006) report
that 31 percent of pure-play Internet retailers use drop-shipping as their primary method of filling orders.
(12.) Traditional retailers have used other mechanisms to serve a similar function (though likely at a higher cost).
For example, retailers with multiple stores often geographically pool inventory risk by cross-shipping orders from a
store with an item in inventory to one that takes a customer order but is stocked out (e.g., Krishnan and Rao 1965).
(13.) As noted above, the same authors find that conditional on local content, people from smaller cities are more
likely to connect to the Internet. Interestingly, in their data, these two forces are just offset so that use of the
Internet isn’t strongly correlated with city size.
(14.) We have also estimated specifications that control for the fraction of the local population that uses the
Internet for any purpose. (This variable is similarly constructed from the Technographics survey.) This did not
substantively impact the nature of the results described below, except to make the estimated positive effect of
online shopping on online retailers larger.
(15.) The great majority of states have a sales tax; only Alaska, Delaware, Montana, New Hampshire, and Oregon
do not. Whether a firm has nexus within a state is not always obvious. In the Supreme Court decision Quill vs.
North Dakota (1992), it was established that online merchants without a substantial physical presence in the state
would not have to collect sales tax in that state. Later, the 1998 Internet Tax Nondiscrimination Act clarified that a
web presence in a state does not constitute nexus.
(16.) Asymmetric information can affect prices as well, though the direction of this effect is ambiguous. Quantities,
however, should decline if information becomes more asymmetric.
(17.) The aggregate impact observed among travel agencies resulted from the nature of the institutional shifts in
industry revenues that e-commerce caused. Responding to a shift in customers toward buying tickets online,
airlines cut the ticket commissions they paid to travel agents, which had accounted for 60 percent of industry revenue in
1995, to zero by 2002. These commission cuts were across the board, and did not depend on the propensity
of travelers to buy tickets online in the agents’ local markets.
(18.) County Business Patterns do not break out actual total employment by size category, so we impute it by
multiplying the number of industry establishments in an employment category by the midpoint of that category's
lower and upper bounds. For the largest (unbounded) size categories, we estimated travel agency offices and
bookstores with 100 or more employees had an average of 125 employees; auto dealers with more than 250
employees had 300 employees. Imputations were not necessary in the case of the total nonfarm business sector,
as the CBP do contain actual employment by size category in that case.
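A minimal sketch of the midpoint imputation described in this note. The size classes and establishment counts below are hypothetical and do not reproduce the County Business Patterns layout; only the assumed mean for the open-ended class (125 employees for travel agencies and bookstores) follows the note.

```python
# Sketch of the midpoint imputation described in footnote 18; all counts invented.
size_class_mean = {"1-4": 2.5, "5-9": 7, "10-19": 14.5, "20-49": 34.5, "50-99": 74.5, "100+": 125}
establishments = {"1-4": 800, "5-9": 300, "10-19": 120, "20-49": 60, "50-99": 15, "100+": 5}

employment = {k: establishments[k] * size_class_mean[k] for k in establishments}
total = sum(employment.values())
share_1_to_9 = (employment["1-4"] + employment["5-9"]) / total   # share of the 1-9 class in Figure 8.1
print(round(share_1_to_9, 3))
```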
Ethan Lieber
Ethan Lieber is a PhD student in the Economics Department at the University of Chicago.
Chad Syverson
Chad Syverson is Professor of Economics at the Booth School of Business, University of Chicago.
Comparison Sites
Oxford Handbooks Online
Comparison Sites
José-Luis Moraga-González and Matthijs R. Wildenbeest
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Online Publication Date: Nov 2012
Subject: Economics and Finance, Economic Development
DOI: 10.1093/oxfordhb/9780195397840.013.0009
Abstract and Keywords
This article reviews the work on comparison sites, discussing how comparison sites operate and their main economic roles, and presents a model of a comparison site. The important issue of price discrimination across channels is explored. The comparison site becomes a marketplace that is more attractive to buyers than the search market, and the market clears when transactions between firms and consumers take place. The ability of firms to price discriminate eliminates the surplus consumers obtain by opting out of the price comparison site, which ultimately destroys the retailers' profits. The analysis of the simple model of a comparison site shows that product differentiation and price discrimination play a critical role. Click-through data can assist in the estimation of structural models of demand.
Keywords: comparison sites, price discrimination, marketplace, firms, consumers, product differentiation, click-through data
1. Introduction
Not so long ago individuals used atlases, books, magazines, newspapers, and encyclopedias to find content. To locate
businesses and their products, it was customary to use yellow pages, directories, newspapers, and advertisements. Family
and friends were also a relevant source of information. Nowadays things are quite different: many individuals interested in
content, a good, or a service usually begin with a search on the Internet.
Through the Internet individuals can easily access an immense amount of information. Handling such a vast amount of
information has become a complex task. Internet browsers are a first tool that eases the navigation experience: with a
browser, users move easily across documents and reach a large amount of information in just a few mouse-clicks.
Search technologies are a second tool that facilitates the browsing experience. They are fundamental to navigating the Internet
because they help users locate and aggregate content closely related to what they are interested in.
Search engines such as Google, Yahoo, and Bing constitute one type of search technology. These search tools are often
offered at no cost to users, and the placement of advertisements is the most important way through which these search
engines are financed. By taking advantage of the keywords the user provides while searching for information, search engines
can pick and deliver targeted advertisements to the most interested audience. In this way, search engines become a
relatively precise channel through which producers and retailers can reach consumers. This raises the search engines’ value
and their scope (p. 225) to extract rents from producers or retailers. The study of the business model of search engines
constitutes a fascinating research area in economics (see e.g. Chen and He, 2011; Athey and Ellison, 2011; Spiegler and
Eliaz, 2011; and Gomes, 2011).
Comparison sites, or shopping robots (shopbots) such as PriceGrabber.com, Shopper.com, Google Product Search, and Bing
Shopping are a second type of search technology. These sites help users find goods or services that are sold online. For
multiple online vendors, shopbots provide a significant amount of information, including the products they sell, the prices they
charge, indicators about the quality of their services, their delivery costs as well as their payment methods. By using
shopbots, consumers can easily compare a large number of alternatives available in the market and ultimately choose the
most satisfactory one. Because they collate information from various offers relatively quickly, shopbots reduce consumer
search costs considerably. Business models vary across comparison sites. Most shopbots do not charge consumers for
access to their sites and therefore the bulk of their profits is obtained via commercial relationships with the shops they list.
They get paid via subscription fees, click-through fees, or commission fees. Some comparison sites list sellers at no cost and
get their revenue from sponsored links or sponsored ads. Finally, some charge consumers for access to their information
while firms pay no fees.
The emergence of Internet shopbots in the marketplace raises questions not only about the competitiveness and efficiency of
product markets but also about the most profitable business model. Do all types of firms have incentives to be listed in
comparison sites? Why are the prices listed in these sites dispersed, even if the advertised products are seemingly
homogeneous? Do comparison sites enhance social welfare? How much should each side of the market pay for the services
offered by comparison sites? Addressing these questions is the main focus of this chapter. We do this within a framework
where a comparison site designs its fee structure to attract (possibly vertically and horizontally differentiated) online retailers,
on the one hand, and consumers, on the other hand. While analyzing our model, we describe the received wisdom in some
detail.
The study of search engines other than comparison sites raises other interesting economic questions. The economics of
online advertising is described by Anderson (2012) in chapter 14 of this volume. Of particular importance is the management
of sponsored search advertisements. Search engines cannot limit themselves to delivering consumer access to the advertiser
placing the highest bid; they must also carefully manage the quality of the ads, or they risk losing the ability to obtain
surplus from advertisers.
The rest of this chapter is organized as follows. In Section 2, we describe how comparison sites operate, as well as their main
economic roles. We also summarize the main results obtained in the small theoretical literature in economics, and discuss
empirical research on the topic. In Section 3, we present a model of a comparison site. The model is then applied to
comparison sites dealing with homogeneous products. Later we discuss the role of product differentiation, both horizontal and
(p. 226) vertical. We also explain the important issue of price discrimination across channels. We conclude the chapter with
a summary of theoretical and empirical considerations and put forward some ideas for further research.
2. Comparison Sites
Comparison sites or shopbots are electronic intermediaries that assist buyers when they search for product and price
information on the Internet. Shopbots have been operating on the Internet since the late 1990s. Compared to other more
traditional intermediary institutions, most shopbots do not sell items themselves—instead they gather and aggregate price,
product, and other relevant information from third-party sellers and present it to the consumers in an accessible way. By
doing this, consumers can easily compare offers. Shopbots also display links to the vendors’ websites. These links allow a
buyer to quickly navigate to the site of the seller that offers her the best deal.
Shopbots operate according to several different business models. The most common is that users can access the comparison
site for free, while sellers have to pay a fee. Initially, most comparison sites charged firms a flat fee for the right to be listed.
More recently, this fee usually takes the form of a cost-per-click and is paid every time a consumer is referred to the seller's
website from the comparison site. Most traditional shopbots, like for instance PriceGrabber.com and Shopping.com, operate in
this way. Fees typically depend on product category—current rates at PriceGrabber range from $0.25 per click for clothing to
$1.05 per click for plasma televisions. Alternatively, the fee can be based on the execution of a transaction. This is the case
of Pricefight.com, which operates according to a cost-per-acquisition model. This model implies that sellers only pay a fee if a
consumer buys the product. Other fees may exist for additional services. For example, sellers are often given the possibility to
obtain priority positioning in the list after paying an extra fee.
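To see how the per-click and per-acquisition fee models compare from a seller's perspective, a back-of-the-envelope calculation is useful: the expected fee per referred visitor is the same under the two models when the cost per click equals the conversion rate times the cost per acquisition. In the sketch below, only the $1.05-per-click figure comes from the text; the conversion rate and per-sale fee are invented.

```python
# Back-of-the-envelope comparison of cost-per-click and cost-per-acquisition fees.
cost_per_click = 1.05        # PriceGrabber's quoted rate for plasma televisions (from the text)
conversion_rate = 0.03       # hypothetical: 3% of referred visitors end up buying
cost_per_acquisition = 35.0  # hypothetical per-sale fee

expected_fee_per_visitor_cpc = cost_per_click
expected_fee_per_visitor_cpa = conversion_rate * cost_per_acquisition
breakeven_cpa = cost_per_click / conversion_rate   # per-sale fee at which the seller is indifferent
print(expected_fee_per_visitor_cpc, expected_fee_per_visitor_cpa, breakeven_cpa)
# 1.05 vs 1.05 at these numbers; with a lower conversion rate, the per-click model costs the seller more per sale.
```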
A second business model consists of offering product and price comparison services for free to both sellers and buyers and
thus relies on advertising as a source of revenue. Both Google Product Search and Microsoft's Bing Shopping are examples of
comparison sites that have adopted this type of business model. Any seller can list products in these websites by uploading
and maintaining a product data feed containing information about the product price, availability, shipping costs, and so on.
A third, although less common, model is to have consumers pay a membership fee to access the comparison site, whereas
sellers are listed for free. AngiesList.com for instance aggregates consumer reviews about local service companies, which
can be accessed by consumers for an annual membership fee between $10 and $50, depending on where the consumer
lives.
(p. 227)
2.1. Early Intermediation Literature
Shopbots are platforms through which buyers and sellers can establish contact with one another. In this sense, comparison
sites essentially play an intermediation role. As a result, we are first led to the literature on intermediation, which has been a
topic of interest in economics in general, and in finance in particular. Spulber (1999), in his study of the economic role and
relevance of intermediaries, describes various value-adding roles played by intermediaries. The following aspects are
prominent: buyer and seller aggregation, lowering of search and matching costs, and facilitation of pricing and clearing
services.
In a market where buyers and sellers meet and negotiate over the terms of trade, a number of business risks and costs exist.
Reducing such risks and costs is a key role played by intermediaries. In terms of forgone welfare opportunities, not finding a
suitable counterparty is in itself the most costly hazard trading partners face; sometimes a trading partner is found, but at a cost,
and either rationing occurs or the parties fail to reach a satisfactory agreement, in which case similar welfare losses
are realized. In all these situations, an intermediary can enter the market and reduce the inefficiencies. By announcing prices
publicly, and by committing to serve orders immediately, intermediaries reduce significantly the costs of transacting.
Intermediaries “make the market” by choosing input and output prices to maximize their profits. Market makers trade on their
own account and so they are ready to buy and sell in the market in which they operate. Provision of immediacy, which is a
key aspect emphasized in the seminal articles of Demsetz (1968) and Rubinstein and Wolinsky (1987), distinguishes market
makers from other intermediating agents in the value chain. An example of a market maker is a supermarket chain with
sufficient upstream bargaining power so as to have an influence on bid and ask prices. In finance, perhaps the most common
examples of market-makers are stock exchange specialists (such as commercial banks, investment financial institutions, and
brokers). Market makers are also the subject of study in Gehrig (1993), Yavas (1996), Stahl (1988), and Spulber (1999).
Watanabe (2010) extends the analysis by making the intermediation institution endogenous. Rust and Hall (2003) distinguish
between market-makers that post take-it-or-leave-it prices and middlemen who operate in the over-the-counter market at
prices that can be negotiated. They study the conditions under which brokers and market-makers can coexist, and study the
welfare properties of intermediated equilibria.
Price comparison sites are similar to traditional intermediaries in that they “facilitate” trade between online shoppers and
retailers. However, what distinguishes a comparison site from a traditional intermediary is that the latter typically buys goods
or services from upstream producers or sellers and resells them to consumers. Shopbots do not trade goods, but add value
by aggregating information. In that sense, shopbots are more similar to employment agencies and realtors, who also serve the
purpose of establishing a bridge between the supply and the demand sides of the market.
(p. 228) How can a price comparison site enter the market and survive in the long run? Do comparison sites increase the
competitiveness of product markets? Do they enhance market efficiency? This chapter revolves around these three
questions. Whether a comparison site can stay in business in the long run is not, a priori, clear. The problem is that, given that
retailers and consumers can encounter each other outside the platform and conduct transactions, the search market
constitutes a feasible outside option for the agents. In fact, a comparison site can only stay in business if it chooses its
intermediation fees carefully enough to out-compete the search market, or at least to make intermediated search as attractive
as the search market. The question is then whether a comparison site can indeed create value for retailers and consumers.
The first paper studying these questions is Yavas (1994). Yavas studies the match-making role of a monopolistic intermediary
in a competitive environment. He shows that the intermediary can obtain a profit by attracting high-valuation sellers and low-valuation buyers; the rest of the agents trade in the decentralized market. Interestingly, relative to the market without an
intermediary, buyers and sellers lower their search intensity, which can ultimately decrease matching rates and cause a
social welfare loss. Though Yavas’ analysis is compelling, most markets are populated by firms that hold a significant amount
of market power. Since market power drives a wedge between the market outcome and the social optimum, it cannot be
ignored when modeling the interaction between comparison sites, retailers, and consumers in real-world markets. Our work
especially adds in this direction.
2.2. Our Model and Results, and Their Relation to the Literature
Our model, described in detail in Section 3, aims at understanding how comparison sites can overcome “local” market power
and emerge in the marketplace. In addition, we study whether comparison sites enhance social welfare. “Local” market power
can stem from geographical considerations, from loyalty, or from behavioral assumptions such as random shopping or default bias (Spiegler, 2011).
Our model is inspired by Baye and Morgan's (2001) seminal paper. Baye and Morgan had geographical considerations in
mind when they developed their model, so in their case "local" market power arises from geographical market segmentation.
From a general point of view, however, the particular source of “local” market power is not very important. We will assume
buyers opting out of the comparison site will buy at random, and therefore this will be the main source of “local” market
power. In essence, the model we study is as follows. Suppose that in a market initially characterized by some sort of
segmentation, a price comparison site is opened up. Suppose that the comparison site initially succeeds at attracting some of
the buyers from the various consumer segments. The comparison site creates value for firms (p. 229) since a firm that
advertises its product on the comparison site can access consumers “located” in other segmented markets. This is
reinforcing in that consumers, by registering with the shopbot, can observe a number of product offerings from the advertising
firms in addition to the usual one. We study the extent to which the market becomes centralized. We also compare the levels
of welfare attained with and without a comparison site. It turns out that product differentiation, both vertical and horizontal, and
the possibility to price discriminate between the centralized and the decentralized marketplaces play an important role. We
describe next the results we obtain and how they connect with earlier work.
We first study the case in which retailers sell homogeneous products. This is the case examined in Baye and Morgan (2001).
We show that a crucial issue is whether online retailers can practice price discrimination across marketplaces or not. If price
discrimination is not possible, as in Baye and Morgan, then the platform's manager has an incentive to raise firms’ participation
fees above zero so as to induce less than complete participation of the retailers. This results in an equilibrium with price
dispersion, which enhances the gains users obtain from registering with the shopbot. Although establishing a price
comparison site is welfare improving, the equilibrium is not efficient because prices are above marginal costs and the
comparison site only attracts a share of the transactions.
We show that the market outcome is quite different when retailers can price discriminate across marketplaces, that is, when
they are allowed to advertise on the price comparison site a price different from the one they charge in their websites. In that
case, the utility consumers derive from buying randomly, which is the outside option of consumers, is significantly reduced
and the price comparison site can choose its tariffs such that both consumers and firms fully participate, while still extracting
all rents from the market. This means that with price discrimination the market allocation is efficient and all trade is centralized.
We then move to study the case in which retailers sell horizontally differentiated products. This case was examined by
Galeotti and Moraga-González (2009). We employ the random utility framework that gives rise to logit demands. In such a
model, we show that the price comparison site can choose fees that fully internalize the network externalities present in the
market. These fees attract all firms to the platform, which maximizes the quality of the matches consumers obtain and thereby
the overall economic rents. The market allocation is not fully efficient because product sellers have market power. However,
the monopolist intermediary does not introduce distortions over and above those arising from the market power of the
differentiated product sellers. The fact that the comparison site attracts all retailers and buyers to the platform does not
depend on whether the retailers can price discriminate across marketplaces or not. This result stems from the aggregation
role played by the (product and price) comparison site. By luring firms onto the platform, the comparison site not only fosters price competition, so that consumers benefit from lower prices, but also offers consumers more choice. As an aggregator of variety, the comparison site becomes a more attractive marketplace for buyers than the search market.
(p. 230) In our final model, we allow for vertical product differentiation in addition to horizontal product differentiation. The
main result we obtain is that the nature of the pricing policy of the comparison site can change significantly and produce an
inefficient outcome. Note that when quality differences across retailers are absent, the comparison site obtains the bulk of its
profits from the buyers. By lowering the fees charged to the firms, more value is created at the platform for consumers and
this value is in turn extracted by the comparison site via consumer fees. We show that when quality differences are large, the
comparison site may find it profitable to do otherwise by charging firm fees sufficiently high so as to prevent low-quality
producers from participating in the comparison site. This raises the rents of the high-quality sellers, and at the same time
creates value for consumers. These rents are in turn extracted by the comparison site via firm and consumer participation
fees. In this equilibrium, the intermediary produces a market allocation that is inefficient.
2.3. Empirical Literature
Empirical studies centered around shopbots have focused on distinct issues. A number of these studies look at whether
predictions derived from the theoretical comparison site models are in line with the data. Using micro data on individual
insurance policies, Brown and Goolsbee (2002) provide empirical evidence that increased usage of comparison sites
significantly reduced the price of term life insurance in the 1990s, while prices did not fall with increased Internet usage in the
period before these comparison sites began. Brynjolfsson and Smith (2001) use click-through data to analyze behavior of
consumers searching for books on Dealtime.com. They find that shopbot consumers put substantial brand value on the
biggest three retailers (Amazon, Barnes and Noble, and Borders), which suggests it is indeed important to model product
differentiation. Baye, Morgan, and Scholten (2004) analyze more than four million price observations from Shopper.com and
find that price dispersion is quite persistent in spite of the increased usage of comparison sites. Baye, Gatti, Kattuman, and
Morgan (2006) look at how the introduction of the Euro affected prices and price dispersion using data from Kelkoo, a large
comparison site in the European Union. They find price patterns broadly consistent with predictions from comparison site
models.
More recently, Moraga-González and Wildenbeest (2008) estimate a model of search using price data for memory chips
obtained from the comparison site MySimon.com. Their estimates can be interpreted so as to suggest that consumer
participation rates are relatively low – between 4 and 13 percent of the consumers use the search engine. They find
significant price dispersion. An, Baye, Hu, Morgan, and Shum (2010) structurally estimate a model of a comparison site using
British data from Kelkoo and use the estimates to simulate the competitive effects of horizontal mergers.
(p. 231) Finally, some papers use data from comparison sites to estimate demand models. Ellison and Ellison (2009) study
competition between sellers in a market in which the comparison site Pricewatch.com played a dominant role and, using sales
data for one of the retailers, find that demand is tremendously price sensitive for the lowest-quality memory modules. In
addition Ellison and Ellison find evidence that sellers are using obfuscation strategies, with less elastic demand for higher
quality items as a result. Koulayev (2010) estimates demand and search costs in a discrete choice product differentiation
model using click-through data for hotel searches, and finds that search frictions have a significant impact on demand
elasticity estimates.
3. A Model of a Comparison Site
We study a model of a comparison site in which subscribing consumers can compare the prices charged by the different
advertising retailers and the characteristics of their products. A comparison site therefore has the features of a two-sided
market. Two-sided markets are characterized by the existence of two groups of agents that derive gains from conducting
transactions with one another, and the existence of intermediaries that facilitate these transactions. Exhibitions, employment
agencies, videogame platforms, Internet portals, dating agencies, magazines, newspapers and journals are other examples of
two-sided markets (see Armstrong, 2006; Caillaud and Jullien, 2003; Evans, 2003; and Rochet and Tirole, 2003).1
In our model the comparison site is controlled by a monopolist. One group of users consists of firms selling products and the
other group of users is made of consumers. From the point of view of a firm, participating in the comparison site is a way to
exhibit its product and post its price to a potentially larger audience. An individual firm that does not advertise on the
comparison site limits itself to selling to those customers who remain outside the platform. Likewise, for a consumer, visiting
the platform is a way to learn the characteristics and the prices of all the products of the participating firms. A consumer who
does not register with the comparison site can only trade outside the platform. We will assume consumers who opt out of the
platform randomly visit a firm.2
The monopoly platform sets participation fees for consumers and firms to attract business to the platform. Traditionally,
comparison sites have used fixed advertising fees, or flat fees, paid by firms that participate. For the moment we shall assume
other fees, like per-click or per-transaction fees, are equal to zero.3 Let a denote the (fixed) fee the platform charges firms for
participation. While the platform can charge different fees to firms and consumers, we assume the platform cannot price
discriminate among firms by charging them different participation fees. Likewise, let s denote the fee charged to consumers for
registering with the platform. For simplicity, assume the platform incurs no cost.
(p. 232) On the supply side of the market, there are two possibly vertically and horizontally differentiated retailers competing
in prices. Let us normalize their unit production cost to zero. A retailer may decide to advertise its product and price on the
platform (A) or not to advertise it at all (NA). Advertising may involve a cost k associated with feeding product and
price information into the comparison site. For the moment, we will ignore this cost. Let Ei = {A, NA} be the set of advertising
strategies available to a firm i. A firm i's participation strategy is then a probability function over the set Ei. We refer to αi as
the probability with which a firm i chooses A, while 1 − αi denotes the probability with which such a firm chooses NA. A firm i's
pricing strategy on the platform is a price (distribution) Fi. The firm may charge a different price (distribution), denoted Fio, to
the consumers who show up at the shop directly.4 A strategy for firm i is thus denoted by σi = {αi, Fi, Fio}, i = 1, 2. The
strategies of both firms are denoted by σ and the (expected) payoff to a firm i given the strategy profile σ is denoted πi(σ).
There is a unit mass of consumers. Consumers can either pick a firm at random and buy there, or else subscribe to the
platform, view the offerings of the advertising firms and buy the most attractive one. We assume that consumers are
distributed uniformly across firms so each firm receives half of the non-subscribing consumers. Those buyers who choose not
to register with the platform visit a retailer at random, say i, and buy there at the price pio. If they participate in the centralized
market, they see all the products available and choose the one that matches them best. To keep things simple, we assume
that a consumer who registers with the platform cannot trade outside the platform within the current trading period, even if she
finds no suitable product in the platform.
Consumer m's willingness to pay for the good sold by firm i is uim = δi + εim.
The parameter εim is assumed to be independently and identically double exponentially distributed across consumers and
products with zero mean and unit variance and can be interpreted as a match parameter that measures the quality of the
match between consumer m and product i. We assume there is an outside option with utility u0 = ε0m. Let G and g denote the
cumulative and probability distribution functions of εim, respectively.5 A buyer demands a maximum of one unit. To allow for
vertical product differentiation, let Δ ≡ δ1 − δ2 > 0 be the (ex-ante) quality differential between the two products. Ex-ante,
consumers do not know which firm sells which quality; like match values, the quality of a particular firm is only known after
consumers visit such firm. Buyers may decide to register with the platform (S) or not at all (NS). The set of consumers’ pure
strategies is denoted R = {S, NS}. A consumer's mixed strategy is a probability function over the set R. We refer to μ ∈ [0,1]
as the probability with which a consumer registers with the platform. Given all other agents’ strategies, u(μ) denotes the
(expected) utility of a consumer who subscribes with probability μ.
(p. 233) The timing of moves is the following. In the first stage, the comparison site chooses the participation fees. In the
second stage, firms simultaneously decide on their participation and pricing decisions, while consumers decide whether to
register with the platform or not. Firms and consumers that do not enter the platform can only conduct transactions when they
match in the decentralized market; likewise, consumers who register with the comparison site can only conduct transactions
there. The market clears when transactions between firms and consumers take place. We study subgame perfect equilibria.6
3.1. Homogeneous Products
It is convenient to start by assuming that firms sell homogeneous products. Therefore, we assume that εim = 0 for all i,m,
including the outside option, and that δi = δj. As a result, uim = δ for all i and m. This is the case analyzed in the seminal
contribution of Baye and Morgan (2001).7 In what follows, we will only sketch the procedure to derive a SPE. For details, we
refer the reader to the original contribution of Baye and Morgan.
Let us proceed backwards and suppose that the participation fees a and s are such that firms and consumers are indifferent
between participating in the price comparison site or not. Recall that μ denotes the fraction of participating consumers and α
the probability with which a firm advertises its price on the platform. To allow for mixed pricing strategies, let F(p) denote the distribution of the advertised price.
Consider a firm that does not advertise at the price comparison site. This firm will only sell to a fraction (1 − μ)/2 of non-participating consumers and therefore it is optimal for this firm to charge the monopoly price δ. As a result, a firm that does not
advertise at the price comparison site obtains a profit πNA = (1 − μ)δ/2.
Consider now a firm that decides to charge a price p and advertise it at the price comparison site. This firm will sell to a
fraction (1 − μ)/2 of non-participating consumers as well as to the participating consumers if the rival either does not
advertise or advertises a higher price. Therefore, the profit this firm will obtain is πA(p) = [(1 − μ)/2 + μ(1 − αF(p))]p − a.8
It is easy to see that for advertising fees 0 ≤ a < μδ an equilibrium in pure advertising strategies does not exist. In fact, if the
rival firm did not advertise for sure, then firm i would obtain a profit equal to (1 − μ)δ/2 if it did not advertise either, while if it
did advertise a price just below δ, this firm would obtain a profit equal to ((1 − μ)/2 + μ)δ − a. Likewise, an equilibrium where
firms advertise with (p. 234) probability 1 does not exist either if a > 0. If the two firms advertised with probability 1, the price
comparison site would resemble a standard Bertrand market so both firms would charge a price equal to the marginal cost and
then firms would not obtain sufficient profits to cover the advertising fees. Therefore an equilibrium must have α ∈ (0, 1).
To solve for equilibrium, we impose the following indifference conditions: πNA = πA(δ) = πA(p) for all p in the support of F.
These conditions tell us that (i) a firm must be indifferent between advertising and not advertising (first equality) and also that
(ii) a firm that advertises the monopoly price on the platform gets the same profits as a firm that advertises any other price in
the support of the price distribution (second equality). Setting the condition πNA = πA(δ) and solving for α gives the equilibrium advertising policy of the firms: α* = 1 − a/(μδ). Using the condition πA(δ) = πA(p) and solving for F(p) gives the pricing policy of the firms: F*(p) = [(1 + μ)(p − δ)/2 + μα*δ]/(μα*p) on the support [p̱, δ].
The lower bound of the equilibrium price distribution is found by setting F*(p̱) = 0 and solving for p̱, which gives p̱ = δ − 2α*δμ/(1 + μ).
We now turn to the users' side of the market. Users take firms' behavior as given, so they believe each firm advertises with
probability α* a price drawn from the support of F*. A user who registers with the platform encounters two prices in the price
comparison site with probability (α*)², in which case she picks the lowest one; with probability 2α*(1 − α*) there is just one price
advertised on the platform. The expected utility to a user who registers with the price comparison site is then
u(1) = (α*)²(δ − E[min{p1, p2}]) + 2α*(1 − α*)(δ − E[p]) − s,
where E[p] and E[min{p1, p2}] denote the expected advertised price and the expected minimum of two advertised prices under F*. A user who does not register with
the price comparison site always buys from one of the firms at random, so her utility will be equal to zero if the chosen firm
does not advertise, while it will be equal to δ − E[p] otherwise. Therefore, u(0) = α*(δ − E[p]).
(p. 235) As a result, for given advertising and subscription fees, we may have two types of market equilibria. One is when
users participate surely. In that case, users must derive a strictly positive utility from subscribing. The other is when users’
participation rate is less than one. In that case, they must be indifferent between participating and not participating, so μ must
solve the equation u(1) = u(0).
Whether one equilibrium prevails over the other is a matter of expectations: if consumers expect firms to advertise frequently on the platform, then registering is optimal for them, and vice versa.
We now fold the game backwards and consider the stage of the game where the manager of the price comparison site
chooses the pair of advertising and subscription fees {a, s} to maximize its profits. The profits of the intermediary are: (1) Π = 2αa + μs.
We now argue that a SPE with partial firm and user participation cannot be sustained. Suppose that at the equilibrium fees a
and s, we have a continuation game equilibrium with α*, μ* < 1. Then, the participation rate of the users μ* is given by the
solution to u(1) = u(0) and the participation rate of the firms by α* = 1 − a/(δμ*). Table 9.1 shows that the intermediary's profits
increase monotonically in s. The idea is that in this continuation equilibrium the elasticity of user demand for participation in
the price comparison site is strictly positive. In the putative equilibrium u(1) = u(0). Keeping everything else fixed, an increase
in s makes utility (p. 236) outside the platform higher than inside the platform. To restore equilibrium firms must advertise
more frequently, which can only occur if the fraction of subscribing consumers increases. So an increase in s is accompanied
by an increase in the participation rates of both retailers and users, as can be seen in the table. This clearly shows that in
SPE it must be the case that μ = 1.
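To fix ideas, the continuation equilibrium for given fees can be computed numerically. The following Python sketch is our own illustration rather than part of the original analysis: it uses the expressions for α*, F*, u(1), and u(0) given above, solves u(1) = u(0) for μ*, and reproduces (approximately) the first row of Table 9.1. All function names and numerical choices are ours.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

DELTA = 1.0  # quality parameter, as in the tables

def dispersion(a, mu, delta=DELTA):
    # Continuation equilibrium for a given firm fee a and consumer participation rate mu:
    # advertising probability, expected advertised price, expected minimum advertised price.
    alpha = 1.0 - a / (mu * delta)                           # alpha* = 1 - a/(mu*delta)
    p_low = delta - 2.0 * alpha * delta * mu / (1.0 + mu)    # lower bound of the price support
    F = lambda p: ((1.0 + mu) * (p - delta) / 2.0 + mu * alpha * delta) / (mu * alpha * p)
    Ep = p_low + quad(lambda p: 1.0 - F(p), p_low, delta)[0]            # E[p]
    Emin = p_low + quad(lambda p: (1.0 - F(p)) ** 2, p_low, delta)[0]   # E[min{p1, p2}]
    return alpha, Ep, Emin

def registration_gap(mu, a, s, delta=DELTA):
    # u(1) - u(0): a consumer's net gain from registering with the comparison site.
    alpha, Ep, Emin = dispersion(a, mu, delta)
    u1 = alpha**2 * (delta - Emin) + 2.0 * alpha * (1.0 - alpha) * (delta - Ep) - s
    u0 = alpha * (delta - Ep)
    return u1 - u0

# Reproduce the first row of Table 9.1 (a = 0.5, s = 0.01).
a, s = 0.5, 0.01
mu_star = brentq(lambda m: registration_gap(m, a, s), a / DELTA + 1e-6, 1.0)
alpha_star, Ep, Emin = dispersion(a, mu_star)
site_profit = 2.0 * alpha_star * a + mu_star * s
print(round(mu_star, 3), round(alpha_star, 3), round(Ep, 3), round(Emin, 3), round(site_profit, 3))
# expected output: approximately 0.602 0.170 0.933 0.912 0.176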
Table 9.1 Homogeneous Products: Comparison Site's Profits Increase in Consumer Fee s
a      s      E[p]   E[min{p1,p2}]  α      μ      πi     u*     Π
0.500  0.010  0.933  0.912          0.170  0.602  0.199  0.011  0.176
0.500  0.020  0.899  0.868          0.238  0.656  0.172  0.024  0.251
0.500  0.030  0.870  0.831          0.289  0.703  0.148  0.038  0.310
0.550  0.030  0.867  0.827          0.281  0.765  0.118  0.037  0.332
0.600  0.030  0.864  0.824          0.274  0.826  0.087  0.037  0.353
0.650  0.030  0.862  0.821          0.268  0.887  0.056  0.037  0.374
0.650  0.040  0.834  0.785          0.307  0.938  0.031  0.051  0.437
0.650  0.050  0.807  0.751          0.342  0.988  0.006  0.066  0.494
0.650  0.053  0.800  0.743          0.350  1.000  0.000  0.070  0.508
Notes: The quality parameter δ is set to 1.
Given this result, to maximize the profits of the price comparison site we need to solve the problem max{a,s} Π = 2α*a + s, subject to u(1) ≥ u(0) evaluated at μ = 1, with α* = 1 − a/δ.
Table 9.2 shows the solution of this problem for δ = 1. The profits of the intermediary are maximized when a = 0.426 and s =
0.119 (in bold) so in SPE user participation is maximized but firm participation is not. The intuition for this result is clear. If the
firms did participate surely, then the price advertised on the platform would be equal to the marginal cost and the firms would
not make any money. In such a case the bulk of the profits of the price comparison site would have to be extracted from the
consumers. However, in the absence of price dispersion no consumer would be willing to pay anything to register with the
platform. To summarize, the key insights from Baye and Morgan (2001) are that the profits of the price comparison site are
maximized when consumers participate with probability one but firms do not, in this way creating price dispersion and so
subscription value for consumers. As we have mentioned above, price dispersion is also ubiquitous on the Internet (Baye,
Morgan and Scholten, 2004) and, moreover, it is difficult to reject the null hypothesis that advertised prices on comparison
sites are random (Moraga-González and Wildenbeest, 2008).
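The solution of this problem can be approximated by a simple grid search. The sketch below is again our own illustration, not the authors' code: it imposes μ = 1, sets the consumer fee s so that u(1) = u(0) exactly, and searches over a; the optimum comes out close to the figures reported in Table 9.2.

import numpy as np
from scipy.integrate import quad

DELTA = 1.0

def profit_given_a(a, delta=DELTA):
    # With full consumer participation (mu = 1): alpha* = 1 - a/delta, prices are dispersed
    # on [delta(1 - alpha), delta], and the consumer fee s extracts the registration surplus.
    alpha = 1.0 - a / delta
    p_low = delta * (1.0 - alpha)                      # lower bound with mu = 1
    F = lambda p: ((p - delta) + alpha * delta) / (alpha * p)
    Ep = p_low + quad(lambda p: 1.0 - F(p), p_low, delta)[0]
    Emin = p_low + quad(lambda p: (1.0 - F(p)) ** 2, p_low, delta)[0]
    u1_gross = alpha**2 * (delta - Emin) + 2.0 * alpha * (1.0 - alpha) * (delta - Ep)
    u0 = alpha * (delta - Ep)
    s = u1_gross - u0                                  # fee making u(1) = u(0)
    return 2.0 * alpha * a + s

grid = np.linspace(0.01, 0.99, 99)
profits = [profit_given_a(a) for a in grid]
a_best = grid[int(np.argmax(profits))]
print(round(a_best, 2), round(max(profits), 3))
# the maximum is near a = 0.43 with profits of about 0.61 (cf. Table 9.2)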
Table 9.2 Homogeneous Products Model: Maximum Profits Intermediary
a      s      E[p]   E[min{p1,p2}]  α      μ      πi     u*     Π
0.100  0.140  0.256  0.165          0.900  1.000  0.000  0.670  0.320
0.200  0.162  0.402  0.299          0.800  1.000  0.000  0.478  0.482
0.300  0.151  0.516  0.415          0.700  1.000  0.000  0.339  0.571
0.400  0.127  0.611  0.519          0.600  1.000  0.000  0.233  0.607
0.426  0.119  0.633  0.544          0.574  1.000  0.000  0.210  0.608
0.450  0.111  0.653  0.567          0.550  1.000  0.000  0.191  0.607
0.500  0.097  0.693  0.614          0.500  1.000  0.000  0.153  0.597
0.600  0.066  0.766  0.701          0.400  1.000  0.000  0.094  0.546
0.700  0.040  0.832  0.783          0.300  1.000  0.000  0.050  0.460
Notes: The quality parameter δ is set to 1.
(p. 237) 3.1.1. The Role of Price Discrimination Across Marketplaces
In Baye and Morgan (2001) price comparison sites have incentives to create price dispersion since by doing so they create
value for consumers. If prices on- and off-platform are similar, consumers are not interested in the platform services so the
bulk of the money has to be made on the firms' side. By raising firms' participation fees, a price comparison site achieves two
objectives at once. On the one hand, competition between firms is weakened and this increases the possibility to extract rents
from the firm side; on the other hand, price dispersion goes up and this in turn increases the possibility to extract rents from
the user side of the market. This interesting result is intimately related to the assumption that online retailers cannot price
discriminate across marketplaces. To see this, suppose that a firm could set a price at its own website which is different from
the one advertised on the platform. Since consumers who do not register with the platform are assumed to pick a website at
random, it is then obvious that the website's price would be equal to δ. A firm participating in the price comparison site would
then obtain a profit equal to πA(p) = (1 − μ)δ/2 + μp(1 − αF(p)) − a.
In such a case, imposing the indifference conditions for equilibrium, πNA = πA(δ) = πA(p),
would yield α* = 1 − a/(μδ) and F*(p) = [p − δ(1 − α*)]/(α*p) on the support [δ(1 − α*), δ].
Now we argue that, to maximize profits, the price comparison site wishes to induce full participation of firms and consumers,
thereby maximizing social welfare and extracting all the surplus. The expected utility to a user who registers with the price
comparison site is given by the same expression for u(1) above, i.e., u(1) = (α*)²(δ − E[min{p1, p2}]) + 2α*(1 − α*)(δ − E[p]) − s.
A user who does not register with the price comparison site buys at the price δ, so her utility will be u(0) = 0.
Consider now the stage of the game where the price comparison site's manager chooses firm and consumer participation
fees. Suppose that at the equilibrium fees firms and consumers mix between participating and not participating. Table 9.3
shows that the elasticity of consumer participation is also positive in this case. As a result, the intermediary will continue to
raise the user fees until all consumers participate with probability one. (p. 238)
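A numerical sketch analogous to the one given earlier, but using the price-discrimination expressions above and u(0) = 0, reproduces (approximately) the first row of Table 9.3. As before, this is our own illustration and not part of the formal analysis.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

DELTA = 1.0

def dispersion_pd(a, mu, delta=DELTA):
    # With price discrimination across marketplaces the advertised-price distribution is
    # F*(p) = (p - delta(1 - alpha))/(alpha p) on [delta(1 - alpha), delta].
    alpha = 1.0 - a / (mu * delta)
    p_low = delta * (1.0 - alpha)
    F = lambda p: (p - delta * (1.0 - alpha)) / (alpha * p)
    Ep = p_low + quad(lambda p: 1.0 - F(p), p_low, delta)[0]
    Emin = p_low + quad(lambda p: (1.0 - F(p)) ** 2, p_low, delta)[0]
    return alpha, Ep, Emin

def gap_pd(mu, a, s, delta=DELTA):
    # u(1) - u(0); with price discrimination the outside option gives u(0) = 0.
    alpha, Ep, Emin = dispersion_pd(a, mu, delta)
    return alpha**2 * (delta - Emin) + 2.0 * alpha * (1.0 - alpha) * (delta - Ep) - s

# Reproduce the first row of Table 9.3 (a = 0.5, s = 0.01).
a, s = 0.5, 0.01
mu_star = brentq(lambda m: gap_pd(m, a, s), a / DELTA + 1e-6, 1.0)
alpha_star, Ep, Emin = dispersion_pd(a, mu_star)
print(round(mu_star, 3), round(alpha_star, 3), round(Ep, 3), round(Emin, 3),
      round(2.0 * alpha_star * a + mu_star * s, 3))
# expected output: approximately 0.556 0.100 0.948 0.932 0.106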
Table 9.3 Homogeneous Products with Price Discrimination: Intermediary's Profits Increase in s
a      s      E[p]   E[min{p1,p2}]  α      μ      πi     u*     Π
0.500  0.010  0.948  0.932          0.100  0.556  0.222  0.000  0.106
0.500  0.025  0.916  0.890          0.158  0.594  0.203  0.000  0.173
0.500  0.050  0.879  0.842          0.224  0.644  0.188  0.000  0.256
0.550  0.050  0.879  0.842          0.224  0.708  0.146  0.000  0.281
0.600  0.050  0.879  0.842          0.224  0.773  0.114  0.000  0.307
0.650  0.050  0.879  0.842          0.224  0.837  0.081  0.000  0.333
0.650  0.075  0.849  0.803          0.274  0.895  0.052  0.000  0.423
0.650  0.100  0.822  0.770          0.316  0.951  0.025  0.000  0.506
0.650  0.123  0.800  0.743          0.350  1.000  0.000  0.000  0.508
Notes: The quality parameter δ is set to 1.
Table 9.4 Homogeneous Products Model with Price Discrimination: Intermediary's Maximum Profits
a      s      E[p]   E[min{p1,p2}]  α      μ      πi     u*     Π
0.750  0.063  0.863  0.822          0.250  1.000  0.000  0.000  0.438
0.650  0.123  0.800  0.743          0.350  1.000  0.000  0.000  0.508
0.500  0.250  0.693  0.614          0.500  1.000  0.000  0.000  0.750
0.250  0.563  0.462  0.359          0.750  1.000  0.000  0.000  0.938
0.100  0.810  0.256  0.166          0.900  1.000  0.000  0.000  0.990
0.010  0.980  0.047  0.019          0.990  1.000  0.000  0.000  1.000
0.001  0.998  0.007  0.002          1.000  1.000  0.000  0.000  1.000
0.000  1.000  0.000  0.000          1.000  1.000  0.000  0.000  1.000
Notes: The quality parameter δ is set to 1.
In Table 9.4 we show that the price comparison site's profits increase by lowering the firms' fee and increasing the users'
charge until they reach 0 and 1, respectively, which implies that all firms and consumers participate with probability one. In
that case, product prices are driven down to marginal cost and the bulk of the profits of the intermediary is obtained from the
consumers.
Notice that with price discrimination, the market allocation is efficient. The intuition is the following. By increasing competition
in the platform, prices go down and more surplus is generated for consumers. This surplus, since buyers have in practice no
outside option, can be extracted from consumers via participation (p. 239) fees. Interestingly, the fact that firms can price
discriminate eliminates the surplus consumers obtain by opting out of the price comparison site and this ultimately destroys
the profits of the retailers.
3.1.2. Click-Through Fees
In recent years many comparison sites have replaced fixed fees with a cost-per-click (CPC) tariff structure. In this
subsection we show that the CPC business model does not alter the main insights we have obtained so far.9 Let c denote the click-through fee. Assume also that there still exists a (small) fixed cost a firm has to pay for advertising on the platform, denoted k,
which can be interpreted as a hassle cost of feeding the comparison site's information system. The profits of a firm that
advertises on the platform at price p then become πA(p) = (1 − μ)p/2 + μ(p − c)(1 − αF(p)) − k.
The first part of this profits expression comes from the consumers who do not register with the platform and buy at random.
The second part comes from the consumers who participate in the comparison site. These consumers click on firm i's offer
when firm i is the only advertising firm, which occurs with probability 1 − α, or when firm j also participates at the
clearinghouse but advertises a higher price than firm i, which occurs with probability α(1 − F(p)).
Using the condition πNA = πA(δ) and solving for α gives α* = 1 − k/(μ(δ − c)).
If μ consumers participate in the platform, the total expected number of clicks is then μ(1 − (1 − α*)²). Using the condition πA(δ) = πA(p)
and solving for F gives F*(p) = [(1 − μ)(p − δ)/2 + μ(p − c) − μ(1 − α*)(δ − c)]/(μα*(p − c)).
The decision of consumers does not change: they participate with a probability μ*, which is the solution to u(1) = u(0). The
profits of the intermediary are then Π = μs + cμ(1 − (1 − α*)²).
Table 9.5 shows that the property that the elasticity of the consumer demand for participation is positive still holds. As a result,
the platform will increase s until all consumers register with the price comparison site. (p. 240)
Table 9.5 Homogeneous Products Model with CPC: Platform's Profits Increase in Consumer Fee s
k      c      s      E[p]   E[min{p1,p2}]  α      μ      πi     u*     Π
0.200  0.500  0.010  0.952  0.938          0.255  0.537  0.231  0.012  0.125
0.200  0.500  0.020  0.925  0.903          0.355  0.620  0.190  0.027  0.193
0.200  0.500  0.030  0.899  0.870          0.428  0.700  0.150  0.043  0.257
0.200  0.550  0.030  0.900  0.872          0.436  0.788  0.106  0.044  0.319
0.200  0.600  0.030  0.900  0.873          0.445  0.901  0.050  0.044  0.401
0.200  0.600  0.033  0.892  0.864          0.465  0.935  0.033  0.050  0.431
0.200  0.600  0.036  0.884  0.854          0.484  0.969  0.015  0.056  0.462
0.200  0.600  0.039  0.877  0.846          0.500  1.000  0.000  0.061  0.488
Notes: The quality parameter δ is set to 1.
Table 9.6 Homogeneous Products Model with CPC: Intermediary's Maximum Profits
k      c      s      E[p]   E[min{p1,p2}]  α      μ      πi     u*     Π
0.200  0.100  0.145  0.487  0.393          0.778  1.000  0.000  0.399  0.240
0.200  0.200  0.127  0.570  0.487          0.750  1.000  0.000  0.323  0.315
0.200  0.300  0.108  0.651  0.579          0.714  1.000  0.000  0.249  0.383
0.200  0.400  0.086  0.730  0.670          0.667  1.000  0.000  0.180  0.442
0.200  0.500  0.063  0.805  0.759          0.600  1.000  0.000  0.117  0.483
0.200  0.563  0.048  0.851  0.814          0.542  1.000  0.000  0.081  0.493
0.200  0.600  0.039  0.877  0.845          0.500  1.000  0.000  0.061  0.489
0.200  0.700  0.014  0.943  0.927          0.333  1.000  0.000  0.019  0.403
Notes: The quality parameter δ is set to 1.
Given this observation, to maximize the profits of the comparison site we need to solve max{c,s} Π = s + c(1 − (1 − α*)²), subject to u(1) ≥ u(0) evaluated at μ = 1, with α* = 1 − k/(δ − c).
Table 9.6 shows the solution of this problem for δ = 1 and k = 0.2. The profits of the intermediary are maximized when c =
0.563 so again in SPE user participation is maximized but firm participation is not. The intuition for this result is the same as
above.
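The CPC equilibrium objects can be computed in the same way. The sketch below is our own illustration (with δ = 1 and k = 0.2 as in the tables): it imposes μ = 1, sets s so that u(1) = u(0), and evaluates the platform's profits for several click-through fees; the c = 0.563 row comes out close to the corresponding row of Table 9.6.

import numpy as np
from scipy.integrate import quad

DELTA, K = 1.0, 0.2

def cpc_profit(c, delta=DELTA, k=K):
    # Full consumer participation (mu = 1): alpha* = 1 - k/(delta - c); the consumer fee s is
    # set so that u(1) = u(0); platform profits are s plus c times the expected number of clicks.
    alpha = 1.0 - k / (delta - c)
    p_low = c + (1.0 - alpha) * (delta - c)
    F = lambda p: ((p - c) - (1.0 - alpha) * (delta - c)) / (alpha * (p - c))
    Ep = p_low + quad(lambda p: 1.0 - F(p), p_low, delta)[0]
    Emin = p_low + quad(lambda p: (1.0 - F(p)) ** 2, p_low, delta)[0]
    u1_gross = alpha**2 * (delta - Emin) + 2.0 * alpha * (1.0 - alpha) * (delta - Ep)
    u0 = alpha * (delta - Ep)
    s = u1_gross - u0
    clicks = 1.0 - (1.0 - alpha) ** 2          # a subscriber clicks whenever someone advertises
    return s + c * clicks, s, alpha

for c in (0.3, 0.5, 0.563, 0.6, 0.7):
    profit, s, alpha = cpc_profit(c)
    print(c, round(alpha, 3), round(s, 3), round(profit, 3))
# the c = 0.563 row should come out close to Table 9.6: alpha = 0.542, s = 0.048, profits = 0.493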
(p. 241)
3.2. Differentiated Products
We continue with the case in which firms sell products that are both horizontally and vertically differentiated. Galeotti and
Moraga-González (2009) study a similar model, but without vertical product differentiation, that is, δi = δ for all i, and with
match values uniformly distributed. Here we explore the role of vertical product differentiation and we choose to work with
logit demands for convenience. Later, we shall see the relationship between these two models.
Recall that μ is the proportion of consumers in the market registering with the comparison site and that αi is the probability firm
i advertises her product at the comparison site. We first look at the price set by a firm that does not advertise at the
comparison site. Denoting by pio such a price, it should be chosen to maximize the profits a firm obtains from the consumers
who visit a vendor at random. Since consumer m's outside option generates utility u0 = ε0m, we have (2) [(1 − μ)/2] pio exp(δi − pio)/(1 + exp(δi − pio)).
Let pim denote the monopoly price of firm i (the maximizer of (2)), and πiNA the profits this firm obtains if it does not advertise. Taking the
first-order condition of equation (2) with respect to pio and solving for pio gives pio = 1 + W[exp(δi − 1)],
where W[exp(δi − 1)] is the Lambert W-Function evaluated at exp(δi − 1), i.e., the W that solves exp(δi − 1) = W exp(W).
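As a quick check of this closed form, one can evaluate the Lambert W expression numerically and compare it with a direct maximization of the non-advertising profits. The small sketch below is our own illustration using scipy.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import lambertw

def monopoly_price(delta):
    # Closed form: p = 1 + W(exp(delta - 1)).
    return 1.0 + lambertw(np.exp(delta - 1.0)).real

def monopoly_price_numeric(delta):
    # Direct maximization of p * exp(delta - p)/(1 + exp(delta - p)) as a cross-check.
    obj = lambda p: -p * np.exp(delta - p) / (1.0 + np.exp(delta - p))
    return minimize_scalar(obj, bounds=(0.01, 10.0), method="bounded").x

print(round(monopoly_price(1.0), 3), round(monopoly_price_numeric(1.0), 3))   # about 1.567
print(round(monopoly_price(0.0), 3), round(monopoly_price_numeric(0.0), 3))   # about 1.278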
Suppose firm i decides to advertise a price pi at the comparison site. The expected profits of this firm are
πiA(pi) = pi exp(δi − pi)[μαj/(1 + exp(δi − pi) + exp(δj − pj)) + μ(1 − αj)/(1 + exp(δi − pi)) + ((1 − μ)/2)/(1 + exp(δi − pi))] − a,
where αj is the probability the rival firm advertises at the comparison site. As can be seen from this equation, demand
depends on whether or not the rival firm (p. 242) advertises. With probability α j seller i competes with seller j for the
proportion μ of consumers visiting the comparison site, which means that in order to make a sale the utility offered by firm i
needs to be larger than the utility offered by firm j as well as the outside good. Since we are assuming εim is i.i.d. type I
extreme value distributed this happens with probability exp(δi − pi)/(1+ ∑k=i,j exp(δk − pk)). Similarly, with probability (1−α j)
seller i is the only firm competing on the platform, which means ui only has to be larger than u0 for firm i to gain these
consumers, i.e., exp(δi−pi)/(1+exp(δi−pi)). Finally, a proportion (1−μ)/2 of consumers buy at random, and the probability of
selling to these consumers is exp(δi−pi)/(1+exp(δi−pi)).
Page 13 of 21
Comparison Sites
Figure 9.1 Prices and Profits as Function of Participation Rate of Seller j.
The expression for the profits of firm j is similar. Taking the first-order conditions with respect to pi and pj yields a system of
equations that characterizes the equilibrium prices. Unfortunately it is difficult to derive a closed-form expression for these
prices. Figure 9.1(a), however, shows for selected parameter values that the equilibrium price of firm i decreases in α j.10
Intuitively, as αj increases, the (p. 243) probability a firm meets a competitor at the comparison site goes up and this fosters
competition. As shown in Figure 9.1(b) the profits of firm i also decrease in α j.
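Although no closed form is available, the pricing equilibrium is easy to compute numerically. The sketch below is our own illustration (the parameter values are chosen for convenience and need not coincide with those behind Figure 9.1): it iterates best responses, under the simplifying assumption that an advertising firm charges a single price on and off the platform, and displays the decreasing pattern of firm i's price in αj.

import numpy as np
from scipy.optimize import minimize_scalar

def adv_profit(p_own, p_rival, d_own, d_rival, alpha_rival, mu):
    # Expected demand of an advertising firm: registered consumers, who face the rival with
    # probability alpha_rival, plus the (1 - mu)/2 consumers who show up at random.
    e_own = np.exp(d_own - p_own)
    e_rival = np.exp(d_rival - p_rival)
    both = e_own / (1.0 + e_own + e_rival)
    alone = e_own / (1.0 + e_own)
    demand = mu * (alpha_rival * both + (1.0 - alpha_rival) * alone) + 0.5 * (1.0 - mu) * alone
    return p_own * demand

def best_response(p_rival, d_own, d_rival, alpha_rival, mu):
    obj = lambda p: -adv_profit(p, p_rival, d_own, d_rival, alpha_rival, mu)
    return minimize_scalar(obj, bounds=(0.1, 8.0), method="bounded").x

def advertised_prices(d_i, d_j, alpha_j, mu, iters=60):
    # Iterate best responses: firm i meets firm j with probability alpha_j, while firm j,
    # conditional on advertising, meets firm i with probability one (alpha_i = 1).
    p_i, p_j = 1.5, 1.5
    for _ in range(iters):
        p_i = best_response(p_j, d_i, d_j, alpha_j, mu)
        p_j = best_response(p_i, d_j, d_i, 1.0, mu)
    return p_i, p_j

for alpha_j in (0.0, 0.25, 0.5, 0.75, 1.0):
    p_i, _ = advertised_prices(1.0, 1.0, alpha_j, mu=1.0)
    print(alpha_j, round(p_i, 3))
# p_i falls from about 1.567 (alpha_j = 0) to about 1.401 (alpha_j = 1), the pattern in Figure 9.1(a).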
We now study the behavior of consumers in the market. If a user does not register with the comparison site, she just visits a
firm at random. Therefore, the expected utility from this strategy is (3) u(0) = γ + (1/2)∑i [αi ln(1 + exp(δi − pi)) + (1 − αi) ln(1 + exp(δi − pio))],
where γ is Euler's constant. The utility a consumer obtains when remaining unregistered with the comparison site should
increase in αi and αj because of the increased competition in the comparison site. If a user registers with the intermediary, her
expected utility is (4) u(1) = γ + αiαj ln(1 + exp(δi − pi) + exp(δj − pj)) + αi(1 − αj) ln(1 + exp(δi − pi)) + (1 − αi)αj ln(1 + exp(δj − pj)) − s.
Armed with these equations, we can study the continuation game equilibria. Basically, there may be two kinds of equilibrium.
An equilibrium with full consumer participation (μ* = 1) (and either full or partial firm participation), or an equilibrium with partial
consumer participation (μ* < 1) (and either full or partial firm participation). In both types of equilibria, if the firms mix between
advertising and not advertising, the advertising probabilities must solve the indifference conditions πiA = πiNA, i = 1, 2.
In the second type of equilibrium, if the users mix between participating and not participating, the participation probability μ*
must be the solution to u(1) = u(0).
In the first stage of the game the owner of the comparison site chooses the pair of fees {a, s} such that her profits are
maximized. The profit of the intermediary is given by (5) Π = (αi + αj)a + μs.
(p. 244) 3.2.1. Horizontal Product Differentiation
It is convenient to first assume products are not vertically differentiated, i.e., δi = δj = δ. This means firms are symmetric and
therefore we only need to consider two participation probabilities, α and μ. This case has been studied by Galeotti and
Moraga-González (2009) for the N-firm case and a uniform distribution of match parameters. Galeotti and Moraga-González
(2009) show that in SPE the comparison site chooses firm and user fees so that firms and consumers join with probability one.
That this also holds for a double exponentially distributed match-parameter can be seen in Table 9.7. The table shows that
when either α or μ are less than one the profits of the intermediary can go up by increasing the fees to firms, consumers, or
both. As a result, the intermediary will set both a and s such that all agents participate, that is, α = μ = 1.
The intuition is similar to that in the model with homogeneous products. Suppose users mix between registering with the
comparison site and not doing so. An increase in their registration fees makes them less prone to participate. To restore equilibrium
firms should be more active in the comparison site. A higher participation rate of the firms can then be consistent with the
expectation that consumers also participate more often. These cross-group externalities imply the increasing shape of the
profits function of the comparison site in s. Therefore, the intermediary will continue to increase s till either α = 1 or μ = 1.
When α = 1, the table shows how an increase in the firms' fee increases consumer participation, lowers the price as well as
firm participation. Profits of the comparison site increase anyway because consumer demand for participation is more elastic
than firm demand for participation. Since an increase in a decreases firm participation, this relaxes the μ = 1 constraint and
then the intermediary can increase again the consumer fee. This process continues until the intermediary extracts all the
rents in the market up to the value of the outside option of the agents. We note that this insight is (p. 245) not altered if the
retailers are allowed to price discriminate across marketplaces (see Galeotti and Moraga-González, 2009).
Table 9.7 Model with Horizontal Product Differentiation
a      s      p*     α      μ      πi     u*     Π
0.050  0.100  1.552  0.681  0.108  0.253  1.030  0.079
0.050  0.150  1.550  0.752  0.110  0.252  1.031  0.092
0.050  0.200  1.547  0.825  0.113  0.251  1.032  0.105
0.050  0.313  1.541  1.000  0.121  0.249  1.036  0.138
0.100  0.313  1.518  0.993  0.242  0.215  1.044  0.274
0.200  0.313  1.477  0.982  0.487  0.146  1.059  0.545
0.400  0.313  1.412  0.963  0.980  0.006  1.083  1.078
0.400  0.300  1.405  0.989  0.992  0.002  1.088  1.119
0.401  0.337  1.401  1.000  1.000  0.000  1.090  1.139
Notes: The quality parameter δ is set to 1.
One important assumption behind the efficiency result is that agents are all ex-ante symmetric. If for example firms are ex-ante heterogeneous, a monopolist intermediary may induce a suboptimal entry of agents into the platform (see Nocke, Peitz
and Stahl, 2007).
Click-through fees
If we allow for a click-through fee c in addition to an implicit and exogenous fixed cost k of advertising at the comparison site,
the expected profits of a firm i are
πiA(pi) = μ(pi − c) exp(δi − pi)[αj/(1 + exp(δi − pi) + exp(δj − pj)) + (1 − αj)/(1 + exp(δi − pi))] + [(1 − μ)/2] pi exp(δi − pi)/(1 + exp(δi − pi)) − k.
The first part of this profits expression comes from the consumers who participate in the comparison site. These consumers
click on firm i's product to get directed to firm i's website when firm i is the only advertising firm and consumers prefer product
i over the outside option, or when firm j also advertises its product but consumers prefer product i over product j and the
outside option. The consumers who do not register with the platform buy at random and therefore do not generate any costs
of clicks.
The formula for the profits of the rival firm is similar. The utility consumers obtain when they opt out of the platform is the same
as equation (3), while the utility they get when they register with the intermediary is the same as in equation (4). As we have
done above, if firms mix between advertising at the platform and not advertising, it must be that πiA = πiNA and πjA = πjNA. If consumers mix between registering with the intermediary and opting out, it must be that u*(1) = u*(0).
The platform's profits stem from consumer subscriptions and click-through traffic.
The following table shows that the behavior of the profits function of the intermediary with click-through fees is qualitatively
similar to the one with fixed fees. Suppose firms and consumers opt out with strictly positive probability. Then, for a (p. 246)
given click-through fee, the intermediary can increase the consumer subscription fee and obtain a greater profit. Likewise, for
a given subscription fee, the intermediary can increase the click-through fee and increase its profits. As a result,
firms and consumers must all participate surely.
Table 9.8 Model with Horizontally Differentiated Products and CPC
k      c      s      p*     α      μ      π*     u*     Π
0.200  0.500  0.050  1.763  0.655  0.595  0.115  0.983  0.292
0.200  0.500  0.100  1.759  0.727  0.608  0.111  0.979  0.342
0.200  0.500  0.150  1.755  0.801  0.622  0.107  0.975  0.392
0.200  0.600  0.150  1.824  0.817  0.668  0.094  0.957  0.488
0.200  0.800  0.150  1.985  0.855  0.771  0.065  0.914  0.720
0.200  1.000  0.175  2.179  0.940  0.891  0.031  0.856  1.044
0.200  1.186  0.182  2.386  1.000  1.000  0.000  0.800  1.369
Notes: The quality parameter δ is set to 1.
3.2.2. Vertical Product Differentiation
We now consider the case in which firms sell products that are also vertically differentiated. Without loss of generality we
normalize firm j's quality level to zero and assume firm i offers higher quality than firm j, that is, δi > δj = 0.
We first hypothesize that at the equilibrium fees a and s, consumers participate with probability less than one, so μ < 1.
Moreover, for large enough differences in quality, the high-quality firm i participates with probability one and firm j participates
with probability less than one, that is, αi = 1 and αj ∈ (0, 1). If this is so, it must be the case that πjA = πjNA and u(1) = u(0).
Since we are assuming the intermediary has to charge the same fee a to both firms, such an equilibrium could arise if the
intermediary sets her fees such that the low-quality firm is indifferent between participating or not participating—for large
enough differences in quality this implies that the high-quality firm will always obtain higher profits by advertising on the
comparison site than by selling to the non-participating consumers. From these equations we can obtain the equilibrium
participation probabilities (α j,μ).
Table 9.9 shows the behavior of firm and user participation rates and the profits of the intermediary when the quality
difference Δ = δi− δj = 1. Starting from a (p. 247) relatively low level of a and s, an increase in s increases both consumer
and firm participation. As a result, the intermediary will continue to increase s till α j is one. From that point onward, an increase
in the firms’ fee increases the user participation rate and lowers the participation of firm j so the intermediary again finds it
profitable to raise the consumer fees. This process continues until firm j and the consumers all participate with probability one.
The intermediary maximizes its profit at a = 0.188 and s = 0.281.
Table 9.9 Model with Vertical Product Differentiation (Δ = 1)
a      s       pi     pj     αi     αj     μ      πi     πj     u      Π      W
0.100  0.150   1.556  1.242  1.000  0.281  0.520  0.454  0.134  0.928  0.206  1.722
0.100  0.200   1.544  1.241  1.000  0.573  0.522  0.441  0.133  0.931  0.262  1.767
0.100  0.250   1.531  1.241  1.000  0.868  0.524  0.427  0.132  0.934  0.318  1.813
0.100  0.272   1.525  1.241  1.000  1.000  0.525  0.421  0.132  0.937  0.343  1.833
0.150  0.272   1.501  1.214  1.000  0.974  0.792  0.348  0.058  0.944  0.512  1.862
0.188  0.272   1.481  1.188  1.000  0.952  0.988  0.293  0.000  0.950  0.639  1.882
0.188  0.281   1.476  1.188  1.000  1.000  1.000  0.288  0.000  0.952  0.657  1.897
0.567  −0.122  1.567  1.193  1.000  0.000  1.000  0.000  0.000  1.148  0.445  1.594
Notes: The quality parameter δi is set to 1, while δj = 0.
So far we have looked at an equilibrium in which the fees are set such that both firms will participate. However, the
intermediary could also set its fees such that only the high-quality firm participates, which means it must be the case that πjA ≤ πjNA while πiA ≥ πiNA.
The last line of Table 9.9 shows that if the intermediary sets her fees such that only the high-quality firm participates, profits
are lower than if it lets the two firms participate. Nevertheless, for relatively large differences in quality it might be the case
that the intermediary prefers only the high-quality firm to advertise on its website. For instance, when Δ = 3 the comparison
site maximizes profits by setting a and s in such a way that firm j does not find it profitable to advertise on the platform, while
firm i participates with probability one. This can be seen in Table 9.10 where we describe the behavior of firm and consumer
decisions and the implications for the comparison site's profits for Δ = 3. As shown in the table, since the high-quality firm is
the only firm active at the comparison site, for all a and s the price found on the intermediary will be the same. Starting from
relatively low (p. 248) fee levels, a higher s leaves the participation rate of the consumers unchanged; αi adjusts in such a way as to keep consumers indifferent between participating and not participating. Increasing a leads to higher participation
of the consumers, and as such to higher profits for the intermediary. As shown in the table the intermediary maximizes her
profits when a = 1.557 and s = 0.347 (in bold). Comparing the last two lines of this table shows that in this case setting a
relatively low a such that both firms will participate will generate lower overall profits for the platform in comparison to setting a
relatively high a such that only the high-quality firm participates. However, as shown by the last column of Table 9.10 welfare
W is higher when everyone participates.
Table 9.10 Model with Vertical Product Differentiation (Δ = 3)
a      s      pi     pj     αi     αj     μ      πi     πj     u      Π      W
1.000  0.100  2.557  1.234  0.737  0.000  0.642  0.557  0.100  1.169  0.802  2.628
1.000  0.200  2.557  1.225  0.844  0.000  0.642  0.557  0.100  1.169  0.972  2.799
1.000  0.300  2.557  1.215  0.950  0.000  0.642  0.557  0.100  1.169  1.143  2.969
1.000  0.347  2.557  1.210  1.000  0.000  0.642  0.557  0.100  1.169  1.223  3.050
1.300  0.347  2.557  1.172  1.000  0.000  0.835  0.257  0.046  1.169  1.590  3.063
1.500  0.347  2.557  1.138  1.000  0.000  0.963  0.057  0.010  1.169  1.835  3.072
1.557  0.347  2.557  1.127  1.000  0.000  1.000  0.000  0.000  1.169  1.905  3.074
0.115  0.490  2.388  1.115  1.000  1.000  1.000  1.273  0.000  1.241  0.720  3.235
Notes: The quality parameter δi is set to 3, while δj = 0.
4. Concluding Remarks and Open Research Lines
The emergence of Internet shopbots and their implications for price competition, product differentiation, and market efficiency
have been the focus of this chapter. We have asked a number of questions. How can a comparison site create value for
consumers? Do firms have incentives to be listed in comparison sites? Under which conditions are prices listed in these sites
dispersed? Do comparison sites enhance social welfare? Can comparison sites generate efficient allocations?
To answer these questions we have developed a simple model of a comparison site. The intermediary platform tries to attract
(possibly vertically and horizontally differentiated) online retailers and consumers. The analysis of the model has revealed
that product differentiation and price discrimination play a critical (p. 249) role. For the case of homogeneous product sellers
(Baye and Morgan, 2001), if the online retailers cannot charge different on- and off-the-comparison-site prices, then the
comparison site has incentives to charge fees so high that some firms are excluded. The fact that some firms are excluded
generates price dispersion, which creates value for consumers. This value, in turn, can be extracted by the comparison site
via consumer participation fees. The market allocation is not efficient since products are sold at prices that exceed marginal
cost. By contrast, when on- and off-the-comparison-site prices can be different, the comparison site has an incentive to lure
all the players to the site, which generates an allocation that is efficient.
When online retailers sell products that are horizontally differentiated, the comparison site creates value for consumers by
aggregating product information. In this way, the comparison site easily becomes a more attractive marketplace than the
decentralized one (Galeotti and Moraga-González, 2009). In equilibrium, even if firms cannot price discriminate the
comparison site attracts all the players to the platform and an efficient outcome ensues.
Allowing for vertical product differentiation brings interesting additional insights. The platform faces the following trade-off. On
the one hand, it can attract high-quality and low-quality producers to the platform so as to increase competition, aggregate
information and generate rents for consumers that are ultimately extracted via registration fees. Alternatively, the comparison
site can set advertising fees that are so high that low-quality sellers are excluded, thereby creating value for the top sellers.
When quality differences are large, the latter strategy pays off. The comparison site excludes low quality from the platform,
and grants higher rents for advertising sellers. Part of these rents are ultimately extracted by the comparison site. Some value
for consumers is destroyed and the resulting allocation is inefficient.
Along the way, we have kept things simple and therefore left aside important issues. One important assumption has been that
firms and consumers could only use a single platform as an alternative to the search market. In practice, multiple comparison
sites exist and questions about how they compete with one another and their sustainability in the long-run arise. Moreover, if
retailers are ex-ante differentiated, one aspect worth investigating is whether they distribute themselves across platforms
randomly or whether they are sorted in a top-down way across them. One practice we observe nowadays is that retailers are
given the possibility to obtain priority positioning in comparison sites’ lists after paying an extra fee. Adding this choice
variable to the problem of the platform's manager would probably complicate matters considerably. One way to address this issue is
to use the framework put forward in the literature on position auctions. Priority positions could be auctioned, as is the case in
search engine advertising. For instance, Xu, Chen and Whinston (2011) study a model where firms sell homogeneous
products and bid for prominent list positions. They relate the extent of market competitiveness to willingness to bid for
prominent positions. Another realistic feature we have ignored throughout is incomplete information. When quality is private
information of the retailers, the (p. 250) problem of the platform is how to screen good from bad retailers, thereby creating value
for consumers. For a mechanism design approach to this problem in the context of search engines, see Gomes (2011). Finally,
another simplifying assumption has been that search costs are negligible within a platform. When searching the products
displayed in the price comparison site is still costly for consumers, platforms may have incentives to manipulate the way the
alternatives are presented, thereby inducing more search and higher click turnover (see Hagiu and Jullien, 2011).
On the empirical side, much research remains to be done. In fact, not much is yet known about the extent to which theoretical
predictions are in line with the data, especially in settings where product differentiation is important. Comparison sites can
potentially provide a wealth of data. Detailed data from comparison sites may be helpful to learn how exactly consumers
search for products. Some consumers might sort by price, while others may sort by quality ratings or availability. This type of
information may give firms indications about the best way to present their product lists. Click-through data can facilitate the
estimation of structural models of demand. Finally, using data from bids for prominent positions may be a useful way to
estimate the characteristics of the supply side of the market.
Acknowledgments
We thank Martin Peitz, Joel Waldfogel, and Roger Cheng for their useful comments. Financial support from Marie Curie
Excellence Grant MEXT-CT-2006–042471 is gratefully acknowledged.
References
An, Y., M.R. Baye, Y. Hu, J. Morgan, and M. Shum (2010), “Horizontal Mergers of Online Firms: Structural Estimation and
Competitive Effects,” mimeo.
Anderson, S.P. (2012), “Advertising and the Internet,” In: The Oxford Handbook of the Digital Economy, M. Peitz and J.
Waldfogel (eds.). New York/Oxford: Oxford University Press.
Anderson, S.P. and S. Coate (2005), “Market Provision of Broadcasting: A Welfare Analysis,” Review of Economic Studies, 72,
947–972.
Armstrong, M. (2006), “Competition in two-sided markets,” Rand Journal of Economics, 37, 668–691.
Armstrong, M. and J. Wright (2007), “Two-Sided Markets, Competitive Bottlenecks and Exclusive Contracts,” Economic Theory
32, 353–380.
Athey, S. and G. Ellison (2011), “Position Auctions with Consumer Search,” Quarterly Journal of Economics 126, 1213–1270.
Baye, M.R. and J. Morgan (2001), “Information Gatekeepers on the Internet and the Competitiveness of Homogeneous Product
Markets,” American Economic Review 91, 454–474.
Baye, M.R., J. Morgan, and P. Scholten (2004), “Price Dispersion in the Small and in the Large: Evidence from an Internet Price
Comparison Site,” The Journal of Industrial Economics 52, 463–496.
Baye, M.R., J. Morgan, and P. Scholten (2006), “Information, Search, and Price Dispersion,” Chapter 6 in Handbook in
Economics and Information Systems, Volume 1 (T. Hendershott, Ed.), Amsterdam: Elsevier. (p. 252)
Baye, M.R., R. Gatti, P. Kattuman, and J. Morgan (2006), “Did the Euro Foster Online Price Competition? Evidence from an
International Price Comparison Site,” Economic Inquiry 44, 265–279.
Baye, M.R. and J. Morgan (2009), “Brand and Price Advertising in Online Markets,” Management Science 55, 1139–1151.
Belleflamme, P. and E. Toulemonde (2009), “Negative Intra-Group Externalities in Two-Sided Markets,” International Economic
Review, 50–1, 245–272.
Brown, J. and A. Goolsbee (2002), “Does the Internet Make Markets More Competitive? Evidence from the Life Insurance
Industry,” Journal of Political Economy 110, 481–507.
Brynjolfsson, E. and M.D. Smith (2000), “Frictionless Commerce? A Comparison of Internet and Conventional Retailers,”
Management Science 46, 563–585.
Caillaud, B. and B. Jullien (2003), “Chicken and Egg: competition among intermediation service providers,” Rand Journal of
Economics 34, 309–328.
Chen, Y. and C. He (2011), “Paid Placement: Advertising and Search on the Internet,” Economic Journal 121, F309–F328.
Demsetz, H. (1968), “The Cost of Transacting,” Quarterly Journal of Economics, 33–53.
Van Eijkel (2011), “Oligopolistic Competition, OTC Markets and Centralized Exchanges,” unpublished manuscript.
Ellison, G. and S. Fisher Ellison (2009), “Search, Obfuscation, and Price Elasticities on the Internet,” Econometrica 77, 427–
452.
Evans, D.S. (2003), “The Antitrust Economics of Two-Sided Markets,” Yale Journal on Regulation 20, 325–382.
Galeotti, A. and J.L. Moraga-González (2009), “Platform Intermediation in a Market for Differentiated Products,” European
Economic Review, 54, 417–28.
Gehrig, T. (1993), “Intermediation in Search Markets,” Journal of Economics and Management Strategy 2, 97–120.
Gomes, R. (2011), “Optimal Auction Design in Two-Sided Markets,” unpublished manuscript.
Hagiu, A. and B. Jullien (2011), “Why Do Intermediaries Divert Search?” Rand Journal of Economics 42, 337–362.
Koulayev, S. (2010), “Estimating Demand in Online Search Markets, with Application to Hotel Bookings,” unpublished
manuscript.
Nocke, V., M. Peitz and K. Stahl (2007), “Platform Ownership,” Journal of the European Economic Association 5, 1130–1160.
Rochet, J-C. and J. Tirole (2003), “Platform Competition in Two-Sided Markets,” Journal of the European Economic Association
1, 990–1029.
Rochet, J.-C., and J. Tirole (2006), “Two-Sided Markets: A Progress Report,” Rand Journal of Economics 37, 645–67.
Rubinstein, A. and A. Wolinsky (1987), “Middlemen,” The Quarterly Journal of Economics 102, 581–94.
Smith, M.D. and E. Brynjolfsson (2001), “Customer Decision-Making at an Internet Shopbot: Brand Matters,” Journal of
Industrial Economics 49, 541–558.
Spiegler, R. (2011), Bounded Rationality and Industrial Organization, Oxford University Press.
Spiegler, R. and K. Eliaz (2011), “A Simple Model of Search Engine Pricing,” Economic Journal 121, F329–F339.
Spulber, D. (1999), Market Microstructure: Intermediaries and the Theory of the Firm, Cambridge University Press. (p. 253)
Watanabe, M. (2010), “A Model of Merchants,” Journal of Economic Theory 145, 1865–1889.
Weyl, E.G. (2010), “A Price Theory of Multi-Sided Platforms,” American Economic Review 100, 1642–1672.
Yavas, A. (1994), “Middlemen in Bilateral Search Markets,” Journal of Labor Economics 12, 406–429.
Yavas, A. (1996), “Search and Trading in Intermediated Markets,” Journal of Economics and Management Strategy 5, 195–216.
Notes:
(1.) Weyl (2010) presents a general model. The literature on two-sided markets has typically focused on situations where
network effects are mainly across sides. As we will see later, for the profitability of comparison sites, managing the externalities within the retailer side is of paramount importance.
(2.) Alternatively, and without affecting the results qualitatively, these consumers can be seen as buying from a “local” firm,
or from a firm to which they are loyal.
(3.) Later in the chapter, we show that our main results hold when per-click fees are used. As mentioned in the Introduction,
some comparison sites may obtain the bulk of their revenue from the selling of advertising space instead. This alternative
business model is considered elsewhere in this volume.
(4.) In this chapter we will allow for price discrimination across, but not within, marketplaces. That is, the firms may be able to
post on the price comparison site a price different from the one they charge to local/loyal consumers. Things are different
when retailers can price discriminate across loyal consumers. This is often the case in over-the-counter (OTC) markets,
where the bargaining institution is predominant. For a model that studies the role of price discrimination over-the-counter in a
two-sided market setting see Van Eijkel (2011).
(5.) Matching values are realized only after consumers visit the platform or the individual shops. This implies that ex-ante all
consumers are identical. This modelling seems to be appropriate when firms sell newly introduced products.
(6.) It is well known that in models with network externalities multiple equilibria can exist. In our model there is always an
equilibrium where no agent uses the comparison site. This equilibrium is uninteresting and will therefore be ignored.
(7.) To be sure, the model of Baye and Morgan (2001) is more general because they study the N-retailers case and because
consumers have elastic demands. In addition, for later use, we have assumed that consumers who register with the platform
cannot visit the local shop any longer. The results do not depend on this assumption, which we maintain here for overall consistency of the chapter.
(8.) For the moment, we are ignoring the possibility that firms advertise on the platform a price different from the one they charge
to the non-participating consumers.
(9.) One reason why CPC tariffs may recently have become more widely used is that they involve less risk for the platform. An,
Baye, Hu, Morgan, and Shum (2010) also allow for a click-through fee in a Baye and Morgan (2001) type model, but do not
model the optimal fee structure of the comparison site.
(10.) The parameter values used in Figure 9.1 are μi = μj = 1, αi = 1, δi = 1, and δj = 0.
José-Luis Moraga-González
José-Luis Moraga-González is Professor of Microeconomics at VU University Amsterdam and Professor of Industrial Organization at
the University of Groningen.
Matthijs R. Wildenbeest
Matthijs R. Wildenbeest is Assistant Professor of Business Economics and Public Policy at the Kelley School of Business, Indiana
University.
Price Discrimination in the Digital Economy
Oxford Handbooks Online
Price Discrimination in the Digital Economy
Drew Fudenberg and J. Miguel Villas-Boas
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Online Publication Date: Nov 2012
Subject: Economics and Finance, Economic Development
DOI: 10.1093/oxfordhb/9780195397840.013.0010
Abstract and Keywords
This article addresses the effects of price discrimination based on more detailed customer information, both
under monopoly and under competition. Because firms use information gained from consumers through their
purchase decisions, consumers may understand that the decisions they take can affect the options available to
them in future periods. Competing firms can potentially gain more from learning consumers' valuations than from
not learning them, which could be a force for greater competition for consumers. In some markets, firms learn
consumer characteristics that directly influence the cost of servicing them. Under competition, if competitors are
aware that firms have purchase-history information, more information may actually result in more intense
competition after the information is gained.
Keywords: price discrimination, customer information, monopoly, competition, competing firms, consumers
1. Introduction
With developments in information technology, firms have more detailed digital information about their
prospective and previous customers, which provides new mechanisms for price discrimination. In particular, when
firms have information about consumers’ previous buying or search behavior, they may be able to use this
information to charge different prices to consumers with different purchase histories. This form of price
discrimination is present in several markets, and it may become increasingly important with the greater digitization of
market transactions. The development of information technologies such as web-browser cookies allows firms to
collect, keep, and process more information about consumers, which can affect the prices and products offered by
firms to different consumers.
This article discusses the effects of price discrimination based on information gained by sellers from consumer
purchase histories under monopoly and competition. When price discrimination is based solely on purchase
histories, it is called behavior-based price discrimination. This article also discusses the effect of finer (p. 255)
information at the customer level on price discrimination practices. Having more information about the consumers’
product valuation (through purchase histories or other methods) helps the firm extract more surplus from the
consumers. However, consumers may anticipate this possibility, and therefore alter their initial purchases. Firms
may also be proactive in sorting consumers that purchase in the early periods in order to gain more information on
their preferences. With competition, more information about consumers may lead to pricing as if there were less
product differentiation as firms target each separate price to a less heterogeneous consumer pool. This effect can
lead to lower equilibrium prices. Nevertheless, competing firms can potentially benefit from customer recognition in
spite of lower equilibrium prices if the information leads to changes in the products sold—either higher sales of
same-cost products or a switch of sales to lower cost products—or to sales to consumers who are less costly to
serve.
This article follows closely the results by Hart and Tirole (1988), Schmidt (1993), and Villas-Boas (2004) on
monopoly, and the results by Villas-Boas (1999) and Fudenberg and Tirole (2000) for competition, while discussing
the effects of switching costs considered in Chen (1997) and Taylor (2003). For a more extended survey of the
behavior-based price discrimination literature see Fudenberg and Villas-Boas (2006).1
The article is organized in the following way. The next section considers the potential short-term market effects of
firms having more information about their consumers. We consider both the case of monopoly and of competition,
and for each case we consider the possibility of no information being tracked, the possibility of information being
solely based on purchase histories (with either fixed consumer preferences, changing consumer preferences, or
entry of new generations of consumers), and the possibility of firms learning the preferences of their customers
through other channels in addition to purchase choices. Section 3 looks at the strategic behavior of consumers
when they foresee that their purchase decisions affect the market opportunities available to them in the future,
because of the information gained by firms. Section 4 discusses the strategic actions of firms when gaining
information about consumers through their purchases. Section 5 discusses several extensions, and Section 6
concludes.
2. Information About Consumers
Consider a firm selling a product to a set of heterogeneous consumers of mass one. Suppose consumers have a
valuation for the product v with cumulative distribution function F(v), density f(v), with the property that v[1 − F(v)]
is quasi-concave in v. Suppose marginal costs are zero, and consider the firm being able to know something about
the valuation of some consumers, and being able to charge consumers differently depending on its information.
Suppose also that there are no (p. 256) strategic issues about the firm learning more from consumers in this
purchase occasion, and that consumers are not concerned about any effects on their future payoffs of their
decisions to buy or not to buy at this purchase occasion. This can be seen as the ex-post situation after the firm
learned something about the consumers in the previous periods. This can also be seen as the situation in the
second period of a two-period model, where the firm learned something from the consumers that bought in the first
period.
There are several modeling possibilities of what the firm knows about the valuation v of each consumer. One
possibility is that the firm does not know anything about the valuation of each consumer and just knows that the
cumulative distribution of v is F(v). This could be the case when the firm is not able to keep track of which
consumers bought the product in the previous period or there are no information technologies to keep track of the
consumer purchase histories. This could also be the case if the valuations of each consumer were independent
over time. In this “no information” case, as the firm does not know anything about v for each consumer, the firm just
charges a price P which is the same for all consumers. Since consumers are not concerned about the firm learning
their valuation while purchasing the product, they buy the product as long as v ≥ P. The profit-maximizing price P
is then the optimal static monopoly price P* = argmaxp P[1 − F(p)].
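As a minimal numerical illustration (assuming, only for this sketch, that valuations are uniformly distributed on [0, 1]), the following Python fragment computes the optimal static monopoly price by grid search; the analytical answer in that case is P* = 1/2.

```python
# Illustrative sketch (not from the chapter): static monopoly pricing with no
# customer information, assuming valuations uniform on [0, 1].
def demand(p):
    # 1 - F(p) for the uniform distribution on [0, 1]
    return max(0.0, 1.0 - p)

def optimal_static_price(n=10000):
    grid = [i / n for i in range(n + 1)]
    return max(grid, key=lambda p: p * demand(p))

p_star = optimal_static_price()
print(f"P* = {p_star:.3f}, profit = {p_star * demand(p_star):.3f}")
# expected: P* = 0.500, profit = 0.250
```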
Another possibility is that consumers have the same valuation from period to period, and entered the market in the
previous period, and the firm is able to identify which consumers bought the product in the previous period (with
consumers being around for two consecutive periods).2 Suppose that there was a demand x in the previous period
and that the consumers who bought the product in the previous period are the ones with the highest valuation.
Then, we know that all consumers who bought in the previous period have valuation greater than or equal to some
valuation v* which is determined by x = 1 − F(v*).
The firm can then charge two prices, one price for the consumers who bought in the previous period, which the
firm knows to have valuations above v*, and another price for the consumers that did not buy in the previous
period, which the firm knows to have valuation less than v*. Let us denote the price to the consumers who bought
in the previous period as Po, and the price to the new consumers as Pn. For the price to the previous customers, if
v* ≥ P* then the firm just charges Po = v*, as the optimal price for those consumers without any constraints on their
valuations would have been P*. For the same reason, if v* < P* then the firm chooses to charge Po = P*. One can
then write Po = max[v*, P*], which considers both cases.
For the price to the new consumers the firm chooses to charge Pn = argmaxp P[F(v*) − F(p)], which accounts for
the fact that the firm is only targeting the consumers with valuation below v*. Considering both the price to the
previous customers and the price to the new consumers, the firm is better off in this period with the ability to
identify the consumers that bought in the previous period, as it is able to better extract the consumer surplus
through price discrimination. This is a continuous setup of the two-type model considered in Hart and Tirole (1988)
and Villas-Boas (2004).
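As a minimal sketch of the two second-period prices just described (assuming valuations uniform on [0, 1] and a first-period cutoff v* = 0.6, values chosen only for illustration), previous buyers pay Po = max[v*, P*] and the remaining consumers pay Pn = argmaxp p[F(v*) − F(p)]:

```python
# Illustrative sketch (not from the chapter): second-period prices when the firm
# can tell previous buyers (valuation >= v_star) from non-buyers, assuming
# valuations uniform on [0, 1].
def F(v):  # uniform CDF on [0, 1]
    return min(max(v, 0.0), 1.0)

def second_period_prices(v_star, p_static=0.5, n=10000):
    # Price to previous buyers: the larger of the cutoff and the static monopoly price.
    p_old = max(v_star, p_static)
    # Price to non-buyers: maximize p * [F(v_star) - F(p)] over p in [0, v_star].
    grid = [v_star * i / n for i in range(n + 1)]
    p_new = max(grid, key=lambda p: p * (F(v_star) - F(p)))
    return p_old, p_new

po, pn = second_period_prices(0.6)
print(f"Po = {po:.2f}, Pn = {pn:.2f}")  # expected: Po = 0.60, Pn = 0.30
```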
(p. 257) One variation of this possibility is that consumers’ valuations change through time. When this happens
the firm may potentially want to charge a lower price to the consumers who purchased in the previous period, and
a higher price to the new customers. One example of changing preferences is one where with some probability ρ
consumer change preferences from one period to the next, and in the next period the consumer's valuation is an
independent draw from the distribution F(v). Consider the effect of this on the profit-maximizing price for the
previous customers. If v* ≥ P*, the firm's profit from the previous customers for a price Po ≤ v* is Po[(1 − ρ)1 −
F(v*)] + ρ[1 − F(v*)]1 − F(Po)]}. One can obtain that for sufficiently small ρ we have the optimal Po = v*. After a
certain threshold value of ρ, the optimal price for the previous customers, Po, starts decreasing from v* with
increases in ρ, and keeps on decreasing until it reaches P* when ρ = 1. If v* 〈 P*, the optimal price for the previous
customers is always P*, independent of ρ. Regarding the price for the new customers, the firm's profit from the new
customers is Pn{(1 − ρ)[F(v*) − F(Pn)] + ρF(v*)[1 − F(Pn)]}. One can then obtain that the optimal Pn is increasing in
ρ, reaching P* when ρ = 1. Note that a firm is still strictly better off with this possibility of identifying its past
consumers as long as ρ < 1.
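The following sketch (assuming a uniform distribution on [0, 1] and a cutoff v* = 0.6, chosen only for illustration) traces how the two prices move with the probability ρ, using the profit expressions above and restricting Po to the range Po ≤ v* in which the first expression applies:

```python
# Illustrative sketch (not from the chapter): prices to previous and new customers
# when each consumer redraws her valuation with probability rho, assuming
# valuations uniform on [0, 1] and a first-period cutoff v_star >= 1/2.
def F(v):
    return min(max(v, 0.0), 1.0)

def prices_with_changing_preferences(rho, v_star=0.6, n=2000):
    def profit_old(po):  # profit from previous customers at a price po <= v_star
        return po * ((1 - rho) * (1 - F(v_star)) + rho * (1 - F(v_star)) * (1 - F(po)))
    def profit_new(pn):  # profit from consumers who did not buy before
        return pn * ((1 - rho) * (F(v_star) - F(pn)) + rho * F(v_star) * (1 - F(pn)))
    po = max([v_star * i / n for i in range(n + 1)], key=profit_old)
    pn = max([i / n for i in range(n + 1)], key=profit_new)
    return po, pn

for rho in (0.0, 0.5, 0.9, 1.0):
    po, pn = prices_with_changing_preferences(rho)
    print(f"rho = {rho:.1f}: Po = {po:.3f}, Pn = {pn:.3f}")
# Po stays at v* for small rho and then falls toward P* = 0.5; Pn rises toward P*.
```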
Another interesting variation is the case when a new generation of consumers comes into the market, so that the
firm knows that some of their new consumers may have a high valuation for the product, and may be in the market
for future periods. This case is considered in Villas-Boas (2004) with overlapping generations of consumers. In
comparison to the model above, this variation adds a force for the firm to raise the price charged to the new
consumers, as some of the new potential consumers have a high valuation for the firm's product.
Finally, another possibility is that the firm is able to learn something about the valuations of the consumers that
bought from it, and charge them differently because of their valuation. Note that this case is not “pure” behavior-based price discrimination as in the case above. In the case above, the firm only learns whether a consumer
valuation is such that a consumer makes a certain choice (either buy or not buy). In some cases a firm may be
able to learn something more about a consumer's valuation from a consumer that bought the product. For example,
a consumer who chose to buy the product may reveal to the firm during the purchase process some information
about her/his preferences and valuation, perhaps during the exchange of information with a salesperson, or the
interaction of the consumer with a website through the search process, or after the purchase during the servicing
of the consumer.
In addition to learning about a consumer's valuation, the firm may learn about the cost of servicing the consumer.
This can be particularly important in insurance or credit markets, where a firm may learn after purchase whether a
customer is more or less likely to have an accident, or more or less likely to default on its debt. It can also be
important in markets where customers can vary in the cost of servicing their purchases. For example, a cell
telephone company knows how often each of its customers calls their service center while it does not have that
information for noncustomers. In these cases a firm can benefit even more in this period from having information on
the consumers who bought in the previous period.
(p. 258) An extreme version of learning about the previous customers’ valuations is when the firm learns the
valuations of all the consumers that bought the product, and can perfectly price discriminate between them. In this
case, previous consumers with valuation v ≥ v* would each be charged a price Po(v) = v, each ending up with
zero surplus in this period. For consumers with valuation v < v*, the ones that did not purchase in the previous
period, the firm does not know the consumers’ valuation, and therefore the price that they are charged is the same
as obtained above, Pn = argmaxp P[F(v*) − F(p)]. In this case, consumers end up with lower surplus after revealing
their preference information, and the seller is better off, than in the case where the seller has only information
about the consumer purchase histories.
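To put rough numbers on the value of this finer information, the following sketch (assuming valuations uniform on [0, 1] and a cutoff v* = 0.6 ≥ P*, chosen only for illustration) compares the firm's second-period profit when it knows only who bought before with its profit when it learns the exact valuations of the previous buyers:

```python
# Illustrative sketch (not from the chapter): second-period profit with purchase-history
# information only versus perfect learning of previous buyers' valuations, assuming
# valuations uniform on [0, 1] and a cutoff v_star >= 1/2.
def profit_threshold_info(v_star):
    po = v_star                 # price to previous buyers (Po = max[v_star, 1/2] = v_star here)
    pn = v_star / 2             # argmax of p (v_star - p) for the uniform distribution
    return po * (1 - v_star) + pn * (v_star - pn)

def profit_perfect_learning(v_star):
    # Each previous buyer with valuation v pays exactly v; non-buyers still pay v_star / 2.
    mean_old_valuation = (v_star + 1) / 2
    pn = v_star / 2
    return mean_old_valuation * (1 - v_star) + pn * (v_star - pn)

v_star = 0.6
print(f"purchase history only: {profit_threshold_info(v_star):.3f}")    # 0.330
print(f"perfect learning     : {profit_perfect_learning(v_star):.3f}")  # 0.410
```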
Competition
Consider now a variation of the model where there is competition, as in Fudenberg and Tirole (2000) and Villas-Boas (1999). Suppose that we have a duopoly with Firms A and B competing in a market. A consumer with
valuation v for Firm A has valuation z − v for Firm B, where z is a constant, and such that 2v represents the relative
preference of a consumer for good A over good B. Denoting the support of v as [v̱, v̄], the two firms are symmetric
if z = v̄ + v̱, and f(v) is symmetric around z/2 = (v̱ + v̄)/2, which we assume. Suppose also that v̱ is high enough such
that the full market is covered. Then, as above, suppose conditions under which (1) firms do not know anything
about the consumers’ valuations, (2) firms can identify the consumers that bought from them, or (3) firms
learn the valuations of the consumers that bought from them.
If firms do not know anything about consumers’ valuations they will just set one price for all consumers. For Firm A,
charging price PA, with the competitor charging the price PB, the demand is 1 − F[(z + PA − PB)/2]. We can
obtain similarly the demand for Firm B, and obtain that in equilibrium, PA = PB = 2[1 − F(z/2)]/f(z/2) = 1/f(z/2). For the case where
F(v) is the uniform distribution this reduces to PA = PB = v̄ − v̱.
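The following sketch (assuming valuations uniform on [1, 2], so that z = 3 and the market is fully covered, an example chosen only for illustration) iterates best responses to recover the symmetric no-information equilibrium price, which here equals v̄ − v̱ = 1:

```python
# Illustrative sketch (not from the chapter): symmetric no-information duopoly prices
# when a consumer with valuation v for A has valuation z - v for B, assuming v
# uniform on [v_lo, v_hi] and full market coverage.
V_LO, V_HI = 1.0, 2.0
Z = V_LO + V_HI  # symmetric firms

def F(v):
    return min(max((v - V_LO) / (V_HI - V_LO), 0.0), 1.0)

def demand_A(pa, pb):
    # A consumer buys from A iff v - pa >= z - v - pb, i.e. v >= (z + pa - pb) / 2.
    return 1.0 - F((Z + pa - pb) / 2.0)

def best_response(pb, n=3000):
    grid = [3.0 * i / n for i in range(n + 1)]
    return max(grid, key=lambda pa: pa * demand_A(pa, pb))

pa = pb = 0.5
for _ in range(50):                        # iterate simultaneous best responses
    pa, pb = best_response(pb), best_response(pa)
print(f"equilibrium price ~ {pa:.3f}")     # expected: approximately v_hi - v_lo = 1
```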
If firms are able to identify the consumers that bought from them in the previous period, consumer preferences
remain unchanged, and there are no new consumers in the market, a firm may know that the consumers that
bought from it have a relative preference for its product, and may then be able to charge one price to its previous
customers, and another price to the previous customers of the competitor. Given the demand that a firm obtained
in the previous period, it can identify the threshold valuation of a consumer that bought from it in the previous
period. Denote this threshold valuation as v*, the valuation for Firm A, such that all consumers with valuation v > v* for
Firm A chose Firm A, and all consumers with (p. 259) valuation v < v* for Firm A chose Firm B (as, by definition,
their valuation is greater than z − v* for Firm B). We can then separate the problem into finding the equilibrium within
the previous customers for Firm A, and within the previous customers for Firm B.
For the previous customers of Firm A, Firm A can charge price PoA and Firm B can charge price PnB. The price for
Firm A of maximizing the profits from its previous customers is determined by the first-order condition
1 − F(v̂) − (PoA/2)f(v̂) = 0, and the one for Firm B is determined by F(v̂) − F(v*) − (PnB/2)f(v̂) = 0, where
v̂ = (z + PoA − PnB)/2 is the valuation of the consumer indifferent between the two offers. Solving these two equations together one can obtain the
equilibrium prices. For example, for the uniform distribution one can obtain PoA = (3v̄ − v̱ − 2v*)/3
and PnB = (3v̄ + v̱ − 4v*)/3. Under general conditions (for example, satisfied by the uniform distribution) one can
obtain that in equilibrium PnB is smaller than PoA, and that PoA is smaller than the equilibrium price under no
information, 1/f(z/2). The intuition for the first result is that Firm B has to price lower because it is trying to attract
consumers that have a higher preference for Firm A. The intuition for the second result is that Firm A's price has to
respond to a lower price from Firm B, and given strategic complementarity in prices, Firm A's price is lower than the
no information equilibrium prices. A similar analysis can be obtained for Firm B's previous customers. As the market
is fully covered, we can then obtain that all consumers pay a lower price than in the no-information case, and that
industry profits in the current period (second period of a two-period model) are now lower. That is, after learning
the information about consumers, competition leads to lower profits, while the result was different under monopoly.
This is for example the result in Fudenberg and Tirole (2000). This result is similar to the one in competition with
third-degree price discrimination, where there is a market force leading to lower profits than in competition with
uniform pricing, because there is now less differentiation across firms for the consumers being served for each pair
of prices. An example of this is Thisse and Vives (1988), where firms know the exact location of the consumers and
compete with location-based price discrimination. Note also that this result is for the industry profits after the
information is revealed. It may still be possible in this setting that the present value of industry profits is greater
when considered before information is obtained from consumers, if consumers foresee that their choices in the
earlier periods affect the offers that they will get from the firms in future periods. This effect is studied in Section 4.
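A short numerical sketch of the second-period prices on Firm A's turf, using the first-order conditions above (assuming valuations uniform on [1, 2], a symmetric first-period cutoff, and zero marginal costs, all chosen only for illustration); it recovers the pattern PnB < PoA < v̄ − v̱ described in the text:

```python
# Illustrative sketch (not from the chapter): second-period prices on Firm A's "turf"
# (previous customers with valuation v >= v_star for A), assuming v uniform on
# [v_lo, v_hi], z = v_lo + v_hi, and zero marginal costs.
V_LO, V_HI = 1.0, 2.0
Z = V_LO + V_HI

def turf_prices(v_star):
    # Jointly solving the two first-order conditions, which are linear in the uniform case:
    #   po_a = 2 (v_hi - v_hat),  pn_b = 2 (v_hat - v_star),  v_hat = (z + po_a - pn_b) / 2
    v_hat = (3 * V_HI + V_LO + 2 * v_star) / 6.0   # indifferent consumer on A's turf
    po_a = 2.0 * (V_HI - v_hat)                    # Firm A's price to its previous customers
    pn_b = 2.0 * (v_hat - v_star)                  # Firm B's poaching price
    return po_a, pn_b

v_star = (V_LO + V_HI) / 2.0                       # symmetric first-period cutoff
po_a, pn_b = turf_prices(v_star)
print(f"PoA = {po_a:.3f}, PnB = {pn_b:.3f}, no-information price = {V_HI - V_LO:.3f}")
# expected: PoA = 2/3, PnB = 1/3, both below 1
```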
One particular characteristic of the framework above is that firms know as much about their previous customers as
about the previous customers of the competitors. That is, if a new consumer approaches a firm the firm knows that
the consumer was a customer of the competitor in the previous period. One way to take out this characteristic from
the model is to allow new generations of consumers to come into the market as considered in Villas-Boas (1999). In
that case, a firm knows that a pool (p. 260) of new consumers is composed of old consumers that do not have a
high valuation for its product (and a high valuation for the competitor), and by a new generation of consumers,
where several of them may have a high valuation for the firm's product. This is then a force for the firm to charge a
higher price to its new customers (as some of them may have a high valuation), which then, by strategic
complementarity of prices, leads the competitor's price to its previous customers to be higher.
Another variation, as considered above for the monopoly case, is that a fraction of consumers change preferences
from the previous period to the new period (e.g., Chen and Pearcy, 2010). That is, some of a firm's previous
customers may now have a lower valuation for the firm's product than in the previous period, and some of the
competitor's previous customers may now have a higher valuation for the firm's product. This will then lead the
firm, in comparison to the case when the preferences do not change, to lower the price for its previous customers,
and raise the price to its new customers.
Another possibility for a firm to learn about its previous customers’ valuations, also as discussed above for the
monopoly case, is to learn something about the valuation of each consumer during the purchase process or when
servicing the consumer post-purchase, for the consumers that bought from the firm. An extreme version of this
case is when the firm learns perfectly the valuation of its previous customers. For Firm A, the price for a previous
customer with a valuation v would then be the price (if greater than its marginal cost) that would leave that
consumer just indifferent between purchasing the product from Firm A at that price, and purchasing the
product from Firm B at its price for the new customers. That is, v − PoA(v) = z − v − PnB, so that PoA(v) = 2v − z + PnB. Note that if
v* < z/2,
the competitor would find it profitable to attract some of the previous customers of Firm A, as Firm A does not
charge a price below its marginal cost. If we have
v* ≥ z/2, then the equilibrium involves PnB = 0 (as marginal
costs are assumed equal to zero) and PoA(v) = 2v − z. For the uniform distribution case one can still obtain that in
this case profits are lower than in the no information case. One can however consider variations of this model
where the firms learn the profitability of their previous customers (e.g., insurance markets, credit markets), which
can potentially lead to more information leading to ex-post greater profits, as the informed firm may be able to
extract more surplus from the different consumers (see, for example, Shin and Sudhir, 2010, for a study along
these lines).
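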
3. Strategic Behavior of Consumers
The activities of the firms considered in the previous section, namely using information gained from consumers
through their purchase decisions, are contingent on those decisions, which raises the possibility that consumers
may understand (p. 261) that the decisions that they take may affect the options that will be available to them in
future periods. In this section we assume that the firm does not have any information about any individual
consumer prior to that consumer's decision, and further assume that the firm only uses this information for one
more period after this consumer's decision (as in the analysis in the previous section). This could be seen as the
consumer entering the market in this period and staying in the market for only two periods. The consumers
discount the next period payoff with the discount factor δc ≤ 1.
A consumer considers whether to buy the product now at a price pf and be offered a price po in the next period, or
not to buy it this period, and be offered the product in the next period at the price pn. If a consumer with valuation v
buys the product in this period, he gets a surplus v − pf + δc max[v − po, 0]. If the consumer decides
not to buy this period, he will get a surplus δc max[v − pn, 0]. As noted in the preceding section, the marginal
consumer who is indifferent between buying and not buying in the current period, v*, ends up with zero surplus in
the next period, because he faces a price po ≥ v*. Then the valuation v* of a consumer indifferent between buying
and not buying in the current period is determined by v* − pf = δc max[v* − pn, 0].
From this it is clear that if, for some reason, the price to the new consumers is expected to increase in the next
period, pn ≥ pf , then all consumers with valuation greater than the current price charged, pf , should buy the
product. Therefore, in this case we have v* = pf . This can occur, for example, when there is entry of new
consumers in the next period, such that the firm chooses to raise its price to the new consumers. Villas-Boas
(2004) shows that in such a setting the equilibrium involves price cycles, where in some periods the price to the
new consumers is indeed higher than the price to new consumers in the previous period.
If, on the other hand, the consumers expect the price to the new consumers to be lower next period, pn < pf (which,
as shown in the earlier section, happens when there is no entry of more consumers next period), then consumers
will be strategic, and there are some consumers with valuation above the current price pf who prefer to wait for the
next period, because they get a better deal next period. In this case the valuation of the marginal consumer
indifferent between buying and not buying in the current period can be obtained as v* = (pf − δc pn)/(1 − δc ).
Consumers with valuation v ∈ [pf , v*] would buy the product if they were myopic, but do not buy if they are
strategic. This effect may hurt the firm, as fewer consumers buy in this period when the firm is able to practice
behavior-based price discrimination, than when the firm does not keep information of the consumers that bought
from it. This is known as the “ratchet effect” of consumers losing surplus (i.e., being charged a higher price)
by revealing some information about their valuations (Freixas et al., 1985).
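The cutoff of the marginal strategic consumer follows directly from the indifference condition above. The sketch below (with hypothetical prices pf = 0.35 and pn = 0.29, chosen only for illustration) shows how the cutoff rises with the consumer discount factor, so that fewer consumers buy than under myopic behavior:

```python
# Illustrative sketch (not from the chapter): the marginal consumer's cutoff when buyers
# anticipate behavior-based pricing next period (pn < pf), versus the myopic cutoff pf.
def strategic_cutoff(pf, pn, delta_c):
    # Indifference v - pf = delta_c (v - pn)  =>  v* = (pf - delta_c pn) / (1 - delta_c)
    return (pf - delta_c * pn) / (1.0 - delta_c)

pf, pn = 0.35, 0.29   # hypothetical current price and expected next-period price to new consumers
for delta_c in (0.0, 0.4, 0.8):
    v_star = strategic_cutoff(pf, pn, delta_c)
    print(f"delta_c = {delta_c:.1f}: strategic cutoff v* = {v_star:.3f} (myopic cutoff = {pf:.2f})")
```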
(p. 262) If the firm is able to learn something about the valuation of each consumer during the purchase process,
then the marginal consumer if buying in the current period also ends up getting zero surplus in the next period, and
therefore, the valuation of the marginal consumer buying in the current period is obtained in the same way.
In another variation of the model mentioned above, if preferences change through time, then a marginal consumer
buying in the current period may have a positive surplus in the next period if his preferences change to a higher valuation
than the price to the previous customers. Because some of the previous customers may lower their valuation, the
firm may also lower its price to the previous customers. These two effects are a force for more consumers to be
willing to buy at a given price in the current period (a lower price, and consumers are less likely to have zero surplus
in the next period if buying in the current period). In the extreme case where valuations are completely
independent from period to period, the problem of the seller ends up being like a static problem in every period,
and all consumers with valuation above the price charged in a current period buy the product in that period.
Competition
Under competition, if the firms are not able to identify their previous customers, then customers make choices in
each period as if there are no future effects, and the market equilibrium is as characterized above for the no
information case.
If the firms are able to identify their previous customers, then in the competition model of the previous section, a
marginal consumer has to decide between buying from Firm A and getting the competitors’ product in the next
period at the price for its new customers, getting a surplus of v − pfA + δc (z − v − pnB), and buying from Firm B
and getting product A in the next period at the price of its new customers, getting a surplus of z − v − pfB + δc (v − pnA). The valuation v* for product A of the consumer indifferent between buying either product in the current
period has then to satisfy making these two surpluses equal, leading to (2v* − z)(1 − δc ) = pfA − pfB + δc (pnB −
pnA) where pnA and pnB are also a function of v*, given the analysis in the previous section. From above, if the
marginal consumers decide to buy product A in the current period instead of product B, then they know that Firm B
will charge a higher price to new consumers in the next period (as those new consumers have now a greater
preference for Firm B). That means that a price cut by Firm A in the current period leads to fewer consumers
switching to product A than if the next period prices were fixed, as consumers realize that by switching they will be
charged a higher price in the next period from the product that they will buy, product B. That is, the possibility of
behavior-based price-discrimination in the future makes the consumers less price sensitive in the current period.
For the uniform distribution example we can obtain (2v* − z)(1 + δc /3) = pfA − pfB, where the (p. 263) demand for
product A is composed of consumers with a valuation for product A of v ≥ v*. This lower price sensitivity may lead
then to higher prices in the first period, as discussed in the next section.
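The reduced price sensitivity can be seen directly from the uniform-case cutoff relation above. In the sketch below (with z = 3 and hypothetical first-period prices, chosen only for illustration), the cutoff moves less with the price gap when firms will recognize their customers than in the no-information case:

```python
# Illustrative sketch (not from the chapter): first-period cutoff with and without
# customer recognition, using the uniform-case relations
#   no information   : (2 v* - z)                  = pfA - pfB
#   with recognition : (2 v* - z)(1 + delta_c / 3) = pfA - pfB
Z = 3.0   # z = v_lo + v_hi for, e.g., valuations uniform on [1, 2]

def cutoff(pfa, pfb, factor):
    return (Z + (pfa - pfb) / factor) / 2.0

pfa, pfb, delta_c = 1.1, 1.0, 0.9
print(f"no information  : v* = {cutoff(pfa, pfb, 1.0):.4f}")
print(f"with recognition: v* = {cutoff(pfa, pfb, 1.0 + delta_c / 3.0):.4f}")
# The cutoff responds less to the price gap under recognition: demand is less elastic.
```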
When some of the consumers’ preferences change from period to period the results above continue to hold, but
we get closer to the no information case.
When there is entry of new consumers, as described in the section above, the effect of current demand realization
on future prices for new consumers is reduced. In that case, demand may become more price sensitive when firms
can identify their previous customers. This is because the consumers at the margin know that they switch products
from period to period. Then, their problem is in which order to get the products: product A in the current
period followed by product B in the next period, or the reverse order. In the steady state of a symmetric equilibrium
in an overlapping generations model, consumers become less concerned about this order as their discount factor
δc increases, so they become more and more price sensitive (Villas-Boas, 1999). This effect is also present in the
case where there is no entry of new consumers, but in that case the effect mentioned above of greater prices in
the next period for switching consumers dominates.
When firms learn the consumers’ exact valuations from their previous customers, the marginal consumers in a
symmetric equilibrium end up with a next-period surplus equal to z − v* = z/2, as the poaching price is zero and a firm
charges a price equal to 2v − z to its previous customers in the next period, as presented in Section 2. In this
case, for the symmetric market, the marginal consumers who switch to product A know that they will be able to buy
product B at a price that is below what they would get if they stayed with product B. This means that in this case
the demand is more price sensitive when charging a lower price in the current period than in the case when firms
did not learn about the valuations of their previous customers. For the uniform example, if pfA < pfB, we have the
marginal valuation for product A of a consumer that buys product A in the current period defined by
(2v* − z)(1 − δc ) = pfA − pfB. This shows for the uniform case that demand is more price sensitive than when firms do
not learn about the valuations of their previous customers.
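Putting the informational regimes side by side (using the uniform-case cutoff relations above and hypothetical prices, chosen only for illustration), the sketch below shows that the first-period cutoff, and hence demand, responds most strongly to a price gap when firms will learn exact valuations, and least strongly when they only observe purchase histories:

```python
# Illustrative sketch (not from the chapter): price sensitivity of the first-period cutoff
# under three informational regimes in the uniform case:
#   no information         : (2 v* - z)                  = pfA - pfB
#   purchase histories only: (2 v* - z)(1 + delta_c / 3) = pfA - pfB
#   exact valuations learnt: (2 v* - z)(1 - delta_c)     = pfA - pfB
Z = 3.0

def cutoff(pfa, pfb, factor):
    return (Z + (pfa - pfb) / factor) / 2.0

pfa, pfb, d = 1.1, 1.0, 0.6
for label, factor in [("no information", 1.0),
                      ("purchase histories only", 1.0 + d / 3.0),
                      ("exact valuations learnt", 1.0 - d)]:
    print(f"{label:24s}: v* = {cutoff(pfa, pfb, factor):.4f}")
```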
4. Firms Gaining Information About Consumers
Now we consider the strategic decisions of the firms when their decisions change the information that firms gain in
the current period. In order to focus exclusively on the strategic effects of gaining information, suppose that the
firm does not currently know anything about the consumers’ valuations, and that it will only use the information
obtained now for the next period, where payoffs will be discounted (p. 264) with discount factor δF. Note that if no
information is gained by the consumers’ choices, there are no dynamic effects and the firm's decision is exactly as
considered in Section 2 for the no information case.
Monopoly
We first examine the impact of various information structures under monopoly.
Consider the case where the seller in the next period is able to recognize its previous customers so that it knows
that their valuation for the product is above a certain threshold, and that the valuation of the other consumers is
below that threshold. The problem of the seller (if the price to the new consumers falls in the next period, which can
be shown to hold) is then to choose Pf to maximize Pf [1 − F(v*(Pf ))] + δF π2 (Pf ),
where v*(Pf ) is the valuation of the marginal consumer purchasing in the current period (as obtained in Section 3),
Pn(Pf ) is obtained as described in Section 2 as Pn = argmaxp P[F(v*) − F(p)], and π2 (Pf )
is the profit in the next period, which, given the analysis in the preceding sections, can be presented as
π2 (Pf ) = Po[1 − F(v*)] + Pn[F(v*) − F(Pn)] with Po = max[v*, P*],
where the first term represents the profit next period from the consumers that purchased the product also in the
current period, and the second term represents the profit next period from the consumers that did not purchase the
product in the current period.
For the case when firms and consumers discount the future in the same way, δc = δF = δ, one can show that at the
optimum the valuation of the marginal consumers choosing to purchase in the current period, v*, is strictly greater
than the valuation of the marginal consumer in the no-information case, P*. As discussed above, this is because consumers are aware that
prices may fall in the future, and therefore prefer to wait for the lower prices rather than buy now and also be
charged a high price in the next period. It can also be shown under these conditions, and for the same reason, that
the present value of profits is lower in this case than when the firm does not keep information about who are its
previous customers. That is, in a monopoly setting, a firm knowing whom its previous customers are can have
lower profits.
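A compact numerical check of this claim (assuming valuations uniform on [0, 1] and a common discount factor δ = δc = δF, chosen only for illustration): searching over the first-period cutoff, the present value of profits with customer recognition stays below the no-recognition benchmark of (1 + δ)/4.

```python
# Illustrative sketch (not from the chapter): two-period monopoly profits with and
# without customer recognition, assuming valuations uniform on [0, 1] and a common
# discount factor delta = delta_c = delta_F.
def profits_with_recognition(delta, n=2000):
    best = 0.0
    # Search over the first-period cutoff v (>= 1/2, so that Po = v and Pn = v / 2).
    for i in range(n // 2, n + 1):
        v = i / n
        pn = v / 2.0                          # next-period price to non-buyers
        pf = v - delta * (v - pn)             # strategic consumers: v - pf = delta (v - pn)
        pi1 = pf * (1.0 - v)                  # first-period profit
        pi2 = v * (1.0 - v) + pn * (v - pn)   # second-period profit (Po = v)
        best = max(best, pi1 + delta * pi2)
    return best

def profits_without_recognition(delta):
    return 0.25 * (1.0 + delta)               # static price 1/2 in each period

for delta in (0.2, 0.5, 0.8):
    print(f"delta = {delta:.1f}: with recognition = {profits_with_recognition(delta):.4f}, "
          f"without = {profits_without_recognition(delta):.4f}")
# The present value of profits is lower with recognition, as stated in the text.
```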
As noted above, a variation of this possibility is when some consumers’ preferences change from this period to the
next. In this case, the results above continue to go through, but we now get closer to the no information case.
(p. 265) Another possibility is that there is entry of new consumers in the next period and the firm faces a market
consisting of overlapping generations of consumers with each generation coming into the market in each period. In
such a setting, a low price for new consumers satisfies all the existing low valuation consumers and may be
followed in the next period by a high price for new consumers to take advantage of the new high valuation
consumers coming into the market in the next period. Villas-Boas (2004) shows that this is indeed the case and that
the equilibrium involves price cycles in both the prices to the new and previous customers. In periods when the
firm charges low prices to new consumers, they know that the next period price is higher, and then they all buy in
that period as long as their valuation is above the price charged. In periods when the firm charges high prices,
consumers know that the next period prices to new consumers will be low, and therefore some consumers with
valuation above the price charged decide not to buy.
Another interesting issue to consider is what happens when consumers live as long as firms. Hart and Tirole (1988)
consider this case when consumers can be of only two types. They find that if the discount factor δ > 1/2 there is
no price discrimination when the horizon tends to infinity, with the firm charging always the low price. The intuition
is that if the high valuation consumer ever reveals his valuation he will get zero surplus forever, and if there were
an equilibrium where the high valuation consumer would buy at a price above the lowest price, he would gain from
deviating and being seen as a low valuation consumer forever. Schmidt (1993) considers a variation of this setting
where the seller is the one with private information (on its costs), with a discrete number of consumer types
(possibly more than two types), and focusing on the Markov-perfect equilibria. The results there can also be
presented in terms of private information by the consumers. In that case, the result is as in Hart and Tirole—there is
no price discrimination as the horizon tends to infinity. The intuition is that if consumers are given a chance to build
a reputation that they have a low valuation they will do so (see also Fudenberg and Levine 1989).
Another interesting variation of randomly changing preferences is considered in Kennan (2001). There, a positive
serial correlation leads to stochastic price cycles as purchases reveal a high consumer valuation, which is
followed by a high price, while no purchases reveal low consumer valuation, which is followed by low prices.
Another possibility discussed in the previous sections is the case where a firm when serving the customers that
choose to purchase also learns their valuation. In that case consumers are also strategic in the current period, and
the firm is hurt by the reduced number of consumers that decide to buy in the current period. However, because
now the firm is able to extract in the next period all the valuation from the consumers that purchase in the current
period, the present value of profits can actually be higher with this ability to learn the valuation of its previous
customers.
(p. 266)
Competition
We discuss now the effect of competition within the competition framework described in Section 2. When the firms
are not able to recognize their previous consumers we are back to the no information case, and the market
equilibrium is as characterized in Section 2 for that case.
If firms are able to recognize their previous customers while noting that consumers that choose their product have
a valuation above a certain threshold, we can use the analysis in the two previous sections to set up the problem
of each firm. In order to present sharper results let us focus on the case of the uniform distribution of consumer
valuations. The problem for Firm A can then be set as choosing pfA to maximize pfA [1 − F(v*(pfA , pfB ))] + δF π2A (v*),
where v*(pfA, pfB) is the valuation for product A of the marginal consumer choosing product A in the current period,
as computed in Section 3, and π2A (v*)
represents the next period profit for Firm A given the current period
prices. For the uniform distribution, as we obtained in Section 3, we have (2v* − z)(1 + δc /3) = pfA − pfB.
Given this we can then write π2A (v*)
given the analysis of Section 2 of the equilibrium in the next period
given v*. Note that π2A (v*)
is composed of both the profits from Firm A's previous customers and the
profits from Firm B's previous customers.
In such a setting one can then show that the first period prices are higher than in the no information case. This is
because the current period demand is less elastic than when there is no behavior-based price discrimination, as
discussed in Section 3, because consumers know that if they do not switch the firm will offer them in the next
period a lower price (a price to the new consumers in the next period). Note also that in this setting, and in relation
to the symmetric outcome, if firms had more or fewer customers they would be better off in the next period. For the
uniform distribution these two effects cancel out in the two-period model, and the first-period prices end up being
independent of the firms’ discount factor (Fudenberg and Tirole 2000).
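The following sketch (a numerical construction for the uniform case on [1, 2], using the second-period turf prices and the first-period cutoff relation above, with the rival-turf prices obtained by the mirror-image argument; parameter values chosen only for illustration) iterates first-period best responses and finds a symmetric first-period price above the no-information level v̄ − v̱ = 1:

```python
# Illustrative sketch (not from the chapter): symmetric first-period prices in a two-period
# duopoly with customer recognition, assuming valuations uniform on [1, 2] (z = 3),
# a consumer discount factor delta_c and a firm discount factor delta_f.
V_LO, V_HI = 1.0, 2.0
Z = V_LO + V_HI
T = V_HI - V_LO

def pi2_A(v_star):
    # Firm A's next-period profit: retention on its own turf plus poaching on B's turf.
    v_hat_own = (3 * V_HI + V_LO + 2 * v_star) / 6.0
    v_hat_rival = (V_HI + 3 * V_LO + 2 * v_star) / 6.0
    return 2 * (V_HI - v_hat_own) ** 2 / T + 2 * (v_star - v_hat_rival) ** 2 / T

def cutoff(pfa, pfb, delta_c):
    v = Z / 2.0 + (pfa - pfb) / (2.0 * (1.0 + delta_c / 3.0))
    return min(max(v, V_LO), V_HI)

def best_response(p_rival, delta_c, delta_f, n=1500):
    def total_profit(pfa):
        v = cutoff(pfa, p_rival, delta_c)
        return pfa * (V_HI - v) / T + delta_f * pi2_A(v)
    return max([3.0 * i / n for i in range(n + 1)], key=total_profit)

delta_c = delta_f = 0.9
pfa = pfb = 1.0
for _ in range(30):
    pfa, pfb = best_response(pfb, delta_c, delta_f), best_response(pfa, delta_c, delta_f)
print(f"symmetric first-period price ~ {pfa:.3f} (no-information price = {T:.3f})")
# First-period prices exceed the no-information level, as discussed in the text.
```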
If some of the consumer preferences can change from this period to the next, the results presented here would go
through, but now the equilibrium prices would be lower and closer to the case where there cannot be behavior-based price discrimination.
Another interesting possibility is when there is entry of new consumers in each period in a set-up with overlapping
generations of consumers, such that in each period new consumers for a firm could be either previous customers
from the competitor or new consumers to the market. In this situation one has to fully consider the dynamic effects
over several periods of the pricing decisions in the current period (Villas-Boas 1999). One result is that the prices
to (p. 267) the new consumers end up being below the ones in the case with no behavior-based price
discrimination if the consumers care enough about the future. This is because the demand from the new
consumers is more elastic than in the case when no information is collected because the marginal consumers are
just deciding the order in which they buy the different products, given that the marginal consumers switch products
in equilibrium.
Another possibility, along the lines discussed in the earlier sections, is when firms are able to learn the valuation of
their previous consumers during the purchase process. In this case competing firms can potentially gain more from
learning the valuation of consumers than from not learning, which could be a force for greater competition for
consumers.
5. Discussion and Related Topics
The results above concentrate on the case where the firms can only offer short-term contracts. In some cases
firms may be able to attract consumers with the promise of guaranteed future prices. Hart and Tirole (1988)
consider this situation in a monopoly setting, and show that with such contracts the seller is able to sell to the high
valuation consumers at a price above the valuation of low valuation consumers, because it is able to commit to an
average price in future periods that does not extract all the surplus from the high valuation consumers. Hart and
Tirole also show that the outcome obtained is the same as if the seller were selling a durable good.3 This also
shows that behavior-based price discrimination in a monopoly setting leads to a lower present value of profits than
in the case of a monopolist selling a durable good. Battaglini (2005) considers the case of long-term contracts with
two types of infinitely lived consumers with preferences changing over time (as in Kennan 2001) and continuous
consumption. He shows that in the optimal contract the efficient quantity is supplied in finite time (after the first
time the consumer's type is the high-valuation type).
Fudenberg and Tirole (2000) consider the effect of long-term contracts in competition in a two-period model. They
show that long-term and short-term contracts co-exist in equilibrium, with some consumers taking short-term
contracts, and that there is less switching in equilibrium when long-term contracts are possible. The intuition for this
last result is that the existence of long-term contracts (purchased by consumers with more extreme preferences)
leads firms to be more aggressive (lower prices) in their short-term contracts, which yields less switching.
Under competition and short-term contracts, Esteves (2010) and Chen and Zhang (2009) consider the case when
the distribution of valuations is in discrete types. In this case, the equilibrium involves mixed strategies. Esteves
considers the case with two consumer types, each having a preference for one of the competing firms, and myopic
consumers, and finds that first period prices tend to fall when (p. 268) the discount factor increases. Chen and
Zhang consider the case with three consumer types, with one type that has the same valuation for both firms, and
two other types that have extreme preferences for each firm. In this set-up expected prices in the first period are
higher even when consumers are myopic, as a firm that sells more in the first period is not able to discriminate in
the next period between the consumer types that can consider buying from that firm.
Another interesting issue to consider under competition is that in some markets where firms can practice behavior-based price discrimination, consumers also have switching costs of changing supplier. Note that switching costs
alone can lead to dynamic effects in the market as studied in the literature.4 The effects of the interaction of
switching costs with behavior-based price discrimination are studied in Chen (1997) and Taylor (2003). Chen
considers a two-period model and shows that with switching costs, behavior-based pricing leads to rising prices
over time. Taylor considers the case of multiple periods and shows that prices are constant over time until the last
period, and that, in moving from two firms to three firms, there is aggressive competition for the switching
consumers. In addition, Taylor considers the case when the firms may have some prior information about the
switching costs of different consumers.5
On a related topic, firms could also offer different products depending on the purchase history of consumers. Some
results on these effects are presented in Fudenberg and Tirole (1998), Ellison and Fudenberg (2000), and Zhang
(2011).
When studying the effects of price discrimination based on purchase history, one can also think of the implications
for privacy concerns. For studies on the effects on privacy along these and related dimensions see, for example,
Taylor (2004a, 2004b), Calzolari and Pavan (2006), and Ben-Shoham (2005).6
In some markets, firms also learn consumer characteristics that directly affect the cost of servicing them. This is for
example the case in credit markets, insurance markets, and labor markets. It is also the case when the costs of
servicing customers are heterogeneous across consumers, and are learned through interaction with them. In the
digital economy these aspects can become important with after-purchase service and the possibility of returns.
The effects of these issues in credit markets are considered, for example, in Pagano and Jappelli (1993), Padilla
and Pagano (1997, 2000), Dell’Ariccia et al. (1999), and Dell’Ariccia and Marquez (2004).7
Another important recent practice of firms with the digital economy is to make offers to consumers based on their
search behavior. Armstrong and Zhou (2010) look at this case under competition and find that firms may have an
incentive to offer a lower price on a first visit by a consumer than on a return visit. This possibility can then lead to
higher equilibrium prices in the market. For a recent survey of related effects of product and price comparison sites
see Moraga and Wildenbeest's (2011) chapter 9 of this Handbook.
Finally, another interesting issue to study would be the effect of purchase history on the advertising messages that
consumers may receive.8
(p. 269) 6. Conclusion
This article presents a summary of the effects of price discrimination based on purchase history. With the digital
economy this type of information has become more available to firms, and consequently they are engaging more
frequently in this type of price discrimination in markets such as telecommunications, magazine or newspaper
subscriptions, banking services, and credit cards.9 For situations where firms have substantial market power (monopoly)
we found that firms benefit after the information is gained, but that this may lead consumers to be more careful
when revealing information, which could potentially hurt the firm's profits. For situations with competition, if the
competitors are aware that firms have purchase history information, more information may actually lead to more
intense competition after the information is gained. However, before information is gained, consumers may become
less price sensitive as the marginal consumers may get a better price offer in the next period if they do not switch
brands in response to a price cut. This may then lead to softer competition prior to firms gaining information. With
new consumers coming into the market this effect is attenuated, as the prices for the new customers are now less
affected by the fact that these new customers have a lower valuation for the firm's product.
References
Acquisti, A., Varian, H.R., 2005. Conditioning Prices on Purchase History. Marketing Science 24, 367–381.
Armstrong, M., Zhou, J., 2010. Exploding Offers and Buy-Now Discounts. Working paper, University College London.
Battaglini, M., 2005. Long-Term Contracting with Markovian Consumers. American Economic Review 95, 637–658.
Belleflamme, P., Peitz, M., 2010. Industrial Organization: Markets and Strategies, Cambridge University Press:
Cambridge, U.K.
Ben-Shoham, A., 2005. Information and Order in Sequential Trade. Working paper, Harvard University.
Borenstein, S., 1985. Price Discrimination in Free-Entry Markets. RAND Journal of Economics 16, 380–397.
Bouckaert, J., Degryse, H., 2004. Softening Competition by Inducing Switching in Credit Markets. Journal of Industrial
Economics 52, 27–52.
Brandimarte, L., Acquisti, A., 2012. The Economics of Privacy. In M. Peitz, J. Waldfogel (eds.), The Oxford Handbook
of the Digital Economy. Oxford/New York: Oxford University Press.
Calzolari, G., Pavan, A., 2006. On the Optimality of Privacy in Sequential Contracting. Journal of Economic Theory
130, 168–204.
Chen, Yongmin, 1997. Paying Customers to Switch. Journal of Economics and Management Strategy 6, 877–897.
Chen, Yuxin, Iyer, G., 2002. Consumer Addressability and Customized Pricing. Marketing Science 21, 197–208.
Chen, Yongmin, Pearcy, J.A., 2010. Dynamic Pricing: When to Entice Brand Switching and When to Reward
Consumer Loyalty. RAND Journal of Economics 41, 674–685.
Chen, Yuxin, Narasimhan, C., Zhang, Z.J., 2001. Individual Marketing and Imperfect Targetability. Marketing Science
20, 23–41.
Chen, Yuxin, Zhang, Z.J., 2009. Dynamic Targeted Pricing with Strategic Consumers. International Journal of
Industrial Organization 27, 43–50.
Choi, J.P., 2012. Bundling Information Goods. In M. Peitz, J. Waldfogel (eds.), The Oxford Handbook of the Digital
Economy. Oxford/New York: Oxford University Press.
Corts, K.S., 1998. Third-Degree Price Discrimination in Oligopoly: All-Out Competition and Strategic Commitment.
RAND Journal of Economics 29, 306–323.
Dell’Ariccia, G., Friedman, E., Marquez, R., 1999. Adverse Selection as a Barrier to Entry in the Banking Industry. RAND
Journal of Economics 30, pp. 515–534. (p. 271)
Dell’Ariccia, G., Marquez, R., 2004. Information and Bank Credit Allocation. Journal of Financial Economics 72, 185–
214.
Dobos, G., 2004. Poaching in the Presence of Switching Costs and Network Externalities. Working paper, University
of Toulouse.
Ellison, G., Fudenberg, D., 2000. The Neo-Luddite's Lament: Excessive Upgrades in the Software Industry. RAND
Journal of Economics 31, 253–272.
Engelbrecht-Wiggans, R., Milgrom, P., Weber, R., 1983. Competitive Bidding and Proprietary Information. Journal of
Mathematical Economics 11, 161–169.
Esteves, R.B., 2010. Pricing under Customer Recognition. International Journal of Industrial Organization 28, 669–
681.
Farrell, J., Klemperer, P., 2007. Co-ordination and Lock-In: Competition with Switching Costs and Network Effects. In:
M. Armstrong, R. Porter (Eds.), Handbook of Industrial Organization 3, Elsevier, pp. 1967–2072.
Freixas, X., Guesnerie, R., Tirole, J., 1985. Planning under Incomplete Information and the Ratchet Effect. Review of
Economic Studies 52, 173–191.
Fudenberg, D., Levine, D., 1989. Reputation and Equilibrium Selection in Games with a Patient Player. Econometrica
57, pp. 759–778.
Fudenberg, D., Tirole, J., 1998. Upgrades, Trade-ins, and Buybacks. RAND Journal of Economics 29, pp. 238–258.
Fudenberg, D., Tirole, J., 2000. Customer Poaching and Brand Switching. RAND Journal of Economics 31, pp. 634–
657.
Fudenberg, D., Villas-Boas, J.M., 2006. Behavior-Based Price Discrimination and Customer Recognition. In: T.
Hendershott (Ed.) Handbooks in Information Systems: Economics and Information Systems, Chap.7, Elsevier,
Amsterdam, The Netherlands.
Hart, O.D., Tirole, J., 1988. Contract Renegotiation and Coasian Dynamics. Review of Economic Studies 55, 509–540.
Hermalin, B.E., Katz, M.L., 2006. Privacy, Property Rights & Efficiency: The Economics of Privacy as Secrecy.
Quantitative Marketing and Economics 4, 209–239.
Hirshleifer, J., 1971. The Private and Social Value of Information and the Reward to Inventive Activity. American
Economic Review 61, 561–574.
Holmes, T.J., 1989. The Effects of Third-Degree Price Discrimination in Oligopoly. American Economic Review 79,
244–250.
Hui, K.-L., Png, I.P.L., 2006. The Economics of Privacy. In: T. Hendershott (Ed.) Handbooks in Information Systems:
Economics and Information Systems, Elsevier, Amsterdam, The Netherlands.
Iyer, G., Soberman, D., Villas-Boas, J.M., 2005. The Targeting of Advertising. Marketing Science 24, 461–476.
Kennan, J., 2001. Repeated Bargaining with Persistent Private Information. Review of Economic Studies 68, 719–755.
Moraga, J.-L., Wildenbeest, M., 2011. Price Dispersion and Price Search Engines. In M. Peitz, J. Waldfogel (eds.), The
Oxford Handbook of the Digital Economy. Oxford/New York: Oxford University Press.
Morgan, J., Stocken, P., 1998. The Effects of Business Risk on Audit Pricing. Review of Accounting Studies 3, pp.
365–385.
Padilla, J., Pagano, M., 1997. Endogenous Communications among Lenders and Entrepreneurial Incentives. Review
of Financial Studies 10, pp. 205–236. (p. 272)
Padilla, J., Pagano, M., 2000. Sharing Default Information as a Borrower Incentive Device. European Economic
Review 44, pp. 1951–1980.
Pagano, M., Jappelli, T., 1993. Information Sharing in Credit Markets. Journal of Finance 48, pp. 1693–1718.
Pazgal, A., Soberman, D., 2008. Behavior-Based Discrimination: Is It a Winning Play, and If So, When? Marketing
Science 27, pp. 977–994.
Roy, S., 2000. Strategic Segmentation of a Market. International Journal of Industrial Organization 18, pp. 1279–
1290.
Schmidt, K., 1993. Commitment through Incomplete Information in a Simple Repeated Bargaining Game. Journal of
Economic Theory 60, pp. 114–139.
Shaffer, G., Zhang, Z.J., 2000. Pay to Switch or Pay to Stay: Preference-Based Price Discrimination in Markets with
Switching Costs. Journal of Economics and Management Strategy 9, pp. 397–424.
Sharpe, S., 1990. Asymmetric Information, Bank Lending and Implicit Contracts: A Stylized Model of Customer
Relationships. Journal of Finance 45, 1069–1087.
Shin, J., Sudhir, K., 2010. A Customer Management Dilemma: When Is It Profitable to Reward Existing Customers?
Marketing Science 29, pp. 671–689.
Stegeman, M., 1991. Advertising in Competitive Markets. American Economic Review 81, pp. 210–223.
Taylor, C.R., 2003. Supplier Surfing: Price-Discrimination in Markets with Repeat Purchases. RAND Journal of
Economics 34, pp. 223–246.
Taylor, C.R., 2004a. Consumer Privacy and the Market for Customer Information. RAND Journal of Economics 35,
pp. 631–650.
Taylor, C.R., 2004b. Privacy and Information Acquisition in Competitive Markets. Working paper, Duke University.
Thisse, J.-F., Vives, X., 1988. On the Strategic Choice of Spatial Price Policy. American Economic Review 78, pp.
122–137.
Ulph, D., Vulkan, N., 2007. Electronic Commerce, Price Discrimination and Mass Customization. Working paper,
University of Oxford.
Villanueva, J., Bhardwaj, P., Balasubramanian, S., Chen, Y., 2007. Customer Relationship Management in
Competitive Environments: The Positive Implications of a Short-Term Focus. Quantitative Marketing and Economics
5, pp. 99–129.
Villas-Boas, J.M., 1999. Dynamic Competition with Customer Recognition. RAND Journal of Economics 30, pp. 604–
631.
Villas-Boas, J.M., 2004. Price Cycles in Markets with Customer Recognition. RAND Journal of Economics 35, pp. 486–
501.
Wathieu, L., 2007. Marketing and the Privacy Concern. Working paper, Harvard University.
Zhang, J., 2011. The Perils of Behavior-Based Personalization. Marketing Science 30, pp. 170–186.
Notes:
(1.) Fudenberg and Villas-Boas (2006) discusses in further detail the market forces described here, as well as the effects of long-term contracts (including the relationship to durable goods and bargaining), multiple
products and product design, switching costs, privacy concerns and credit markets. One important aspect of price
discrimination in the digital economy that is not discussed here is that of bundling of information goods; see
chapter 11 by Choi (2012) in this Handbook for a survey of this literature. For a textbook treatment of behavior-based price discrimination, see Belleflamme and Peitz (2010).
(2.) The previous period consumer decisions are considered in the next section.
(3.) Hart and Tirole also discuss what happens under commitment. See also Acquisti and Varian (2005) on the
comparison of commitment with noncommitment. Acquisti and Varian consider also the effect of the seller offering
enhanced services.
(4.) See, for example, Farrell and Klemperer (2007) for a survey of the switching costs literature.
(5.) For the analysis of the second period of a model with switching costs see Shaffer and Zhang (2000). Dobos
(2004) considers the case of horizontal differentiation, switching costs, and network externalities. See also
Villanueva et al. (2007) on the effect of customer loyalty, and Pazgal and Soberman (2008) on firms' incentives to invest in technologies to track purchase histories.
(6.) For related studies on privacy matters see also Hirshleifer (1971), Hermalin and Katz (2006), and Wathieu
(2007). See also chapter 20 by Brandimarte and Acquisti (2012) in this Handbook for a survey on the economics of
privacy. For a recent survey on privacy issues related to information technology see Hui and Png (2006).
(7.) See also Engelbrecht-Wiggans et al. (1983), Sharpe (1990), Morgan and Stocken (1998), and Bouckaert and
Degryse (2004).
(8.) For studies of targeted advertising see, for example, Stegeman (1991), Roy (2000), and Iyer et al. (2005). Also
related is the literature on competition with price discrimination such as Thisse and Vives (1988), Borenstein
(1985), Holmes (1989), Corts (1998), Chen et al. (2001), Chen and Iyer (2002), and Ulph and Vulkan (2007).
(9.) See for example, “Publications are Trying New Techniques to Win over Loyal Readers.” The New York Times,
January 4, 1999, p. C20.
Drew Fudenberg
Drew Fudenberg is the Frederick E. Abbe Professor of Economics at Harvard University.
J. Miguel Villas-Boas
J. Miguel Villas-Boas is the J. Gary Shansby Professor of Marketing Strategy at the Haas School of Business, University of
California, Berkeley.
Bundling Information Goods
Oxford Handbooks Online
Bundling Information Goods
Jay Pil Choi
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Online Publication Date: Nov
2012
Subject: Economics and Finance, Economic Development
DOI: 10.1093/oxfordhb/9780195397840.013.0011
Abstract and Keywords
This article reviews the theory of product bundling, which is particularly relevant in the context of the digital economy. It also addresses the price discrimination theory of bundling. A firm's bundling decision may alter the nature of competition and thus have strategic effects in its competition against its rivals. In the case of competitive bundling, the strategic incentives to bundle depend on the nature of the available products and the prevailing market structures for those products. The incentives to bundle depend on the influence of bundling on price competition. Consumers indirectly benefit from the number of adopters of the same hardware. Microsoft's bundling practices have faced antitrust investigations on both sides of the Atlantic. Bundling can be a profitable and credible strategy in that it increases the bundling firm's profit and may make rival firms unable to sell their journals. The leverage theory of bundling has significant implications for antitrust analysis.
Keywords: product bundling, digital economy, price discrimination, firms, competition, competitive bundling, Microsoft, leverage theory, antitrust
analysis
1. Introduction
Bundling is a marketing practice that sells two or more products or services as a package. For instance, the so-called “Triple Play” bundle in the telecommunication sector combines phone, Internet, and TV services, which is often offered at a discount from the sum of the prices if they were all bought separately. As more people adopt e-reader devices such as the Amazon Kindle and Barnes & Noble Nook, publishers are increasingly offering bundles of
digital titles to boost demand, and the bundling strategy is expected to grow in popularity as sales of electronic
books surge.1 Sometimes, bundling takes the form of technical integration of previously separate products with the
evolution and convergence of technologies. For instance, smartphones provide not only basic phone services, but
also offer advanced computing abilities and Internet connectivity as well as other features such as digital cameras,
media players, and GPS capabilities, to name a few, which were all previously offered by separate products.
Computer operating systems exhibit a similar pattern over time. Modern operating systems such as Microsoft's
Windows 7 and Apple's Mac OS X Snow Leopard include numerous features that were not part of earlier operating
systems. Many of these features originally were offered only as separate software products.
Economists have proposed many different rationales for why firms practice bundling arrangements as a
marketing tool. This chapter reviews the literature on bundling and suggests topics for future research with special
emphasis on information goods. In particular, it discusses the nature of information goods (p. 274) and how the
special characteristics of information goods make the practice of bundling an attractive marketing strategy in the
digital economy. There are two types of bundling. With pure bundling, a firm offers the goods for sale only as a
bundle. With mixed bundling, a firm also offers at least some individual products separately in addition to the
bundle, with a price for the bundle discounted from the sum of the individual good prices. In the monopoly setting
without any strategic interaction, we can immediately rule out pure bundling as a uniquely optimal strategy
because pure bundling can be considered as a special case of mixed bundling in which individual products are
offered at the price of infinity. However, in the context of strategic interaction, pure bundling can be used as a
commitment device and be a uniquely optimal strategy, as will be shown.
There are many reasons that firms offer bundled products. One obvious reason is efficiency due to transaction
costs. Technically speaking, any reasonably complex product can be considered as bundled products. For
instance, a car can be considered as a bundle of engine, car body, tires, and so on. Obviously, it is much more
efficient for the car manufacturers to assemble all the parts and sell the “bundled” product rather than for
consumers to buy each part separately and assemble the car themselves. Other motives for bundling identified in
the economics literature include reducing search and sorting costs (Kenney and Klein, 1983), cheating on a cartel
price, evasion of price controls, protection of goodwill reputation, and so on.2
In this chapter, I focus on three most prominent theories of bundling, which are price discrimination, competitive
bundling, and the leverage theory, with special emphasis on information goods. In particular, I discuss special
features of information goods and their implications for the bundling strategy.
First, information goods have very low marginal costs (close to zero) even though the costs of producing the first
unit might be substantial. The low marginal costs of information goods make bundling a large number of goods an
attractive marketing strategy even if consumers do not use many of them. For instance, major computer operating
systems today include a host of functionalities and features most consumers are not even aware of, let alone use.
As I discuss later, this property has important implications for the price discrimination theory of bundling.
Second, information goods are often characterized by network effects, meaning that the benefits a user derives
from a particular product increase with the number of other consumers using the same product. For example, the
utility of adopting particular software increases as more consumers adopt the same software because there are
more people who can share files. A larger user base also induces better services and more development of
complementary products that can be used together. With network effects, bundling can be a very effective
leverage mechanism through which a dominant firm in one market can extend its monopoly power to adjacent
markets that otherwise would be competitive. Thus, network effects can be an important factor in the bundling of
information goods for strategic reasons.
Finally, it is worthwhile mentioning that the convergence in digital technologies also facilitates bundling of diverse
products which were previously considered (p. 275) as separate. Consider, for instance, the convergence of
broadcasting and telephone industries. Traditionally, they represented very different forms of communications in
many dimensions including the mode of transmission and the nature of communication. However, the convergence
of digital media and the emergence of new technologies such as the Internet have blurred the boundaries of the
previously separate information and communication industries because all content including voice, video, and data
can now be processed, stored and transmitted digitally (Yoo, 2009). As a result, both telephone companies and
cable operators can offer bundled services of phone, Internet, and TV and compete head-to-head.
The rest of the chapter is organized as follows. In Section 2, I start with the price discrimination theory of bundling
with two goods (Stigler, 1963) and extend the model to study the strategy of bundling a large number of information
goods that is based on the law of large numbers (Bakos and Brynjolfsson, 1999). In Section 3, I review strategic
theory of bundling and discuss the relevance of these theories for information goods and how they can be applied
to specific cases. I first discuss competitive bundling in which entry and exit is not an issue in the bundling
decision. Then, I move on to the leverage theory of bundling in which the main motive is to deter entry or induce
the exit of rival firms in the competitive segment of the market. In the review of the leverage theory of bundling, I
also explore antitrust implications and discuss prominent cases, as well as investigating optimal bundling strategies
and their welfare implications. Throughout the chapter, I aim to discuss how the predictions of the model match with
empirical observations and evidence.
2. Bundling Information Goods for Price Discrimination
2.1. Price Discrimination Theory of Bundling
The idea that bundling can be used as a price discrimination device was first proposed by Stigler (1963). The
intuition is that bundling serves as a mechanism to homogenize consumer preferences over a bundle of goods
under certain conditions, which facilitates the extraction of consumer surplus. To understand the mechanism, consider
the following example.
Suppose that there are two products, Word Processor and Spreadsheet. For each product, consumers either buy
one unit or none. Assume that there is no production cost. There are two types of consumers, English Major and
Finance Major. The total number of consumers is normalized to 1, with the mass of each type being 1/2. The
reservation values for each product by different types of consumers are given in Table 11.1. (p. 276)
Table 11.1

                 Word Processor    Spreadsheet
English Major    $60               $40
Finance Major    $40               $60
It can be easily verified that if Word Processor and Spreadsheet are sold separately, the optimal price for each
product is $40 and all consumers buy the product. Thus, the total profit for the firm would be $80 (=$40 + $40).
Now let us assume that the firm sells the two products as a bundle.3 Then, both types of consumers are willing to
pay up to $100 for the bundle. As a result, the firm can charge $100 and receive a profit of $100 from the bundle,
which is more profitable than selling the two products separately. Notice that both types of consumers pay the
same price for the bundle, but effectively they pay different prices for each product. To see this, we can do the
thought experiment of how much each consumer is paying for each product. English Majors implicitly pay $60 for
the Word Processor and $40 for the Spreadsheet program. In contrast, Finance Majors implicitly pay $40 for the
Word Processor but $60 for the Spreadsheet. In this sense, bundling can be considered as a price discrimination
device.4
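The arithmetic behind this example can be checked directly. The sketch below, written in Python purely for illustration, uses only the valuations and type masses stated in Table 11.1.

```python
# Illustrative check of the Stigler example (valuations from Table 11.1).
types = {"English Major": {"wp": 60, "ss": 40},
         "Finance Major": {"wp": 40, "ss": 60}}
mass = 0.5  # each type has mass 1/2; total mass normalized to 1

def best_uniform_price(values):
    # candidate prices are the consumers' reservation values;
    # a consumer buys whenever the price does not exceed her value
    return max(p * mass * sum(1 for v in values if v >= p) for p in values)

separate = (best_uniform_price([t["wp"] for t in types.values()])
            + best_uniform_price([t["ss"] for t in types.values()]))
bundle = best_uniform_price([t["wp"] + t["ss"] for t in types.values()])
print(separate, bundle)  # 80.0 vs. 100.0, as in the text
```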
As illustrated by the example, if consumers have negative correlation of reservation values across products,
bundling will homogenize consumers’ total reservation value for the collection of the products. This was first
pointed out by Stigler (1963) and the idea has been extended by Adams and Yellen (1976) who compare the
relative merits of independent pricing, mixed bundling and pure bundling. From the perspective of an optimal
pricing mechanism, it is clear that mixed bundling (weakly) dominates independent pricing and pure bundling
because the latter two can be considered as special cases of mixed bundling.5 Their paper thus focuses on the conditions under which mixed bundling can strictly dominate the other two. To understand those conditions, Adams and
Yellen list three desiderata that are satisfied by perfect price discrimination and can be benchmarks against which
the profitability of other pricing schemes can be measured:
1. Extraction: No individual realizes any consumer surplus on his purchases.
2. Exclusion: No individual consumes a good if the cost of that good exceeds his reservation price for it.
3. Inclusion: Any individual consumes a good if the cost of that good is less than his reservation price for it.
The relative merits of each pricing scheme depend on how well these desiderata are satisfied. For instance,
independent pricing always satisfies Exclusion because individual prices are never set below cost. However, the
other two criteria (Extraction or Inclusion) are easily violated by independent pricing, but can be mitigated by pure
bundling if consumers’ preferences are negatively correlated, and as a result bundling reduces consumer
heterogeneity. One major drawback of (p. 277) pure bundling is in complying with Exclusion. This can be a
serious problem if the cost of each good is high and there is a substantial chance that the reservation value of
consumers can be less than the production cost. In such a case, mixed bundling can be a way to mitigate the
problem.
To see this, consider the following numerical example. There are four consumers whose reservation values for two
goods A and B are given, respectively, by (10, 90), (40, 60), (60, 40), and (90, 10). Let the production cost of each
good be 20. It can be easily verified that the optimal prices under an individual pricing scheme are given by pA* =
pB*=60 with an overall profit of 160. In this example, consumers’ reservation values are perfectly negatively
correlated with the identical reservation value of 100 for the bundle, which allows the Extraction and Inclusion
requirements to be simultaneously satisfied by pure bundling. The optimal pure bundle price is P*=100 with a profit
of 240, which is higher than the profit under an individual pricing scheme. However, pure bundling in this example
entails inefficiency due to the violation of Exclusion in that the first consumer purchases good A although his
reservation value for the good (10) is less than its production cost (20). Similarly, the fourth consumer inefficiently
purchases good B under a pure bundling scheme. This problem can be avoided by the use of mixed bundling in
which the bundle is priced at 100 as before, but individual components are priced at pA* = pB* = 90 − ε, which
induces these two consumers to purchase only good B and A, respectively. The resulting profit from mixed
bundling is 260. The additional profit of 20 compared to the pure bundling scheme comes from the avoidance of
wasteful consumption with pure bundling.
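The profits of 160, 240, and 260 can be verified mechanically. The sketch below is only an illustration of this numerical example; indifferent consumers are assumed to buy, and the mixed-bundling component price of 90 − ε is approximated by 89.99.

```python
# Illustrative check of the Adams-Yellen example: four consumers, cost 20 per good.
consumers = [(10, 90), (40, 60), (60, 40), (90, 10)]  # (value of A, value of B)
COST = 20

def profit(pA, pB, pBundle):
    """Profit when each consumer picks her best option (ties favor buying)."""
    total = 0.0
    for vA, vB in consumers:
        # listing purchase options before "none" makes ties favor buying
        options = {"bundle": vA + vB - pBundle, "A": vA - pA,
                   "B": vB - pB, "none": 0.0}
        choice = max(options, key=options.get)
        if choice == "A":
            total += pA - COST
        elif choice == "B":
            total += pB - COST
        elif choice == "bundle":
            total += pBundle - 2 * COST
    return total

INF = float("inf")
print(profit(60, 60, INF))        # independent pricing: 160
print(profit(INF, INF, 100))      # pure bundling: 240
print(profit(89.99, 89.99, 100))  # mixed bundling: ~260
```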
To foreshadow our discussion later, note that the major benefit of mixed bundling over pure bundling comes from
the mitigation of the Exclusion problem associated with pure bundling, which arises when the production cost of
each good is positive. For information goods, the marginal cost is almost negligible and Exclusion is not a serious
issue. This implies that pure bundling is as good as mixed bundling with information goods, as elaborated below.
The early analyses of bundling, exemplified by Stigler and Adams and Yellen, were based on a series of numerical
examples and left an impression that negative correlation is required for the bundle to be profitable. Schmalensee
(1984) extends the analysis to the case where consumers’ reservation values are drawn from bivariate normal
distributions and shows that pure bundling can be more profitable even when the correlation of consumers’
valuations is nonnegative. McAfee, McMillan, and Whinston (1989; hereafter MMW) further expand the analysis by
providing the first systematic approach to find conditions under which (mixed) bundling can be profitable. In
particular, they show that mixed bundling is profitable with independent reservation values. As a simple illustration
of this, consider an example in which individuals have the following four valuations with equal probability—(40,40),
(40,70), (70,40), (70,70)—and no costs. This example satisfies the independence assumption. It can be easily
verified that selling individual products yields the profit of 320 while selling a bundle yields a profit of 330.
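The same kind of direct check, again using only the four valuation pairs given in the text and zero costs, reproduces the 320 versus 330 comparison; the snippet is illustrative only.

```python
# Illustrative check of the MMW independent-values example (zero costs).
consumers = [(40, 40), (40, 70), (70, 40), (70, 70)]

def best_price(values):
    # revenue-maximizing uniform price over a list of valuations
    return max(p * sum(1 for v in values if v >= p) for p in values)

separate = (best_price([vA for vA, _ in consumers])
            + best_price([vB for _, vB in consumers]))
bundle = best_price([vA + vB for vA, vB in consumers])
print(separate, bundle)  # 320 330
```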
To derive a sufficient condition, MMW conduct the following heuristic exercise. Suppose that pA* and pB* denote the optimal
prices for goods A and B, respectively, (p. 278) when they are sold independently. Consider an alternative
strategy of mixed bundling in which the bundle is priced at pA* + pB*, while still offering individual products at the
same price as before. Then it is clear that this alternative scheme has no effect on consumers’ purchase patterns
and the firm's profits. Now suppose that one of the prices for the individual goods, say B, is increased by ε, that is,
the two goods are also offered independently at pA* and pB* + ε. There are three first order effects of locally
raising pB in the mixed bundling scheme. First, there is a direct price effect from the inframarginal consumers who
buy only good B, which has a positive impact on the firm's profit. Second, there is reduction in demand for good B
due to the loss of marginal consumers who buy only good B, which has a negative effect on the firm's profit. Third,
sales of good A increase due to consumers who switch from buying only good B to purchasing the bundle. This
last effect is positive. The considerations of these three effects yield the sufficient condition identified in MMW. The
condition is the first general result that allows for arbitrary joint distributions and constitutes a significant advance
over the previous literature that has relied mostly on numerical examples and simulations.
One corollary that comes out of MMW's approach is that mixed bundling dominates independent sales when
consumers’ reservation values are independently distributed. To see this, consider the first two effects of a local
change in the price of B. These two effects are exactly the same considerations the monopolist would consider in
setting its price if it were selling good B to consumers whose valuation of good A is less than pA*. With the
independence assumption, however, the consumers’ demand for good B is independent of their valuations for
good A, which implies that the first two effects must cancel out and be zero at the optimal price pB* by definition.
Thus, only the third effect remains and mixed bundling is profitable with independent distribution of reservation
values across products.
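A compact way to restate this argument, in illustrative notation that is not the chapter's own, is to write the first-order effect on profit of a small increase dpB in the stand-alone price of good B as

\[
d\pi \;\approx\; \underbrace{q_{B\text{-only}}\, dp_B}_{\text{direct price effect}}
\;-\; \underbrace{(p_B^{*}-c_B)\, dq_{\text{lost}}}_{\text{marginal }B\text{ buyers who drop out}}
\;+\; \underbrace{(p_A^{*}-c_A)\, dq_{\text{switch}}}_{\text{buyers who switch to the bundle}},
\]

where q_{B-only} is the mass of consumers buying only good B, dq_{lost} the mass who stop buying altogether, and dq_{switch} the mass who move to the bundle. With independent valuations, the first two terms cancel at p_B^*, leaving only the positive third term, which is the corollary noted above.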
The sufficient condition identified in MMW suggests that mixed bundling is optimal in a wider range of cases than
just the independent distribution case. However, the condition is somewhat difficult to interpret. More recently,
Chen and Riordan (2010) use a new statistical concept called copulas to model stochastic dependence of
consumers’ reservation values.6 The use of copulas yields new analytical results that are easier to interpret. They
strengthen the result of MMW to show that a multiproduct firm achieves a higher profit from mixed bundling than
separate selling if consumer values for the two products are negatively dependent, independent, or have limited
positive dependence.7 Next, I further discuss the relevance of their results in relation to information goods.
2.2. Price Discrimination Theory of Bundling Applied to Information Goods
In this section, we apply the price discrimination theory of bundling to information goods. To this purpose, there are
two important characteristics of information goods that are relevant and need to be taken into account.
(p. 279) First, as pointed out by Arrow (1962), information has the property of a public good in that the
consumption of information by one agent does not affect the availability of that good to other potential users; once
the first unit is produced, marginal cost for additional units is essentially zero. Second, information goods can
reside in cyberspace and need not be limited by physical limitations. As a result, a large number of information
goods can be bundled without incurring any substantial costs.
Note that, in contrast to information goods, with some conventional goods there is a non-trivial cost to consumers
of including unwanted features, over and above any impacts on price. To appreciate the absence of physical
limitation in the bundling decision, consider the example of Swiss army knives. They typically include various tools
in addition to knife blades and are offered in many different configurations and prices. Adding additional tools not
only raises the marginal cost of manufacturing, but it also makes the knife bulkier, which most users dislike (all else
equal). As a result, manufacturers need to offer a wide variety of types of Swiss army knives that vary in both the
number and mix of tools, thus catering to different consumer preferences. A knife with all possible tools would be
both expensive to produce and too bulky for most consumers. In contrast, a bundle of information goods that
includes far more products than any one consumer wants can appeal to a large number of potential buyers (who
vary in which subset of the bundled products they will use, but can ignore the rest) and can be much less
expensive to produce than a series of different versions, each with a limited set of products that appeals to a
particular subset of consumers. As a result, we observe the providers of information goods engage in large scale
bundling of information goods. For instance, cable television companies typically bundle hundreds of channels,
and major publishers bundle a large number of their e-journals. The use of Internet as a distribution channel of
digital information also reduces distribution costs and facilitates large scale bundling.
The implications of these characteristics of information goods for bundling have been investigated by Bakos and
Brynjolfsson (1999).8 More specifically, to reflect the typical cost and demand conditions for information goods,
assume that the marginal cost for copies of all information goods is zero to the seller and each buyer has unit
demands for each information good. In addition, assume that buyer valuations for each product are independent
and identically distributed (i.i.d.) with a finite mean (μ) and variance (σ²).9 Under these assumptions, Bakos and
Brynjolfsson find that selling a bundle of information goods can be vastly superior to selling the goods separately.
In particular, they show that for the distributions of valuations underlying many common demand functions,
bundling substantially reduces average deadweight loss and leads to higher profits for the seller. They derive an
asymptotic result that as the number of products included in the bundle (denoted as N) increases, the deadweight
loss per good and the consumers’ surplus per good for a bundle of N information goods converges to zero, and
the seller's profit per good is maximized, approaching the level achieved under perfect price discrimination.
(p. 280) The intuition for this result comes from the law of large numbers. As N increases, the intuition of Stigler
operates with a vengeance. As the number of products included in the bundle increases without bound, the
average valuation of each product in the bundle converges to the mean value μ. As a result, the seller can capture
an increasing fraction of the total area under the demand curve, correspondingly reducing both the deadweight
loss and consumers’ surplus relative to selling the goods separately.
The idea can be graphically illustrated with the following example in Bakos and Brynjolfsson (1999).10 Assume that
each consumer's valuations for individual goods are drawn from a uniform distribution on [0,1]. Let the number of
consumers be normalized to 1. Then, the demand curve for each individual good is given by a linear inverse
demand curve P = 1 − Q. As the bundle size increases, the per-good inverse demand curve can be represented as in Figure 11.1.
As can be seen from Figure 11.1, as the bundle size increases, the law of large numbers assures that the
distribution of the per-good valuation of the bundle coalesces around the mean value (μ) of the
underlying distribution, yielding a per good demand curve that becomes more elastic around the mean and less
elastic away from it. As a result, the seller can capture most of the area under the demand curve, reducing both the
deadweight loss and consumers’ surplus compared to the individual pricing scheme.
Figure 11.1 The Per Good Inverse Demand Curve for Different Bundle Sizes. N=1, 2, and 20.
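The concentration of the per-good demand curve can also be seen numerically. The simulation below is a rough illustration only; the number of simulated consumers, the grid of candidate prices, and the bundle sizes beyond those in Figure 11.1 are arbitrary choices, not values from Bakos and Brynjolfsson.

```python
# Illustrative simulation: i.i.d. uniform[0,1] valuations, zero marginal cost.
import numpy as np

rng = np.random.default_rng(0)
CONSUMERS = 20_000

for n in (1, 2, 20, 200):
    # each consumer's per-good valuation of an n-good bundle
    per_good_value = rng.uniform(0, 1, size=(CONSUMERS, n)).mean(axis=1)
    prices = np.linspace(0.01, 1.0, 200)   # candidate per-good bundle prices
    profits = [p * np.mean(per_good_value >= p) for p in prices]
    best = int(np.argmax(profits))
    print(n, round(prices[best], 2), round(profits[best], 3))
# As n grows, the optimal per-good price and the per-good profit both rise
# toward the mean valuation of 0.5, mirroring the convergence described above.
```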
Two remarks are in order concerning the profitability of bundling information goods. First, one implicit assumption in
the discussion of large scale bundling of information goods is that there is no disutility associated with it from the
perspective of consumers. Certainly, the cost is much smaller compared to physical goods as discussed earlier,
but there still could be non-negligible costs for consumers to navigate in the set of offerings or to customize the
bundled offer to their own preferences, especially if the bundling is on a very large scale and creates the problem
of “information overload.”11 This type of diseconomies on the demand side may limit the size of bundles and firms
may need to provide customized bundles tailored to consumers’ preferences, which is made possible with the
availability of (p. 281) massive consumer data and sophisticated data mining technologies, as discussed further
below.
Second, with the development of the Internet technology and electronic commerce, firms can easily trace and
store consumers’ records of previous purchases by electronic “fingerprints.” They can use this information to infer
consumers’ preferences, which enables them to practice mass customization that produces goods tailored to each
customer's needs without significantly sacrificing cost efficiency.12 This development has the potential to obviate
the need to use bundling as a price discrimination device. Mass customization is especially attractive for physical goods with substantial marginal costs. However, firms also have ex post incentives to use the information gleaned from past sales records to engage in so-called behavior-based price discrimination.13 In
response to such incentives, consumers can engage in “strategic demand reduction” to misrepresent their
preferences to elicit a lower price in the future. This strategic response by consumers can hurt the firms’ ex ante
profits.14 Bundling avoids such pitfalls and can be considered as a commitment mechanism not to engage in
personalized pricing in the future. In addition, for information goods there is less concern for the inefficiency of
bundling that may arise by forcing consumers to purchase goods that they value below marginal cost. Thus, the
importance of bundling as a pricing strategy is expected to continue for information goods even in the new
environment.
Chen and Riordan (2010) are also worth mentioning in relation to information goods. As discussed earlier, they
provide analytical results that mixed bundling yields higher profits than separate selling if consumer values for the
two products are negatively dependent, independent, or have limited positive dependence. The reason they need
some restriction on the degree of positive dependence is that if consumers’ reservation values of the products are
perfectly positively dependent, bundling yields the same profit as separate selling. The exact condition they derive
for the profitability of mixed bundling shows that the restriction is relaxed as the marginal costs for the two products
are lowered. This implies that mixed bundling is more likely to be profitable for information goods with zero marginal
costs, with all other things being equal.
In addition, Chen and Riordan (2010) extend their analysis of product bundling for two goods to a more general n-good case, and consider a pricing scheme that is more general than pure bundling but much simpler and practically
easier to implement than mixed bundling, which they call step bundling (SB). We know that mixed bundling is
(weakly) superior to pure bundling as a pricing scheme. However, as the number of goods included in the bundle
increases, the complexity of the mixed bundling scheme exponentially worsens. If there are n products to be
included in the bundle, there are (2^n − 1) possible combinations of products and corresponding prices. For
instance, if there are 10 products to bundle, there are 1023 prices that need to be set for full implementation of a
mixed bundling scheme. As a result, mixed bundling becomes quickly impractical with the increase of products in
the bundle. This explains why Bakos and Brynjolfsson's analysis is focused on pure bundling when they consider
bundling of a large number of information goods.
(p. 282) Chen and Riordan propose a much simpler scheme. Let N be the set of products offered by a firm. In a k-step bundling scheme, a firm offers individual prices, pi, for each good i ∈ N, and bundle prices Pl for any bundles that include l goods from Nl, where Nl ⊆ N, l = 1, 2, …, k, and k ∈ [2, |N|]. SB may include part or all of k-step
bundling. As an example, consider a cable company that offers phone, Internet, and TV services. Each service
can be offered with individual prices that can differ across services. In addition, it also offers a 2-good bundle that
include Internet and TV services, and the Triple Play bundle that includes all three services. This would be an
example of 3-step bundling. Chen and Riordan extend their analysis of product bundling for two goods to a more general n-good case and show that their main results on the profitability of mixed bundling naturally extend to k-step bundling.
Chu, Leslie, and Sorensen (2011) propose a further simplified form of mixed bundling, called bundle-size pricing
(BSP). In this pricing scheme, a firm sets different prices for different sized bundles. There is one price for any
single good. There is another price for any combinations of two goods, and so on. With such a pricing mechanism,
there are only N prices if there are N products that constitute the “grand” bundle. So if there are 10 products to
bundle, this requires only 10 prices, compared to 1023 prices with the mixed bundling scheme. BSP differs from SB
in that there is only one price for all individual goods and there is no restriction on what goods are allowed to be included in a bundle. More precisely, SB is identical to BSP if pi = p for all i ∈ N, and Nl = N for all l = 2, …, |N|. Thus,
BSP can be considered as a special form of SB. Chu et al. show that BSP attains almost the same profit as mixed bundling under most circumstances. The main intuition for their result, once again, comes from the heterogeneity-reduction effect of bundling. If bundles are large, different bundles of the same size do not need to be priced very
differently as the heterogeneity of consumers is reduced with bundling. As a result, prices for large-sized bundles
under bundle-size pricing tend to be very close to those under mixed bundling. The price pattern under BSP can be
very different for individual goods, but it turns out that bundles are much more important to profits and the
discrepancies in individual good prices play a negligible role. This implies that BSP approximates mixed bundling in
terms of profits. By extension, this also implies that SB attains almost the same profit as mixed bundling under most
circumstances. Many of the prices under mixed bundling thus are redundant.
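The difference in pricing complexity is easy to quantify; the bundle sizes in the following trivial illustration are arbitrary.

```python
# Number of prices required for n goods under each scheme.
for n in (2, 5, 10, 20):
    mixed_bundling = 2 ** n - 1   # one price per non-empty subset of goods
    bundle_size_pricing = n       # BSP: one price per bundle size
    print(n, mixed_bundling, bundle_size_pricing)
# n = 10 reproduces the 1023-versus-10 comparison mentioned above.
```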
In contrast to Bakos and Brynjolfsson, who derive asymptotic results as N→∞, the analyses of Chen and Riordan (2010) and Chu et al. (2011) focus on a finite N. With numerical analysis, Chu et al. show that in most cases BSP attains nearly the same level of profits as mixed bundling. To illustrate the empirical relevance of their
findings, they also estimate the demand facing a theater company that produces a season of 8 plays and compute
the profitability of various pricing schemes. They show that bundle-size pricing is 0.9 percent more profitable than
individual pricing and attains 98.5 percent of the mixed bundling profits. In this particular case, we can argue that
bundling does not confer significant benefits over component pricing.15 They attribute the small effect of BSP to a
very (p. 283) high degree of positive correlation in the consumers’ preferences and the empirical results may
understate the gains from BSP in other settings. Their study may have important practical implications for a small to
moderate number of products. If the number of products becomes large, Bakos and Brynjolfsson show that pure
bundling approximates perfect price discrimination and thus BSP does not confer any significant advantage over
pure bundling.
Shiller and Waldfogel (2011) is a rare empirical study that explores the profit and welfare implications of various pricing schemes, including bundling, for one type of information good. Based on survey data on about 1000
students’ valuations of 100 popular songs in early 2008 and 2009, they estimate how much additional profit or
consumer surplus can be achieved in alternative pricing schemes vis-à-vis uniform pricing in the context of digital
music. Their study was motivated by the Apple iTunes Music Store's puzzling practice of charging a uniform price of
$0.99 for all songs, regardless of their popularity. Even though their samples of songs and survey respondents are
not representative, their empirical result illustrates the potential profitability of bundling strategies for information
goods. In particular, their estimation suggests that the optimal pure bundling prices for 50 songs raise revenues by
17 percent and 29 percent, respectively, relative to uniform pricing with the 2008 and the 2009 samples. These
gains come partly at the expense of consumers. Consumer surpluses under pure bundling are reduced by 15
percent and 5 percent, respectively, in years 2008 and 2009, compared to those under uniform pricing. They also
show that the benefit of bundling increases with the bundle size, but at a decreasing rate; more than half of the
benefit from the 50-song bundle is achieved with bundles of just 5 songs.
Finally, Bakos and Brynjolfsson (2000) extend their monopolistic analysis to different types of competition, including
both upstream and downstream, as well as competition between a bundler and a single good producer and
competition between two bundlers of different sizes. They show that bundling confers significant competitive
advantages to bundling firms in their competition against non-bundlers due to their ability to capture a larger share
of profits compared to smaller rivals.16
In the next section, we delve into the analysis of bundling as a strategic weapon against rival firms in more detail.
3. Bundling Information Goods for Strategic Reasons
In Section 2, we considered price discrimination motives for bundling in the monopoly context. In oligopolistic
markets, firms may have incentives to bundle their products for strategic reasons. More specifically, a firm's
bundling decision may change (p. 284) the nature of competition and thus have strategic effects in its
competition against its rivals. In the presence of (potential) competitors, the strategic effects of bundling thus
should be considered in conjunction with any price discrimination motives.
I first provide an overview of the strategic theories of bundling and discuss the relevance of these theories for
information goods and how they can be applied to specific cases. In the discussion of the theory, I distinguish two
cases depending on whether bundling is used as an exclusionary strategy: competitive bundling and the leverage
theory. The case of competitive bundling considers a situation in which the rivals’ entry or exit decisions are not
affected by bundling and thus exclusion does not take place. As shown below, in the case of competitive bundling,
the strategic incentives to bundle depend crucially on the nature of available products and the prevailing market
structures for the available products. Bundling arrangements can also be used as an exclusionary device and
have potential to influence the market structure by deterring entry or inducing exit of the existing firms. This issue
is addressed in the discussion of the leverage theory of bundling.
3.1. Competitive Bundling
The models of competitive bundling consider a situation in which firms compete, but do not intend to drive rival
firms out of the market because the strategy of market foreclosure is too costly or simply not possible. In such a
scenario, it is in every firm's interest to soften price competition in the market. The incentives to bundle thus depend
on the effects of bundling on price competition.
3.1.1. Bundling as a Product Differentiation Strategy
When firms produce a homogenous product and compete in prices as in the standard Bertrand model, bundling
can be used as a product differentiation strategy and can relax price competition. Chen (1997), for instance,
considers a duopoly competing in the primary market and perfect competition prevailing in the production of other
goods with which the primary good can be bundled. He considers a two-stage game in which the two firms in the
primary market decide whether to bundle or not, and then they compete in prices given their bundling strategies
chosen in the previous stage. He shows that at least one firm in the duopoly market chooses the bundling strategy
in equilibrium, and both firms earn positive profits even though they produce a homogenous product and compete
in prices.17 The intuition behind this result is that the bundled product is differentiated from the stand-alone
products and allows above-cost pricing with market segmentation.18
Carbajo, De Meza, and Seidman (1990) provide a related model in which bundling may be profitable because it
relaxes price competition. More specifically, they consider a situation in which one firm has a monopoly over one
good but competes with another firm in the other market. In addition, they assume a perfect positive (p. 285) correlation between the consumers’ reservation prices of the two goods to eliminate any bundling motives that
come from price discrimination. In their model, bundling provides a partitioning mechanism to sort consumers into
groups with different reservation price characteristics. The firm with bundled products sells to high reservation
value consumers while the competitor sells to low reservation value consumers. In contrast to Chen (1997) where
bundling relaxes price competition in the primary (tying good) market, bundling in Carbajo et al. relaxes price
competition in the secondary (tied good) market. In particular, if Bertrand competition prevails in the duopoly
market, bundling is always profitable in their model.19 Bundling, once again, provides a way to differentiate their
product offerings and soften price competition.
3.1.2. Bundling in Complementary Markets
Consider a situation in which individual products, A and B, which can be included in the bundle, are perfect
complements that need to be used on a one-to-one basis. For instance, they can be considered as component
products that form a system. There are two firms, 1 and 2, in the market, whose unit production costs for
components A and B are denoted by ai and bi, respectively, where i = 1, 2. For simplicity, assume that the two
firms produce homogenous products and consumers have an identical reservation value for the system good.
In this setup, it is easy to demonstrate that unbundling always (weakly) dominates a bundling strategy. If the two
products are not bundled, the consumers can mix and match and will buy each component from the vendor who
charged the lowest price so long as the sum of the two prices does not exceed the reservation value, which is
assumed to be sufficiently large. Without bundling, competition will be at the component level and in each
component market the firm with the lower production cost will win the whole market by selling at the production cost
of the rival firm (minus ε). The prices in component markets A and B will be realized at max(a1, a2) and max(b1, b2), respectively.
If the two products are bundled, they must be purchased from the same firm. In other words, bundling
changes the nature of competition from localized competition to all-out competition. As a result, the firm with the
overall advantage in production costs will make all sales at the system cost of the rival firm. The total price
consumers pay for the bundled system is max (a1+ b1, a2 + b2), which is equal to or less than [max (a1, a2) + max
(b1, b2)], the total price to be paid if the two components are sold separately.
In this simple setup, there is no difference between bundling and no bundling if one firm is more efficient in the
production of both components. However, if one firm is more efficient in one component and the other firm is in the
other, unbundling is more profitable for the firms. To see this, without loss of generality, assume that a1 < a2, b1 > b2, and a1 + b1 < a2 + b2, that is, firm 1 is more efficient in component A and in the overall system while firm 2 is more efficient in component B. Then, each firm's profit under unbundling is given by (p. 286) (a2 − a1) > 0 and (b1 − b2) > 0 for firm 1 and firm 2, respectively. In contrast, the corresponding profits under bundling are given by (a2 + b2) − (a1 + b1) and zero, respectively, for firm 1 and firm 2. Since (a2 + b2) − (a1 + b1) < (a2 − a1) under our assumptions, unbundling is more profitable for both firms.
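The comparison can be made concrete with illustrative cost numbers, chosen here only to satisfy the stated inequalities; they do not come from the chapter.

```python
# Firm 1 is cheaper in component A and overall; firm 2 is cheaper in component B.
a1, b1 = 1.0, 4.0   # firm 1's unit costs for components A and B
a2, b2 = 3.0, 3.0   # firm 2's unit costs
assert a1 < a2 and b1 > b2 and a1 + b1 < a2 + b2

# Unbundled: Bertrand competition component by component.
profit1_unbundled = a2 - a1   # firm 1 wins market A at firm 2's cost
profit2_unbundled = b1 - b2   # firm 2 wins market B at firm 1's cost

# Bundled: all-out competition between complete systems.
profit1_bundled = (a2 + b2) - (a1 + b1)   # firm 1 wins the system sale
profit2_bundled = 0.0

print(profit1_unbundled, profit2_unbundled)   # 2.0 1.0
print(profit1_bundled, profit2_bundled)       # 1.0 0.0 -- both firms prefer unbundling
```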
Matutes and Regibeau (1988) develop this idea further in a model of complementary products with product
differentiation and heterogeneous consumers. Consumers’ preferences are represented by a point in a two-dimensional Hotelling model. They show that the pure bundling strategy is dominated by the individual marketing
strategy. In their model with product differentiation, there can be an added advantage associated with unbundling.
With bundling, there are only two systems available. With unbundling, consumers can mix and match and form four
different systems, which enables consumers to choose systems that are closer to their ideal specifications. This
implies that in system markets, firms would be reluctant to engage in competitive bundling unless there are other
countervailing incentives to bundle.
3.1.3. Bundling in Substitute Markets and Inter-Firm Joint Purchase Discounts
Armstrong (2011) extends the bundling literature in two respects by allowing products to be substitutes and
supplied by separate sellers.20 An example of bundling in substitute markets is a “city pass” that allows tourists to
visit all attractions in a city. However, tourists are typically time-constrained and can visit only a few during their
visits. He shows that firms have incentives to engage in bundling or joint purchase discounts when demand for the
bundle is elastic relative to demand for stand-alone products. This result holds regardless of whether component
products are produced by one integrated firm or separate firms.
When products are supplied by separate firms, Armstrong (2011) finds that they can use joint purchase discounts
as a mechanism to relax price competition. The reason is that when firms offer a joint purchase discount, the innate
substitutability of their products is reduced, which enables them to set higher prices. This implies that the seemingly
consumer-friendly joint purchase discount scheme can be used as a collusive mechanism, and thus should be
viewed with skepticism by antitrust authorities.
He also shows that a firm often has a unilateral incentive to offer a joint-purchase discount when its customers
buy rival products. In such a case, joint purchase discounts can be implemented without any need for coordination
between firms. In practice, the implementation of joint purchase discounts by separate firms can be difficult due to
the need to make sequential purchases and produce a verification of purchase from the rival firm. These problems
can now be mitigated by online platforms or product aggregators that allow simultaneous purchases.
3.2. The Leverage Theory of Bundling
According to the leverage theory of bundling, a multiproduct firm with monopoly power in one market can
monopolize a second market using the leverage provided by its monopoly power in the first market. This theory,
however, has been controversial. I briefly describe the evolution of this theory.
(p. 287) 3.2.1. One Monopoly Profit and the Chicago School Critique
At first, it may seem that anticompetitive bundling will be profitable for a monopolist in many circumstances
because monopolists can “force” consumers to take what they do not want in exchange for being allowed to
purchase the monopolized product. Under that view, by bundling the monopolized product with otherwise
competitive products, the monopolist can drive others from the adjacent markets, and thus achieve power in those
markets. However, the logic of the theory has been criticized and subsequently dismissed by a number of authors
from the University of Chicago school such as Bowman (1957), Posner (1976), and Bork (1978) who have argued
that the use of leverage to affect the market structure of the tied good (second) market is impossible.
In particular, the Chicago school critique points out that monopolists already have economic profits in the
market they monopolize, and there are costs to anticompetitive bundling. A monopolist does not have the freedom
to raise prices or impose burdensome conditions on consumers without suffering reduced sales. In reality, the
monopolist's pricing decision is constrained by market demand, that is, consumers’ willingness to pay for the
product, which depends in part on the availability of substitutes. As a result, an enterprise, even a monopolist,
foregoes profits in its original market when it engages in bundling that does not generate efficiencies and reduces
consumer value. Such anticompetitive bundling leads to lower demand for the monopolist's product. To quote
Posner (1976), “let a purchaser of data processing be willing to pay up to $1 per unit of computation, requiring the
use of 1 second of machine time and 10 punch cards, each of which costs 10 cents to produce. The computer
monopolist can rent the computer for 90 cents a second and allow the user to buy cards on the open market for 1
cent, or, if tying is permitted, he can require the user to buy cards from him at 10 cents a card—but in that case he
must reduce his machine rental charge to nothing, so what has he gained?”21 This implies that the monopolist firm
never has the incentive to bundle for the purpose of monopolizing the adjacent good market. As a result, price
discrimination, as opposed to leverage, had come to be seen as the main motivation for tying until the leverage theory was revived by Whinston (1990), which spawned a resurgence of interest in the theory.
3.2.2. Strategic Leverage Theory
In a very influential paper, Whinston (1990) has shown that the Chicago school arguments can break down under
certain circumstances. In particular, he considered a model in which the market structure in the tied good market is
oligopolistic and scale economies are present, and showed that bundling can be an effective and profitable
strategy to alter market structure by making continued operation unprofitable for tied good rivals.
Whinston sets up a model with firms that differ in their production costs. To apply his intuition to information goods, I
modify his model and present it in a setting with products of different qualities. To understand Whinston's argument,
consider the following simple model. There are two independent products, A and B. (p. 288) They are unrelated in
the sense that they can be consumed independently and their values to consumers are independent of whether
they are consumed separately or together.22 Consumers, whose total measure is normalized to 1, are assumed to
be identical and have a unit demand for each product. The market for product A is monopolized by firm 1 with unit
production cost of zero. It is assumed that entry into market A is not feasible. The product A is valued at vA by
consumers. The market for product B, however, can be potentially served by two firms, firm 1 and firm 2. Unit
production cost for product B is also zero for both firms. However, the two firms’ products are differentiated in
terms of quality. Consumers’ valuation of product B produced by firm 1 and firm 2 are respectively given by vB1
and vB2 .
The game is played in the following sequence. In the first stage, the monopolistic supplier of product A (firm 1)
makes a bundling decision on whether to bundle A with another product B. In the second stage, firm 2 makes an
entry decision after observing the incumbent firm's bundling decision. The entry entails sunk fixed costs of K. If
there is entry by the rival firm, a price game ensues in the final stage. The bundling decision is assumed to be
irreversible.
I apply backward induction to solve the game. If there is no entry, firm 1 charges (vA + vB1) if the products are
bundled, and vA and vB1 for products A and B, respectively, if the two products are not bundled. In either case, firm
1 receives the monopoly profits of (vA + vB1) without entry by firm 2.
Now suppose that there is entry by firm 2. The pricing stage outcome depends on the bundling decision. Suppose
that the monopolist bundles product A and B and charges price P for the bundled product. In this case, consumers
have two choices. The first option is to buy the bundled product from the monopolist at the price of P and the
second one is to buy only product B from firm 2. For the first option to be chosen by the consumers, P should
satisfy the following condition (with firm 2 willing to price down to its marginal cost of zero): vA + vB1 − P ≥ vB2. This implies that the maximum price the tying firm can charge for the bundled product is given by P = (vA + vB1 − vB2). Without any marginal cost, firm 1's profit is also given by (vA + vB1 − vB2). Of course, firm 1 will sell the
bundle only if this profit is nonnegative. In other words, which firm will capture market B depends on the
comparison of (vA + vB1) and vB2 . If the former is higher than the latter, firm 1 will serve market B, and otherwise
firm 2 will serve market B. One way to interpret the result above is that after bundling firm 1 can compete against
firm 2 as if its quality of B were (vA + vB1). Alternatively, we can also interpret this result as firm 1 behaving as if its
cost of B were negative at –vA. The reason is that after bundling, firm 1 can realize the monopoly surplus of vA only
in conjunction with the sale of product B. Thus, the firm is willing to sell product B up to the loss of vA.
Now suppose that K < vB2 − vB1 < vA. The first inequality means that firm 2 can successfully enter market B if
there was no bundling since its quality advantage is more than the sunk cost of entry. However, the second
inequality implies that the quality advantage for firm 2 is not sufficiently high to compete against the bundled
products since firm 1 is still able to sell the bundled products with a positive profit (p. 289) even if firm 2 priced its
product at its marginal cost zero. Thus, firm 2 is foreclosed from market B since it cannot recoup its sunk cost of
entry when firm 1 engages in bundling.
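A small numerical sketch makes the foreclosure logic concrete. The parameter values below are arbitrary illustrations chosen to satisfy K < vB2 − vB1 < vA; they are not taken from Whinston.

```python
# Illustrative parameters satisfying K < vB2 - vB1 < vA.
vA, vB1, vB2, K = 10.0, 5.0, 8.0, 2.0

# No bundling: firm 2 wins market B on its quality advantage, pricing at vB2 - vB1
# while firm 1 prices product B at its marginal cost of zero.
entrant_profit_no_bundling = (vB2 - vB1) - K    # 1.0 > 0, so entry occurs

# Bundling: even if firm 2 prices at cost, firm 1 can still sell the bundle at a
# positive price, so the entrant makes no sales and cannot recover its sunk cost.
max_bundle_price_vs_entrant = vA + vB1 - vB2    # 7.0 > 0
entrant_profit_with_bundling = 0.0 - K          # -2.0, entry is deterred

print(entrant_profit_no_bundling, max_bundle_price_vs_entrant, entrant_profit_with_bundling)
```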
Essentially, bundling converts competition into an “all-out war” for firm 1 and serves as a commitment mechanism
to price more aggressively whereas unbundling segments the markets and allows limited and “localized” market
competition. The Chicago school critique of the leverage theory missed this “strategic effect” due to their
adherence to the assumption of competitive, constant returns-to-scale structure in the tied good market. It is
important to keep in mind, however, that in Whinston's basic model, inducing the exit of the rival firm is essential for
the profitability of tying arrangements.23 If the bundling firm fails to induce exit of the rival firm, the bundling
strategy also hurts the bundler as it intensifies competition.24
Notice that in Whinston's model, unbundling is ex post optimal once entry takes place. Thus, the monopolist must be able to commit to bundling in order to deter entry. Technical bundling through product design is one way to achieve such commitment. For instance, in the Microsoft bundling case, the Internet browser and media player programs are so tightly integrated with the rest of the OS that they cannot be removed without destabilizing it.25 In
this regard, it is worth mentioning Peitz (2008) who provides a model of entry deterrence that does not require the
commitment of the firm to bundle. More specifically, he considers a market with two products, one of which is
monopolized with consumers’ reservation values uniformly distributed. In the competitive market, the products are
horizontally differentiated. In such a setup, he shows that bundling is always optimal irrespective of entry, and thus credible.
3.2.3. Dynamic Leverage Theory
The analysis of Whinston has been subsequently extended in several directions by various authors such as
Carlton and Waldman (2002) and Choi and Stefanadis (2001). Whinston's model assumes that entry into the
monopolized market is impossible and shows how tying can be used to extend monopoly power in one market into
an otherwise competitive market. These papers, in contrast, consider an oligopolistic environment and show that
bundling can be used to deter entry into a complementary market to preserve the monopolized market or
strengthen market power across several markets.
The basic intuition for these results in Carlton and Waldman (2002) and in Choi and Stefanadis (2001) is that entry
in one market is dependent on the success of entry in a complementary market. As is presented in more detail
later, Carlton and Waldman develop a dynamic model where an entrant with a superior complementary product
today can enter the primary product market in the future. They show that bundling can be used to deny an entrant
sales when it has only the complementary product and this reduced sales today can prevent future entry into the
primary market in the presence of scale economies. Choi and Stefanadis (2001) model a situation in which the
incumbent firm is a monopolist in markets A and B and the two products are perfect complements. Neither has
value to consumers without the other. The primary vehicle for entry is innovation, but success in innovation is
uncertain. By tying the two products, the incumbent ensures that potential entrants must invest in both products
and must innovate successfully in both in order to enter in the next period. Under some conditions, the tie can be
profitable by discouraging investment by potential entrants, and thus reducing or even eliminating the risk of future
entry.
Choi (2004) develops a model that also involves current investment in innovation determining future sales.
However, in contrast to the Choi and Stefanadis model, this model assumes that demand for the two goods is
independent. That is, there is no relationship between consumers’ demand for the two goods. In Choi's model, tying
by the monopolist initially intensifies price competition, because the monopolist must lower the price of the tied
combination to attract buyers. This reduces the monopolist's own profits in the first period. With tying, however, the
monopolist has a greater incentive to invest in innovation and the other firm has a lesser incentive. Under certain
conditions, tying can increase profits for the monopolist.26
3.3. The Leverage Theory of Bundling Applied to Information Goods
In this section, we discuss the implications of bundling for competition in various information good industries. In
particular, several information goods are characterized by network effects with demand-side scale economies. The
literature distinguishes two types of network effects. The notion of direct-network effects refers to a situation in
which each user's utility of adopting a particular good increases as more people adopt the same good. The most
commonly cited examples of direct network effects include communications services (telephone, fax, email, etc)
and languages. As these examples suggest, direct network effects arise through the ability to interact or
communicate with other people. Indirect network effects typically arise through complementary products. For
instance, in systems markets consisting of hardware and software, consumers do not derive additional direct utility from other consumers using the same hardware. However, each consumer's utility can be increasing in the number of
available software titles. As more consumers adopt a particular platform of hardware, more software titles
compatible with that platform will be available. Thus, consumers indirectly benefit from the number of adopters of
the same hardware.
3.3.1. Bundling of Information Goods with Direct Network Effects
Certain information goods exhibit network effects in consumption. One prominent example is software. For instance,
the use of the same software allows easy collaboration and sharing of files among co-workers and friends. Carlton
and Waldman's (2002) model, inspired by the Microsoft antitrust case, shows that the presence of (direct) network
externalities for the complementary good can result in the strategic use of bundling to deter entry into the primary
market. More specifically, Carlton and Waldman's model involves an incumbent that is a monopolist in the primary
market (say, operating system), called A.27 The monopolist and a potential entrant can produce a complementary
product (say, web browser), B. If the entrant is successful in B in the current period, it can enter A in the future and
challenge the monopolist.28 However, if the entrant is not successful in B during this period, it cannot enter market
A in the future. In their model, by bundling A and B during this period, the monopolist can prevent successful entry
in B this period and thus protect its existing monopoly in A against subsequent entry.
3.3.2. Bundling in Two-Sided Markets with Indirect/Inter-Group Network Effects
There are many markets where indirect network effects arise through platforms that enable interactions between
two distinct groups of end users. For instance, game developers and gamers interact through video game platforms
such as Nintendo, Sony PlayStation, and Microsoft Xbox, in which each side benefits more from the presence of
more participants on the other side.29 This type of platform competition has been analyzed under the rubric of
“two-sided markets” in the literature.30
A few studies analyze the effects of bundling arrangements on platform competition in two-sided markets, partly
motivated by recent antitrust cases involving Microsoft and credit card payment systems. In the European
Commission Microsoft case, it has been alleged that the company's bundling practice of requiring Windows
operating system users to accept its Windows Media Player software is predatory and hurts digital media rivals
such as RealNetworks. In the streaming media software case, content providers and consumers constitute the two
sides of the market. For instance, the more content available in streaming media, the more valuable media player
programs become, and vice versa.
Choi (2007) develops a model that reflects the Microsoft case in the EC.31 More specifically, the model assumes
that there are two intermediaries competing for market share within each group. There is free entry in the market
for content provision. Content providers are heterogeneous in their fixed cost of creating content. The choice of
consumers’ platform is analyzed by adopting the Hotelling model of product differentiation in which the two
platforms are located at the two extreme points of a line. Consumers are uniformly distributed along the line and
each consumer's utility of participating in a platform depends on the number of content providers on the same
platform. In such a model, Choi (2007) shows that bundling can be a very effective mechanism through which a
dominant firm in a related market can penetrate one side of the two-sided market to gain an advantage in
competition for the other side. The welfare effect of bundling is, however, ambiguous and depends on the relative
magnitude of inter-group externalities and the extent of product differentiation. If the extent of inter-group
externalities is (p. 292) significant compared to that of product differentiation, bundling can be welfare-enhancing, as the benefit from internalizing the inter-group network externalities can outweigh the loss of product
variety.
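A minimal sketch of a consumer-side specification in this spirit may help; the notation v, γ, t, n_i, and p_i below is assumed here for illustration and is not Choi's own. A consumer located at x in [0, 1] who joins platform i obtains

$$ u_i(x) = v + \gamma\, n_i - t\,|x - x_i| - p_i, \qquad i \in \{A, B\}, \quad x_A = 0,\ x_B = 1, $$

where n_i is the mass of content on platform i and γ measures the inter-group externality. The indifferent consumer is then located at

$$ \hat{x} = \frac{1}{2} + \frac{\gamma\,(n_A - n_B) + p_B - p_A}{2t}, $$

so anything, such as bundling, that raises a platform's content base or lowers its effective price on the consumer side shifts consumers toward that platform, and the induced consumer share feeds back into content providers' entry decisions on the other side.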
Amelio and Jullien (2007) provide another analysis of tying in two-sided markets. They consider a situation in which
platforms would like to set prices below zero on one side of the market to solve the demand coordination problem
in two-sided markets, but are constrained to set non-negative prices. In the analysis of Amelio and Jullien, tying can
serve as a mechanism to introduce implicit subsidies on one side of the market in order to solve the
aforementioned coordination failure in two-sided markets. As a result, tying can raise participation on both sides
and can benefit consumers in the case of a monopoly platform. In a duopoly context, however, tying also has a
strategic effect on competition. They show that the effects of tying on consumer surplus and social welfare depend
on the extent of asymmetry in externalities between the two sides.
Gao (2009) considers a new type of bundling in two-sided markets, which he calls “hybrid” bundling. He observes
that there are many examples of “mixed” two-sided markets in which economic agents participate in both sides of
the market. For instance, people can participate in online trading both as a seller and as a buyer. Hybrid bundling is
a two-part tariff system in which a bundled membership fee is charged for access to both sides of the market and
two separate transaction fees that need to be paid for each transaction depending on which side the agent is
making the transaction. In other words, hybrid bundling is a combination of pure bundling of the membership fees for the two sides and unbundled transaction fees on each side. He provides conditions under which hybrid bundling is profitable and identifies the main factors that favor it. One factor relevant to two-sided markets with information goods is economies of scope in providing two services to the same user rather than to two different users. In the provision of information goods, the fixed cost of serving a user usually comes from the registration process, and there can be significant economies of scope if the necessary information can be collected and verified in a single sign-up process rather than on two separate occasions.
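In symbols, and using assumed notation rather than Gao's own, an agent who completes x_s transactions as a seller and x_b transactions as a buyer would pay

$$ T(x_s, x_b) = F + t_s\, x_s + t_b\, x_b, $$

where F is the single bundled membership fee granting access to both sides and t_s, t_b are the side-specific per-transaction fees; the membership component is purely bundled while the usage component remains unbundled.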
3.3.3. Virtual Bundling
In systems markets where several complementary components work together to generate valuable services, a firm
can practice “virtual bundling” by making its components incompatible with components made by other firms. Even
if all component products are sold separately, in the absence of compatibility consumers cannot mix and match components from different manufacturers and must purchase the whole system from a single vendor. The
possibility of virtual bundling arises if compatibility requires adoption of a common standard and consequently can
be achieved only with the agreement of all firms.
In the software case, for instance, virtual bundling can take place if a dominant firm has proprietary interface
information and refuses to license it to prevent interoperability with third parties’ complementary software. Consider
the recent (p. 293) antitrust cases concerning Microsoft in the US and Europe. One of the issues in the cases
revolved around interoperability information. Microsoft was alleged to have refused to disclose interface information that rival work group server operating system vendors needed to interoperate with Microsoft's dominant Windows PC operating system.32 A similar issue has arisen in an antitrust investigation against Qualcomm in Korea, in which
the Korean Fair Trade Commission was concerned with the possibility that nondisclosure of ADSP (Application
Digital Signal Processor) interface information may restrain competition in the mobile multimedia software market.33
The non-disclosure of proprietary interoperability information constitutes virtual bundling and its strategic effects
can be analyzed in a similar manner.
3.3.4. The Importance of Multihoming
Most of the existing literature on strategic theory of bundling assumes single-homing, that is, consumers buy from
only one vendor/product for each product category. In addition, none of these papers in the tying literature seriously takes into consideration the possibility of multihoming. Carlton and Waldman (2002), for instance, assume
that “if a consumer purchases a tied good consisting of one unit of the monopolist's primary good and one unit of
its complementary good, then the consumer cannot add a unit of the alternative producer's complementary good
to the system (italics added, p. 199).” In other words, either they do not allow the possibility of multihoming or
multihoming does not arise in equilibrium.34
However, it is common in many markets that consumers engage in multihoming, that is, consumers purchase
multiple products (or participate in multiple platforms). Multihoming is especially important in markets with network
effects because it allows consumers to reap maximal network benefits. For instance, consider the digital media
market as a two-sided market in which content providers and end users constitute each side. In this market, if more
content is provided in a particular company's format, then more users will adopt that company's media player to access such content. Moreover, if more users select a particular company's media player, then content
providers will obviously have an incentive to provide their content in that particular format, creating indirect
network externalities. However, in the digital media market, many users have more than one media player and
many content providers offer content in more than one format.
One implication of the single-homing assumption is that if consumers are not allowed to participate in more than
one platform, tying automatically leads to monopolization on the consumer side. This in turn implies that content
providers have no incentives to provide any content for the rival platform. As a result, the rival platform is
foreclosed on both sides of the market. With multihoming possibilities, bundling does not necessarily imply “tipping”
and market foreclosure, which can have important implications for market competition.
Choi (2010a) constructs a model of two-sided markets that explicitly takes multihoming into consideration. To
derive an equilibrium in which both content (p. 294) providers and consumers multihome, he assumes that there
are two types of content available. One type of content is more suitable for one of the two platforms (formats)
whereas the other type of content is suitable for both platforms. Alternatively, the first type of content can be interpreted as resulting from exclusive contracts between the platforms and content providers. More specifically, the total measure of content potentially available for each format is normalized to 1. Of this content, a proportion λ is of the first type and thus can be encoded only for a particular format, whereas the remaining (1 − λ) can also be encoded in the
other format. The existence of exclusive content available for each format creates incentives for consumers to
multihome. When the second type of content is encoded for both formats, content providers are said to multihome.
The consumer side of the market is once again modeled à la Hotelling. The only modification is that consumers are now allowed to multihome. As a result, assuming that the market is covered, consumers have three choices: single-home on platform A, single-home on platform B, or multihome. See Figure 11.2.
Figure 11.2 Two-Sided Markets with Multihoming.
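To see why exclusive content generates multihoming, consider a minimal sketch under assumed functional forms; this is an illustration consistent with the description above, not Choi's exact specification. With the platforms at the endpoints of a unit Hotelling line, a consumer at location x who values each unit mass of accessible content at v obtains

$$ U_A(x) = v - t x - p_A, \qquad U_B(x) = v - t(1 - x) - p_B, \qquad U_{AB}(x) = v(1 + \lambda) - t - p_A - p_B, $$

since single-homing on either platform gives access to content of measure λ + (1 − λ) = 1, while multihoming adds the other format's exclusive content of measure λ at the cost of the second price and the full transportation cost. Consumers multihome when the incremental value vλ of the extra exclusive content exceeds the extra price and transportation cost they incur.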
Under such a framework, Choi (2010a) derives conditions under which “multihoming” equilibria exist under both
tying and no tying and analyzes welfare implications. It is shown that the beneficial effects of tying come mainly
from wider availability of exclusive content. According to the model, tying induces more consumers to multihome
and makes platform-specific exclusive content available to more consumers, which is also beneficial to content
providers. There are two channels for this to happen. First, tying induces all consumers to have access to
exclusive content for platform A. Second, the number of consumers who have access to exclusive content for
platform B also increases. However, as in the case with single-homing consumers, there are less desirable matches between consumers and platforms, leading to higher overall “transportation costs” in the product space. Thus,
the overall welfare effects can be in general ambiguous. However, the simple structure of the model posited in Choi
(2010a) yields an unambiguous answer that tying is welfare-enhancing.
(p. 295) To explore the role of multihoming in the model, he also considers a hypothetical situation in which
bundling prevents consumers from multihoming, thus leading to the foreclosure of competing products. For
instance, the monopolist engages in technical tying in which it designs its product in such a way that a competitor's
product cannot interoperate with the tying product.35 Without multihoming, all consumers in the two-sided market will use only the tied product. This implies that all content is provided for the bundling firm's format and that
exclusive content for the rival format will be no longer available. In such a case, it can be shown that tying is
unambiguously welfare-reducing, which is in sharp contrast to the result obtained with the assumption of
multihoming.
The upshot of the model in Choi (2010a) is that we derive diametrically opposite results depending on whether or
not we allow the possibility of multihoming after tying. This simple result undoubtedly comes from many special
features of the model. Nonetheless, the model highlights the importance of explicitly considering the role of
multihoming in the antitrust analysis of two-sided markets and cautions against simply taking theoretical results derived in models with “single-homing” and extrapolating them to markets where “multihoming” is prevalent. Multihoming has the potential to counteract the tendency towards tipping and the lock-in effects in industries with
network effects. As a result, bundling does not necessarily lead to the monopolization of the market with
multihoming.
Considering the prevalence of multihoming and exclusive content in information goods industries, it would be
worthwhile to enrich Choi's model. First, the model assumes that there is an exogenous amount of exclusive
content available for each format. It would be desirable to have a model where exclusivity is endogenously created
through the use of exclusive contracts.36 Another avenue of research would be to explore the implications of
bundling and multihoming for the incentives to innovate and create new content in information goods industries.
3.3.5. Bundling as a Rent Extraction Device with Multihoming
When multihoming is possible, consumers are not obligated to use the products included in the bundle and may opt
to use alternative products from other vendors. Carlton, Gans, and Waldman (2010) provide a novel model that
explains why a firm would “tie a product consumers do not use.” In their model, there is a monopolist of a primary
product and a complementary product that can be produced both by the monopolist and an alternative
producer.37 In this setting of complementary products, the monopolist's tied product provides the consumer with a
backup option. The presence of that option affects consumer willingness to pay for the rival's complementary
product, which in turn affects pricing of the monopolized product due to the complementary nature of the two
products. In this model, the monopolist's tied product is never used by consumers. Nonetheless, tying can serve as
a rent-extracting mechanism, and is thus profitable.38 The practice is obviously inefficient to the extent that it
entails additional costs to include the product not used. However, the practice is not exclusionary and does not
foreclose the market (p. 296) as in models of strategic tying with single-homing such as Whinston (1990), Choi
and Stefanadis (2001), and Carlton and Waldman (2002).
3.3.6. Implications of Bundling for Merger Analysis
The possibility of bundling strategy also has important implications for merger analysis. When a merger takes place
among firms that produce different products, the merged entity can now afford to offer bundles that were not
feasible when the firms remained as separate firms. Choi (2008) analyzes the effects of mergers in complementary
markets when the merged firm can engage in bundling.
More specifically, he considers two complementary components, A and B, which consumers combine in fixed
proportions on a one-to-one basis to form a final product. For instance, A and B can be considered as operating
systems and application software, respectively, to form a computer system. He assumes that there are two
differentiated brands of each of the two components A (A1 and A2) and B (B1 and B2). This implies that there are four system goods available, A1B1, A1B2, A2B1, and A2B2.
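In symbols, and using assumed notation for illustration only, the menu of system prices facing consumers after a merger of A1 and B1 with mixed bundling can be written as

$$ P_{11} = \tilde{P}, \qquad P_{12} = \hat{p}_{A1} + p_{B2}, \qquad P_{21} = p_{A2} + \hat{p}_{B1}, \qquad P_{22} = p_{A2} + p_{B2}, $$

where P̃ is the merged firm's bundle price, p̂_{A1} and p̂_{B1} its stand-alone component prices, and p_{A2}, p_{B2} the independent rivals' prices. The effects described in the next paragraph amount to P̃ falling below the pre-merger sum of the two component prices while p̂_{A1} and p̂_{B1} rise above their pre-merger levels.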
In such a framework, Choi (2008) analyzes how the market equilibrium changes after a merger between A1 and B1
when the merged firms engage in mixed bundling. He shows that the merged firm's bundle price is reduced
compared to the sum of individual prices before the merger, as the merged firm internalizes the pricing externalities
arising from the complementarity of the two component products. At the same time, the merged entity raises the
prices of its stand-alone components, relative to their levels prior to the merger. In response to the price cut by the
merged firm for their bundled system and the price increase for the “mix-and-match” systems, the independent
rivals cut price in order to retain their market shares. However, their price cut is less than the one reflected in the
bundle. In conjunction with higher component prices by the merged firm, independent firms lose market shares
compared to the premerger situation. As a result, the merging firms’ market share increases at the expense of the
independent firms. The independent firms unambiguously suffer from the combination of a loss of market share and
the need to cut prices.
The overall welfare effect is ambiguous because a merger with bundling in complementary markets may have both
anticompetitive effects and efficiency benefits. The efficiency benefits, for instance, take the form of internalizing
pricing externalities for the merged firm. Consumers as a group may suffer. With heterogeneous consumer
preferences, some buyers gain and others lose. For instance, those who previously purchased both products from
the two merging firms would gain due to the lower bundle price. However, those who purchased a “mix and match”
system and wished to continue doing so would suffer due to the increased stand-alone prices charged by the
merged firm. It is possible that overall consumer surplus may decline. In addition, the potential anticompetitive
effects may take the form of market foreclosure if the financial impact of the merged firm's bundling made its rivals
unable to cover their fixed costs of production. Alternatively, adverse welfare impacts may also arise from
changed R&D investment incentives.
(p. 297) Choi (2008) also analyzes the effects of a merger with pure bundling under which the firm only sells the
bundle and does not make the individual components available separately. Consistent with the results in Whinston
(1990), he shows that pure bundling is not a profitable strategy if it fails to induce exit of rival firms because it
intensifies competition. However, as in Whinston (1990), pure bundling can still be profitable if the exclusion of
rivals through predation is possible with pure bundling, but not with mixed bundling.39
3.4. Case Studies
I provide a short discussion of two recent bundling cases in information good industries and their welfare
implications.
3.4.1. The Microsoft Antitrust Cases
Microsoft's bundling practices have encountered antitrust investigations on both sides of the Atlantic. In the
European antitrust case, it has been alleged that the company's tying practice of requiring Windows operating
system (OS) users to accept its Windows Media Player software is anticompetitive and hurts digital media rivals
such as RealNetworks. The European Commission (EC) also alleged that Microsoft had leveraged its market power
in the PC OS market into the adjacent work group server OS market by withholding interoperability information. Even
though the interoperability issue was couched in terms of information disclosure, the economic effect of restricting
interoperability between Microsoft's PC OS and rival work group server operating systems is akin to bundling of
Microsoft's PC OS and its own work group server OS because incompatibility between complementary components
from different manufacturers deprives consumers of the ability to mix-and-match and constitutes a virtual bundling,
as we discussed earlier. On March 24, 2004, the EC ruled that Microsoft had abused the “near-monopoly” of its Windows PC operating system and fined it a record 497 million euros ($613 million). As a remedy, the EC
required Microsoft to make available a version of the Windows OS that either excludes the company's Media Player
software or includes competing products, and to disclose “complete and accurate specifications for the protocols
used by Windows work group servers” to achieve full interoperability with Windows PCs and servers.40 The ruling
was appealed, but upheld by the Court of First Instance on September 17, 2007.41
In the U.S., the Department of Justice (DOJ) alleged that Microsoft engaged in a variety of exclusionary practices. In
particular, Microsoft's practice of bundling its web browser with the dominant Windows PC OS was alleged to be
anticompetitive along with other exclusive contracts. The case was eventually settled with a consent decree that
mandated disclosure of application programming interfaces (API) information and required Microsoft to allow end
users and OEMs to enable or remove access to certain Windows components or functionalities (such as Internet
(p. 298) browsers and media players) and designate a competing product to be invoked in place of Microsoft
software. However, the DOJ did not prevent Microsoft from bundling other software with Windows OS in the future.
Whether the DOJ in the US or the EC made the right decisions concerning Microsoft's bundling practices is a difficult question to answer.42 The welfare implications of bundling arrangements are in general ambiguous
because bundling could have efficiency effects even when it has harmful exclusionary effects. The literature
suggests that bundling can be exclusionary. At the same time, however, there may be offsetting effects of bundling
such as enhanced performance due to a seamless integration of products and reduced costs of distribution if the
bundled goods are often used together. The appropriate antitrust policy concerning bundling will ultimately be an empirical question that weighs possible efficiency effects against potential anti-competitive effects, and will depend on the specifics of the case.
3.4.2. Bundling in the Publishing Industry
One of the major developments in the publishing industry is the distribution of content in digital forms through the
Internet. For instance, site licensing of electronic journals revolutionized the way academic articles are accessed.
For academic journals, major publishers also engage in bundling schemes in which individual journal subscriptions
may not be cancelled in their electronic format. The leading example is the ScienceDirect package by Elsevier with
access to 2,500 journals and more than nine million full-text articles (as of July 2010).
Jeon and Menicucci (2006) provide an analysis of publishers’ incentives to practice (pure) bundling and its effects
on social welfare, and derive implications for merger analysis.43 They assume that each library has a fixed budget
that can be allocated between journals and books. In addition, they assume that the library's utility from purchasing
books is a concave function of its expenditure on books whereas the value of each journal is independent of
whether or not the library purchases any other journals. In their model, the effects of bundling on pricing thus arise
solely through its impact on the library's budget allocation between journals and books. More specifically, they
show that bundling entails two distinct effects. First, it has the direct effect of softening competition from books. To
understand this, consider a publisher with two journals of the same value v for a library. With independent pricing,
the publisher charges the same price p for them. Now suppose that the publisher bundles the two journals and
charges the price of 2p for the bundle. Then, the library is strictly better off buying the bundle than spending 2p on books, due to the diminishing marginal utility of spending money on books. As a result, the publisher
can charge more than 2p for the bundle and still induce the library to buy the bundle. Note that this effect
increases with the size of the bundle. Second, bundling strategy has an indirect effect of negative externalities on
other publishers. The reason is that a higher price for the bundle implies less money left for other publishers and
books. As a result, other publishers need to cut the price of their journals in competition with books. Bundling is
thus a profitable (p. 299) and credible strategy in that it increases the bundling firm's profit and may make the
rival firms unable to sell their journals. However, they show that bundling is socially welfare-reducing. They also
explore implications of mergers and show that any merger among publishers is profitable but reduces social
welfare. Their analysis thus has important implications for recent consolidations in the academic journal publishing
industry when the possibility of bundling is considered.
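The budget-allocation logic can be checked with a toy calculation. The sketch below uses an illustrative concave book utility (the square root) and hypothetical numbers; it is not Jeon and Menicucci's own specification, only a numerical illustration of why the bundle can be priced above 2p.

```python
# Toy check: with a concave utility of book spending, the maximum acceptable
# bundle price exceeds twice the maximum acceptable stand-alone journal price.
from math import sqrt

M, v = 100.0, 1.0      # library budget and per-journal value (hypothetical)
w = sqrt               # assumed concave utility of expenditure on books

# Highest stand-alone price p at which the library still buys both journals:
# the binding constraint is the second journal, v = w(M - p) - w(M - 2p).
lo, hi = 0.0, M / 2
for _ in range(60):    # simple bisection
    p = (lo + hi) / 2
    if w(M - p) - w(M - 2 * p) < v:
        lo = p
    else:
        hi = p

# Highest bundle price P the library accepts: 2v = w(M) - w(M - P).
P = M - (w(M) - 2 * v) ** 2

print(round(2 * p, 2), round(P, 2))   # roughly 34.4 versus 36.0, so P > 2p
```

The gap between P and 2p is the “softening of competition from books” described above: because the marginal utility of book spending rises as more money is diverted away from books, the first journal purchase leaves slack that the bundle price can absorb.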
4. Conclusions
This chapter has conducted a highly selective review of the literature on bundling and discussed how the theory
can be applied to information goods. I focused on two strands of the bundling literature: one on price discrimination
and the other on strategic leverage theory.
From the perspective of price discrimination, recent advances in Internet technology present both opportunities and challenges for bundling strategies. With the digitization of information goods and the Internet as a new distribution channel, there are opportunities for large-scale bundling as a price-discrimination instrument, because there is less concern about the inefficiency that may arise from forcing consumers to purchase goods that they value below marginal cost, which is very likely with physical goods. At the same time, Internet technology enables
firms to collect information about customers’ purchase behavior and use sophisticated data-mining software tools
to infer customers’ preferences. The Internet also allows for sophisticated mass customization that produces goods
tailored to each customer's needs without significantly sacrificing cost efficiency. This development may obviate
the need to use bundling as a price discrimination device. In this new environment, firms are constantly
experimenting to figure out the best ways to capture consumer surplus, and the best business models are still in
flux with ever evolving technologies available. However, consumers’ strategic responses to such individualized
pricing may make such strategies less profitable, and bundling is expected to remain an important pricing strategy
for information goods.
The leverage theory of bundling has important implications for antitrust analysis. Bundling typically entails
efficiency effects such as a seamless integration of products and reduced distribution costs even when it has
harmful exclusionary effects. As such, there seems to be a consensus among economists that bundling should not be treated as a per se violation of antitrust laws and that the rule of reason should be adopted in the assessment of
tying arrangements. It would be an important task to come up with general guidelines to assess possible efficiency
effects of bundling against potential exclusionary effects. In particular, tying can be a very effective mechanism
through which a dominant firm in a related market can penetrate one side of the two-sided market to gain an
advantage in competition for the other side. As such, we can expect to observe more tying cases in two-sided (p. 300) markets, and it is essential to understand the impact of tying on competition in such markets and its
welfare consequences. In this regard, one important element to consider is whether multihoming is a viable option
for relevant players.
References
Acquisti, A., Varian, H.R., 2005. Conditioning Prices on Purchase History. Marketing Science 24(3), pp.1–15.
Adams, W.J., Yellen, J.L., 1976. Commodity Bundling and the Burden of Monopoly. Quarterly Journal of Economics
90(3), pp. 475–498.
Amelio, A., Jullien, B., 2007. Tying and Freebie in Two-Sided Markets, IDEI Working Paper No. 445.
Armstrong, M., 1999. Price Discrimination by a Many-Product Firm. Review of Economic Studies 66(1), Special
Issue, pp. 151–168. (p. 303)
Armstrong, M., 2006. Competition in Two-Sided Markets. RAND Journal of Economics, pp. 668–691.
Armstrong, M., 2011. Bundling Revisited: Substitute Products and Inter-Firm Discounts. Economics Series Working
Papers 574, University of Oxford.
Arrow, K., 1962. Economic Welfare and the Allocation of Resources for Invention. The Rate and Direction of
Inventive Activity: Economic and Social Factors, pp. 609–626.
Bakos, Y., Brynjolfsson, E., 1999. Bundling Information Goods: Pricing, Profits and Efficiency. Management Science
45(12), pp.1613–1630.
Bakos, Y., Brynjolfsson, E., 2000. Bundling and Competition on the Internet. Marketing Science 19(1), pp.63–82.
Belleflamme, P., Peitz, M., 2010. Industrial Organization: Markets and Strategies. Cambridge, U.K.: Cambridge
University Press.
Bork, R.H., 1978. The Antitrust Paradox: A Policy at War with Itself. New York, Basic Books.
Bowman, W., 1957. Tying Arrangements and the Leverage Problem, Yale Law Journal 67, pp.19–36.
Bulow, J.I., Geanakoplos, J.D, Klemperer, P.D., 1985. Multimarket Oligopoly: Strategic Substitutes and Complements.
Journal of Political Economy 93, 488–511.
Carbajo, J., De Meza, D., Seidman, D.J., 1990. A Strategic Motivation for Commodity Bundling, Journal of Industrial
Economics 38, pp. 283–298.
Carlton, D.W., Gans, J., Waldman, M., 2010. Why Tie a Product Consumers Do Not Use? American Economic
Journal: Microeconomics 2, pp. 85–105.
Carlton, D.W., Waldman, M., 2002. The Strategic Use of Tying to Preserve and Create Market Power in Evolving
Industries. RAND Journal of Economics, pp. 194–220.
Chen, Y., 1997. Equilibrium Product Bundling. Journal of Business 70, pp. 85–103.
Chen, Y., Riordan, M., 2010. Preference Dependence and Product Bundling. Unpublished manuscript.
Choi, J.P., 1996. Preemptive R&D, Rent Dissipation, and the Leverage Theory. Quarterly Journal of Economics, pp.
1153–1181.
Choi, J.P., 2004. Tying and Innovation: A Dynamic Analysis of Tying Arrangements. The Economic Journal 114, pp.
83–101.
Choi, J.P., 2007. Tying in Two-Sided Markets with Multi-Homing. CESifo Working Paper No. 2073.
Choi, J.P., 2008. Mergers with Bundling in Complementary Markets. Journal of Industrial Economics, pp. 553–577.
Choi, J.P., 2010a. Tying in Two-Sided Markets with Multi-Homing. Journal of Industrial Economics 58, pp. 560–579.
Choi, J.P., 2010b. Compulsory Licensing as an Antitrust Remedy. The WIPO Journal 2, pp. 74–81.
Choi, J.P., Lee, G., Stefanadis, C., 2003. The Effects of Integration on R&D Incentives in Systems Markets.
Netnomics, Special Issue on the Microeconomics of the New Economy, pp. 21–32.
Choi, J.P., Stefanadis, C., 2001. Tying, Investment, and the Dynamic Leverage Theory. RAND Journal of Economics
32, pp. 52–71.
Chu, C.S., Leslie, P., Sorensen, A.T., 2011. Bundle-Size Pricing as an Approximation to Mixed Bundling. American
Economic Review 101, pp. 263–303. (p. 304)
Chuang, J.C.-I., Sirbu, M.A., 1999. Optimal Bundling Strategy for Digital Information Goods: Network Delivery of
Articles and Subscriptions. Information Economics and Policy 11(2), pp. 147–176.
Doganoglu, T., Wright, J., 2010. Exclusive Dealing with Network Effects. International Journal of Industrial
Organization 28, pp. 145–154.
Farrell, J., Katz, M. L., 2000. Innovation, Rent Extraction, and Integration in Systems Markets. Journal of Industrial
Economics 48, pp. 413–432.
Flores-Fillol, R., Moner-Colonques, R., 2011. Endogenous Mergers of Complements with Mixed Bundling. Review of
Industrial Organization 39, pp. 231–251.
Fudenberg, D., Tirole, J., 1984. The Fat Cat Effect, the Puppy Dog Ploy and the Lean and Hungry Look, American
Economic Review 74, pp. 361–368.
Fudenberg, D., Villas-Boas, J. M., 2006. Behavior-Based Price Discrimination and Customer Recognition. In: T.J.
Hendershott (Ed.), Economics and Information Systems: Handbooks in Information Systems Vol. 1, Amsterdam,
Elsevier, pp. 377–436.
Gans, J., King, S., 2006. Paying for Loyalty: Product Bundling in Oligopoly. Journal of Industrial Economics 54, pp.
43–62.
Gao, M., 2009. When to Allow Buyers to Sell? Bundling in Mixed Two-Sided Markets. Unpublished manuscript.
Available at: http://phd.london.edu/mgao/assets/documents/Ming_Gao_JM_Paper.pdf.
Jeon, D.-S., Menicucci, D., 2006. Bundling Electronic Journals and Competition among Publishers. Journal of the
European Economic Association 4(5), pp. 1038–1083.
Kenney, R.W., Klein, B., 1983. The Economics of Block Booking. Journal of Law and Economics 26(3), pp. 497–540.
Loginova, O., Wang, X. H., 2011. Customization with Vertically Differentiated Products. Journal of Economics and
Management Strategy 20(2), pp. 475–515.
Matutes, C., Regibeau, P., 1988. Mix and Match: Product Compatibility without Network Externalities. RAND Journal
of Economics 19(2), pp. 221–234.
McAfee, R.P., McMillan, J., Whinston, M.D. 1989. Multiproduct Monopoly, Commodity Bundling, and Correlation of
Values. Quarterly Journal of Economics 104, pp. 371–384.
Mialon, S.H., 2011. Product Bundling and Incentives for Merger and Strategic Alliance. Unpublished manuscript.
Nalebuff, B., 2000. Competing Against Bundles. In: P. Hammond, Myles, G. (Eds.), Incentives, Organization, and
Public Economics, Oxford, Oxford University Press, pp. 323–336.
Nalebuff, B., 2004. Bundling as an Entry Barrier. Quarterly Journal of Economics 119, pp. 159–188.
Peitz, M., 2008. Bundling May Blockade Entry. International Journal of Industrial Organization 26(1), pp. 41–58.
Posner, R.A., 1976. Antitrust Law: An Economic Perspective, Chicago: University of Chicago Press.
Rochet, J.-C., Tirole, J., 2006. Two-Sided Markets: A Progress Report. RAND Journal of Economics 37(3), pp. 645–
667.
Rochet, J.-C., Tirole, J., 2008. Tying in Two-Sided Markets and the Honor All Cards Rule. International Journal of
Industrial Organization 26(6), pp. 1333–1347.
Schmalensee, R.L., 1984. Gaussian Demand and Commodity Bundling. Journal of Business 57, pp. 211–230. (p.
305)
Shiller, B., Waldfogel, J., 2011. Music for a Song: An Empirical Look at Uniform Song Pricing and Its Alternatives.
Journal of Industrial Economics 59, pp. 630–660.
Simon, H.A., 1971. Designing Organizations for an Information-Rich World. In: M. Greenberger (Ed.), Computers,
Communication, and the Public Interest, Baltimore, The Johns Hopkins Press, pp. 37–72.
Stigler, G.J. 1963. United States v. Loew’s, Inc.: A Note on Block Booking. Supreme Court Review 1963, pp. 152–
157.
Taylor, C.R., 2004. Consumer Privacy and the Market for Customer Information. RAND Journal of Economics 35(4),
pp.631–650.
Trachtenberg, J.A. 2011. Sellers of E-Books Bundling Titles to Promote Authors, New and Old. Wall Street Journal,
Feb 11, 2011, p. B6.
Whinston, M.D. 1990. Tying, Foreclosure, and Exclusion. American Economic Review 80, pp. 837–859.
Whinston, M.D., 2001. Exclusivity and Tying in U.S. v. Microsoft: What We Know, and Don’t Know. Journal of
Economic Perspectives 15, pp. 63–80.
Yoo, C.S., 2009. The Convergence of Broadcasting and Telephony: Legal and Regulatory Implications.
Communications and Convergence Review 1(1), pp. 44–55.
Notes:
(1.) At Random House, digital titles already account for nearly 50 percent of revenue for some fiction best sellers.
See Trachtenberg (2011).
(2.) Bundling can also be practiced by a third party. For instance, online travel agencies sell vacation packages that combine flight, hotel, and rental car with substantial savings relative to what they would cost if purchased separately. The package prices mask individual component prices. The hotels and airlines are more willing to give
lower prices not available otherwise if they do not have to show their individual prices.
(3.) Microsoft Office Suite, for instance, includes word processor (Word), spreadsheet (Excel), data management
(Access), Email (Outlook), and presentation (PowerPoint) programs.
(4.) A variation on this theme is the metering argument in which the purchase of an indivisible machine is
accompanied by the requirement that all complementary variable inputs be purchased from the same company. By
marking up the variable inputs above marginal cost, the seller can price discriminate against intensive users of the machine, using the sale of variable inputs as a metering or monitoring device for the intensity of machine usage.
(5.) Independent pricing is a special case of mixed bundling in which the bundle price is infinity and pure bundling
is a special case of mixed bundling in which component prices are set at infinity.
(6.) A copula is a multivariate distribution with uniform marginals that couples marginal distributions to form a joint distribution. Sklar's Theorem states that the joint distribution of n variables can be represented by a copula and the marginal distributions of the respective variables.
(7.) For the exact conditions on the extent of limited positive dependence, see Chen and Riordan (2010).
(8.) See also Armstrong (1999), who provides a more general but similar asymptotic result. He shows that a two-part tariff, in which consumers pay a fixed fee and a unit price for each product equal to its marginal cost, achieves approximately the same profit as perfect price discrimination as the number of products approaches infinity.
(9.) Bakos and Brynjolfsson (1999) consider a more general case in which the valuations of each good can depend
on the number of goods purchased (and thus on the bundle size) to allow the possibility that the goods in the
bundle can be complements and substitutes.
(10.) Adapted from Figure 1 in Bakos and Brynjolfsson (1999).
(11.) See Simon (1971).
(12.) See Loginova and Wang (2011) for an analysis of implications of mass customization for competition.
(13.) For an excellent survey of the literature on behavior-based price discrimination, see Fudenberg and Villas-Boas (2006).
(14.) See Taylor (2004) and Acquisti and Varian (2005).
(15.) The fully optimal mixed bundling improves profit over individual pricing by only about 2.4 percent. However, in the presence of large fixed costs, as would be typical for information goods, even this small gain can make a large difference in terms of resource allocation if the firm cannot break even without bundling but does with bundling.
(16.) Nalebuff (2000) also shows that a firm that sells a bundle of complementary products has a substantial
advantage over rivals who sell component products individually.
(17.) With only one second good that can be bundled, only one firm bundles in equilibrium; if both firms bundle,
once again, the bundled price is driven down to the marginal costs of the bundled product because there is no
longer product differentiation. With more than one second good, it is possible that both firms choose different
bundles in equilibrium.
(18.) See Belleflamme and Peitz (2010) for a simple and elegant exposition of Chen's model.
(19.) Carbajo et al. (1990) also consider the Cournot case in the secondary market and show that bundling may not
be profitable. Once again, bundling induces a favorable response from the rival, but for a different reason because
prices are strategic complements while quantities are strategic substitutes. See Bulow, Geanakoplos, and
Klemperer (1985) and Fudenberg and Tirole (1984) for more details.
(20.) See Gans and King (2006) who also investigate joint purchase discounts implemented by two separate firms.
They consider a model in which there are two products with each product being produced by two differentiated
firms. They show that when a bundle discount is offered for joint purchase of otherwise independent products,
these products are converted into complements from the perspective of consumers’ purchase decisions.
(21.) Richard A. Posner, Antitrust Law: An Economic Perspective, Chicago: University of Chicago Press, 1976, p. 173. From the consumer's perspective, all that matters is the total price of data processing, not how it is divided between the two products or services, which are machine time and punch cards in this example. We know that profits are maximized when the total price is $1. Thus, any increase in the price of the tied product must be matched by a reduction in the price of the tying product.
(22.) See below for the case of complementary products.
(23.) See Nalebuff (2004) who also analyzes the strategy of bundling as an entry deterrence device. He considers
a situation in which an incumbent with two independent products faces one product entrant, but does not know in
which market the entry will take place. He shows that bundling allows the incumbent to credibly defend both
products without having to lower prices in each market. Note, however, that he assumes a sequential pricing game
(Stackelberg game) to derive his conclusions. If a simultaneous pricing game is assumed, the results can be
markedly different.
(24.) In the terminology of Fudenberg and Tirole (1984), bundling is a “top dog” strategy, while non-bundling
softens price competition and is a “puppy dog” strategy. See also Bulow, Geanakoplos, and Klemperer (1985).
(25.) The Microsoft bundling cases are further discussed below.
(26.) See also Choi (1996).
(27.) See Whinston (2001) for a more detailed discussion of the Microsoft case.
(28.) Carlton and Waldman also present several variants of their model, with slightly different mechanisms for the
link between the first and second period.
(29.) See Chapter 4 on video game platforms by Robin Lee in this Handbook for more details.
(30.) See Armstrong (2006), Rochet and Tirole (2006), and Chapters 14, 7, and 3 by Anderson, Jullien, and Hagiu, respectively, in this Handbook for an analysis of competition in two-sided markets and additional examples of two-sided markets.
(31.) For an analysis of the bundling practice initiated by payments card associations Visa and MasterCard, see
Rochet and Tirole (2008). In this case, merchants who accept their credit cards were forced also to accept their
debit cards under the so-called “honor-all-cards” rule.
(32.) The Microsoft case is further discussed in Section 4.
(33.) On 13 December 2010, the Korea Fair Trade Commission (KFTC) announced that Qualcomm would disclose
ADSP interface information to third party South Korean companies to allow such companies to develop mobile
multimedia software for its modem chip. The KFTC also stated that the disclosure by Qualcomm would begin within
two to 10 months from the date of the announcement. See the KFTC Press Release available at http://eng.ftc.go.kr/
(34.) One notable exception is Peitz (2008) who not only allows for multihoming, but shows that multihoming
actually occurs in equilibrium, that is, some consumers buy the bundle and the competitor's stand-alone product.
(35.) Choi's (2010a) model suggests that such a practice can be anti-competitive.
(36.) See Doganoglu and Wright (2010) for an analysis of exclusive contracts as an instrument of entry deterrence
in a market with network effects and multihoming.
(37.) Carlton, Gans, and Waldman (2010) say that ties are reversible when multihoming is allowed.
(38.) The rent-extraction mechanism in Carlton, Gans and Waldman (2010) is similar to that in Farrell and Katz
(2000) and Choi, Lee, and Stefanadis (2003).
(39.) In a similar vein, Mialon (2011) builds a model in which exclusionary bundling motivates mergers. In her
model, a merger is never profitable if not combined with pure bundling that leads to market foreclosure. A recent
paper by Flores-Fillol and Moner-Colonques (2011) extend Choi's model to allow for both joint and separate
consumption of component products.
(40.) Case COMP/C-3/37.792, Microsoft v. Commission of the European Communities Decision, para 999, available
at http://ec.europa.eu/comm/competition/antitrust/cases/decisions/37792/en.pdf. See Choi (2010b) for a discussion
of compulsory licensing as an antitrust remedy.
(41.) As of this writing, the EC is also investigating IBM for abuse of its dominant position in the mainframe computer market through the bundling of its hardware and mainframe OS and its refusal to license information for interoperability with other software.
(42.) See Whinston (2001) for an excellent discussion of the US Microsoft case.
(43.) See Chuang and Sirbu (1999) for the bundling strategy of academic journals from the perspective of price
discrimination.
Jay Pil Choi
Jay Pil Choi is Scientia Professor in the School of Economics at the Australian School of Business, University of New South Wales.
Internet Auctions
Oxford Handbooks Online
Internet Auctions
Ben Greiner, Axel Ockenfels, and Abdolkarim Sadrieh
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Online Publication Date: Nov 2012
Subject: Economics and Finance, Economic Development
DOI: 10.1093/oxfordhb/9780195397840.013.0012
Abstract and Keywords
This article outlines the theoretical, empirical, and experimental contributions that address, in particular, bidding
behavior in Internet auctions and the auction design in single-unit Internet auctions. Revenue equivalence
generally breaks down with bidder asymmetry and the interdependence of values. There is evidence on the
Internet for a specific kind of overbidding, occurring when there is also competition on the supply side. There is
strong evidence both for the existence of incremental and of late bidding in Internet auctions. The phenomena
appear to be driven partly by the interaction of naive and sophisticated bidding strategies. Multi-unit demand
typically increases the strategic and computational complexities and often results in market power problems. The
emergence of Internet-specific auction formats, such as advertisement position auctions, proxy bidding, and penny
auctions, initiated a whole new research literature that already feeds back into the development and design of
these auctions.
Keywords: Internet auctions, competition, market power, advertisement position auctions, proxy bidding, penny auctions
Auctions are one of the oldest and most basic market mechanisms for selling almost any kind of goods.1 One major
advantage of auctions is their ability to identify and contract with the bidder(s) with the highest willingness to pay in environments with demand uncertainty (Milgrom and Weber, 1982).
Traditionally auctions involved high transaction costs, because bidders typically had to gather at the same place
and time. Since the high transaction costs could only be offset by even greater efficiency gains, auctions were
usually only employed in situations in which price discovery was particularly pertinent. The emergence of
electronic communication and the Internet lowered the transaction costs of auctions dramatically.2 Internet
auctions can be held at any place and time, with bidders participating from all over the world and using proxy
agents to place bids whenever necessary.3 Lower bidding costs imply that the number of participating bidders is
increased, making (online) auctions even more attractive for sellers. The technological advancement gave rise to
the emergence of large auction platforms such as eBay that complement their (different types of) auction services
with related services such as payment handling and buyer insurance. At the same time, online auction platforms
generate and record an enormous amount of detailed market data, much more both in scale and scope than
available from traditional markets. This transparency has created new opportunities for research, which in turn
drives innovations in market design. In fact, the academic literature on Internet auctions has expanded vastly in recent years.4 In this chapter, we provide a selective review of this research, concentrating on those aspects of
auctions that are specific to or that were born out of Internet auctions.5
(p. 307) In the next section we briefly review basic auction theory. Section 3 takes a look at Internet auctions
from a bidder's perspective, investigating bid amounts and bid timing. In Section 4 we consider the seller's
perspective with respect to shill bidding, reserve prices, buy-now options, and the use of all-pay auctions. Section
5 discusses typical multiunit auctions conducted over the Internet. Finally, section 6 concludes with a summary and
outlook for potential future research.
1. Some Basic Auction Theory
In this section we briefly describe some of the most basic theoretical results on auctions that have proven useful
when studying Internet marketplaces.6 We start by describing the standard types of single-unit auctions, and
consider the more complex multi-unit auctions later in this chapter. Single-unit auctions can be classified into openbid and sealedbid auctions. In open-bid auctions, tentative prices are announced, and bidders indicate whether
they would buy at the current price or not. Such open-bid auctions can be further classified into auctions with an
ascending or descending price. In the descending-price open-bid auction (“Dutch auction”)7 , the price starts high
and is lowered by the auctioneer step by step. The item is allocated to the bidder who first accepts the current
price. In the ascending-price open-bid auction the price clock starts low and is then driven either by the
auctioneer (“Japanese auction”) or by the bidders’ own bids (“English auction”). The auction ends once only one
bidder remains in the race. The remaining bidder receives the item and pays the price at which the last bidder
dropped out. In sealed-bid auctions bidders submit their bids simultaneously. Such auctions may differ in their
pricing rule, with the most prominent rules being the first-price and the second-price auction. In both cases, the highest bidder wins the item, paying their own bid in the former case, but the second-highest bid in the latter.
Two standard models are used to describe how bidders value an item. In the private value model, each bidder
knows his own value for the item (i.e., the own maximum willingness to pay). Bidders, however, do not know the
exact values of the other bidders or of the seller. They only know the distributions from which each of the values is
drawn and the fact that all values are drawn independently. Under the common value assumption, there is a single
true value of the object that is unknown but the same for all bidders. Each bidder receives a noisy signal on the
true value of the item and knows the distributions from which the own signal and the signals of the other bidders
are drawn. In hybrid models, private values are either affiliated to each other or combined with a common value.8
In the following, we concentrate on the private values case, especially on the symmetric case, in which all private
values are drawn from the same distribution.9
In the ascending-price open-bid auction, it is a weakly dominant strategy for each bidder to drop out of the auction
when the price reaches one's own value; (p. 308) bidding more might yield a lower payoff in some cases, but
never a higher payoff. Dropping out as soon as one's own value is surpassed is dominant, since at any lower price a
profitable opportunity may be lost, while at any higher price a loss may be incurred. As a result, the bidder with the
highest value receives the good and the auction outcome is efficient. The price to be paid equals the second
highest value among all bidders (possibly plus a small increment), as this is the price where the winner's strongest
competitor drops out.
A similar result is obtained for the second-price sealed-bid auction. Here, by auction rule, the bidder with the
highest bid wins and pays the second highest bid. Thus, one's own bid only affects the probability of winning, but
not the price to be paid. By bidding their own values, bidders maximize their chances of winning, but make sure
that they never pay a price greater than one's own value. Thus, as in the ascending-price auction, bidding one's
own value is a weakly dominant strategy in the second-price sealed-bid auction (as first observed by Vickrey,
1961). When played by all bidders, the strategy results in an equilibrium in which the highest bidder wins the
auction and pays a price equal to the strongest competitor's value.
Bidding in the descending-price open-bid and the first-price sealed-bid auction formats is more complex, because
the winner pays a price equal to one's own bid, not knowing the bid of the strongest competitor. Thus, when
submitting a bid in these auctions, a bidder has to trade off the probability of winning (which is lower with a lower bid)
and the expected profit in case of winning (which is higher with a lower bid). In particular, bidders can only obtain
positive payoffs if they “shade” their bids, i.e., bid less than their value. If we assume risk-neutral bidders then
each bidder's equilibrium strategy is to bid the expected value of the strongest competitor conditional on having
the highest value. The bidder with the highest bid then wins the auction, and the ex-ante expected auction price is
equal to the expected second highest value in the bidder population.
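As a purely illustrative special case (assuming values drawn independently from the uniform distribution on [0, 1], an assumption added here for concreteness and not otherwise relied upon in the chapter), the symmetric equilibrium bid function and the resulting expected price can be written explicitly:

```latex
% Symmetric risk-neutral equilibrium in a first-price auction,
% n bidders, values i.i.d. uniform on [0,1] (illustrative special case)
b(v) \;=\; \mathbb{E}\!\left[\max_{j\neq i} v_j \,\middle|\, \max_{j\neq i} v_j < v\right]
     \;=\; \frac{n-1}{n}\,v ,
\qquad
\mathbb{E}[\text{price}] \;=\; \mathbb{E}\!\left[v_{(2)}\right] \;=\; \frac{n-1}{n+1},
```

where v_{(2)} denotes the second highest of the n values.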
Summing up, for the case of symmetric private values (identically distributed and independently drawn), ex ante all
four auction formats are efficient and yield the same expected revenue. This is the celebrated revenue
equivalence theorem that was first stated by Vickrey (1961) and later generalized by Myerson (1981) and Riley
and Samuelson (1981) to the case of any “standard auction.”10
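Revenue equivalence is easy to illustrate numerically. The following Python sketch (a minimal simulation under the same symmetric uniform-value assumption as above; the function name and parameters are our own) compares a second-price auction with truthful bidding against a first-price auction with the equilibrium shading b(v) = (n-1)/n · v:

```python
import numpy as np

def expected_revenues(n_bidders=5, n_auctions=200_000, seed=0):
    """Monte Carlo check of revenue equivalence with i.i.d. uniform[0,1] values."""
    rng = np.random.default_rng(seed)
    values = rng.random((n_auctions, n_bidders))

    # Second-price auction: bidders bid their values, price = second-highest value.
    sorted_vals = np.sort(values, axis=1)
    second_price_revenue = sorted_vals[:, -2].mean()

    # First-price auction: bidders shade, b(v) = (n-1)/n * v, price = highest bid.
    bids = (n_bidders - 1) / n_bidders * values
    first_price_revenue = bids.max(axis=1).mean()

    # Theoretical benchmark: E[second-highest value] = (n-1)/(n+1).
    benchmark = (n_bidders - 1) / (n_bidders + 1)
    return first_price_revenue, second_price_revenue, benchmark

if __name__ == "__main__":
    fp, sp, bench = expected_revenues()
    print(f"first-price: {fp:.4f}  second-price: {sp:.4f}  theory: {bench:.4f}")
```

Both simulated averages converge to (n-1)/(n+1), the expected second highest value, illustrating the theorem for this special case.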
Revenue equivalence generally breaks down with bidder asymmetry and the interdependence of values. In the
case of asymmetry, there is no clear theoretical prediction as to which type of auction yields the highest revenue
(Krishna, 2002). In the case of symmetric value interdependence, the linkage principle (Milgrom and Weber, 1982)
generally predicts that the more information on the true value is made available by an auction mechanism, the
higher the revenues. Hence, the English auction, which allows bidders to observe each others’ exit strategies,
ranks first among the four classical auctions, followed by the second-price sealed-bid auction that uses the top two
values to determine the price. The first-price sealed-bid auction and the descending price auction jointly come in
last, because neither procedure discloses any information on bidders’ signals until after the auction is
terminated.
(p. 309) By setting a reserve price, sellers can increase revenues beyond the expected second highest bidder
value, if they are willing to sacrifice allocational efficiency (Myerson, 1981; Riley and Samuelson, 1981). In Internet
auctions, this reserve price can take the form of an open start price (below which bids are not allowed), a secret
reserve (which is or is not revealed once it is reached during the course of the auction), or shill bids (i.e., the seller
placing bids in his own auction). A revenue maximizing reserve price is chosen such that, in expectation, the seller
can extract some of the winning bidder's surplus, thereby increasing the seller's expected revenue. The strategy is
successful if the reserve price is below the winner's value and above the value of the winner's strongest
competitor. However, the optimal reserve price comes at the cost of lowered expected efficiency: if the seller sets
the reserve price too high—higher than the highest value—then the item will not be sold.
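The revenue-maximizing reserve price can be stated compactly; the following is the standard condition for the regular symmetric private value case, with a uniform example added purely for illustration:

```latex
% Optimal reserve price r* with seller value v0, value distribution F (density f):
r^{*} \;=\; v_{0} \;+\; \frac{1 - F(r^{*})}{f(r^{*})}.
% Example: values uniform on [0,1] and v0 = 0  =>  r* = 1 - r*  =>  r* = 1/2,
% even though the seller attaches no value to keeping the item.
```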
Although the theory of simple auctions yields useful orientation, it typically does not satisfactorily describe Internet
auctions. In fact, in our survey we deal with a number of auction features that concern institutional and behavioral
complications and are not captured by the simple theory. For instance, the models do not account for endogenous
entry, flexible dynamic bidding, minimum increments, time limits, and various specific pricing and information rules.
If, say, entry into an auction is endogenous, and bidders incur small but positive costs of participation or bidding,
then bidders confronted with a reserve price might not enter so as to avoid potential losses. As a result, with
endogenous entry and bidding costs the optimal reserve price converges to the seller's reservation value with the
number of potential bidders (Levin and Smith, 1996; McAfee and McMillan, 1987; Samuelson, 1985).
Moreover, the behavioral assumptions underlying the simple equilibrium analysis are often too strong. For instance,
bidders might need to “construct” values, and preference construction is subject to endowment-, framing- and
other cognitive and motivational effects. Also, risk-averse bidders should bid more aggressively in first-price
sealed-bid and descending-price open-bid auctions, which thereby yield higher revenues than auction formats
with (weakly) dominant equilibrium strategies.11 Next we discuss some selected institutional and behavioral
complexities that seem relevant for Internet auctions.
3. Bidding Behavior
Notwithstanding the broad spectrum of Internet auction designs, we recognize the basic prototypes and their strategic
properties in many Internet auction markets. One popular format on auction markets is what might be interpreted as
a hybrid of the English and the second-price sealed-bid auction as prominently employed by eBay (see also
Lucking-Reiley, 2000a, 2000b). In the format called “proxy bidding” (sometimes also referred to as “maximum bid”
or “auto bid”), bidders submit their maximum bids to eBay, knowing that “eBay will bid incrementally on your behalf
(p. 310) up to your maximum bid, which is kept secret from other eBay users” (statement on eBay's bid
submission page). Only outbid maximum bids are published. If we assume for the moment that all bidders submit
their bid at the same time, then the “proxy bidding” mechanism implements a second-price sealed-bid auction: the
bidder with the highest (maximum) bid wins and pays the second highest (maximum) bid plus one bid increment.
However, as the auction is dynamic, bidders might decide to submit their maximum bids incrementally, thereby
acting as if in an increasing-price open-bid English auction. Yet we emphasize that the auction mechanism on eBay
differs in more than one way from a standard English auction. For one thing, not the last, but the highest bid wins.
For another, the auction has a fixed deadline. Still, related to what we predict in English and sealed-bid second-price auctions, bidding one's own private value on eBay (at some point before the end of the auction) can be—
under certain assumptions—part of an equilibrium in undominated strategies (Ockenfels and Roth, 2006).
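To make the mechanics of proxy bidding concrete, the following Python sketch mimics the price-setting logic described above (a simplified illustration, not eBay's actual implementation; the function name, increment, and start price are our own assumptions):

```python
def proxy_auction(max_bids, increment=0.50, start_price=1.00):
    """Simplified eBay-style proxy bidding.

    max_bids: list of maximum bids in the order they are submitted.
    Returns (winning_bidder_index, final_price).
    """
    leader, leader_max = None, None
    price = start_price
    for i, bid in enumerate(max_bids):
        if bid < price:
            continue  # a bid below the current asking price is rejected
        if leader is None:
            leader, leader_max = i, bid
            price = start_price
        elif bid > leader_max:
            # New leader: price rises to the old leader's maximum plus one increment,
            # capped at the new leader's maximum.
            price = min(leader_max + increment, bid)
            leader, leader_max = i, bid
        else:
            # Challenger outbid: price rises to the challenger's bid plus one increment,
            # capped at the leader's (still secret) maximum.
            price = min(bid + increment, leader_max)
    return leader, price

# If all maximum bids arrive "simultaneously", the highest bidder wins at the
# second-highest maximum plus one increment, as in a second-price auction:
print(proxy_auction([12.00, 27.50, 19.00]))  # -> (1, 19.50)
```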
Whether actual bidding on eBay and other Internet auction markets is in line with theory or not, can be studied in
the field or in the laboratory. Laboratory experiments test the predictions of auction theory in a highly controlled
environment.12 In such experiments, bidders’ values are often induced (and thereby controlled for) by assigning a
specific cash redemption value to each bidder. The bidder who wins the item receives a payoff equal to the
redemption value minus the price. Bidders who do not win receive zero. In addition, bidders are typically provided
sufficient information about the number of potential bidders and the distribution of the bidders’ values, so that the
assumptions made in the theoretical auction models are met.
In the following, we discuss some bidding patterns—overbidding, winner's curse, and late bidding—that have been
investigated both in the lab and on the Internet.
3.1. Overbidding in Private Value Auctions
A robust observation in laboratory experiments is that while bidding in ascending-price private-value auctions
usually comes very close to the bidding strategy predicted by theory (bidders stay in the auction until their value is
reached, and drop out at that point), bidders exhibit a tendency to bid higher than predicted in other private-value
auction formats.13 Several explanations have been put forward in the literature to explain overbidding in private-value auctions. The oldest of these explanations is the existence of risk aversion among bidders. A risk-averse
bidder will prefer to bid higher in order to increase the likelihood of winning in exchange for a lower profit in case of
winning. However, evidence from laboratory experiments suggests that while the assumption of risk aversion can
account for some of the overbidding observed in private-value auctions (Cox et al., 1985), it cannot explain the full
extent of overbidding and, in particular, not in all auction (p. 311) formats.14 In a more recent study, Kirchkamp et
al. (2006) provide a rigorous experimental comparison of first- and second-price auctions, in which the degree of
risk is varied. While they find no effect of risk-aversion on bidding in second-price auctions (which corresponds to
the fact that the equilibrium in second-price auctions is in weakly dominant strategies that should not be affected
by the bidders’ risk attitudes), they identify and quantify a significant effect of risk-aversion on bidding in first-price
auctions. Interestingly, the evidence once again suggests that risk-aversion is insufficient to explain the full extent
of overbidding.
Regret is a complementary explanation of overbidding behavior in laboratory experiments. The idea is that a
bidder who loses a first-price auction after submitting a discounted bid, but observes a winning bid lower than one's
own value, will regret not having bid higher (“loser-regret”). Similarly, a bidder who wins an auction, but observes a
second highest bid (much) lower than his own bid, will regret not having submitted a lower bid (“winner-regret”).
Ockenfels and Selten (2005) explore this idea in laboratory sealed-bid first-price auctions with private values. They
find that auctions in which feedback on the losing bids is provided yield lower revenues than auctions where this
feedback is not given. They introduce the concept of weighted impulse balance equilibrium, which is based on a
principle of ex-post rationality and incorporates a concern for social comparison, and show that this captures their
results and those of Isaac and Walker (1985) in a related experiment.
Filiz-Ozbay and Ozbay (2007) and Engelbrecht-Wiggans and Katok (2007) formalize a similar idea, assuming that
bidders anticipate ex ante that they would experience negative utility from post-auction regret and so take the
effect into account when submitting their bids. Filiz-Ozbay and Ozbay (2007) conduct laboratory experiments with
first-price sealed-bid auctions in which they, too, manipulate the feedback that bidders receive after the auction in
order to allow for winner-, loser- or neither form of regret. In particular, in the winner-regret condition, the auction
winner is informed about the second-highest bid after the auction, while the losers do not receive any information.
In the loser-regret condition, only the losers are informed about the winning bid, while the winners receive no
feedback. The results provide strong evidence for loser-regret, but no significant support for winner-regret. With
feedback to losers about the winning bid, bidders submit significantly higher bids than with no feedback. A small but
significantly negative effect is also found when winners are informed (rather than not informed) about the second-highest bid.
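One common way to formalize anticipated regret in a first-price auction, in the spirit of the papers just discussed (the exact functional form below is our stylized sketch, not the specification of any of these studies), is to let a bidder with value v and bid b, facing the highest competing bid W, maximize:

```latex
% Stylized anticipated-regret objective
% (alpha = loser-regret weight, beta = winner-regret weight):
U(b) \;=\; \mathbb{E}\Big[\big(v - b - \beta\,(b - W)\big)\,\mathbf{1}\{b > W\}\Big]
      \;-\; \alpha\,\mathbb{E}\Big[(v - W)\,\mathbf{1}\{b < W < v\}\Big].
% The loser-regret term (alpha) pushes the optimal bid up;
% the winner-regret term (beta) pushes it down.
```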
Further evidence comes from an experiment by Engelbrecht-Wiggans and Katok (2009), who allow for learning,
and let subjects bid against computerized bidders in order to exclude social determinants of bidding behavior.
They implement different payment rules inducing a controlled variation of the variance of payoffs, which allows
them to isolate the effect of risk aversion on bidding behavior. While an effect of risk-aversion is not supported by
the data, the authors find evidence of both winner-regret and loser-regret. In addition, they find that the magnitude
of regret persists with experience.
(p. 312) While, as Filiz-Ozbay and Ozbay (2007) suggest, winner-regret only plays a role in first-price auctions,
loser-regret may also be felt in Dutch auctions, where the loser is always informed about the winner's bid, but the
winner is typically not informed about the loser's potential bid. So far, however, no attempts have been made in the
literature to explain the overbidding that is sometimes observed in second-price or ascending-bid auctions with
regret.
Another explanation that can rationalize overbidding both in first-price and in second-price sealed-bid auctions is spite, i.e., the negative utility of seeing someone else win the auction (see Morgan et al., 2003, and
Ockenfels and Selten, 2005). Cooper and Fang (2008) provide some experimental evidence that—consistent with
the spite hypothesis—bidders in second-price auctions are more likely to overbid if they believe that other bidders
have much higher values than themselves. Note, however, that the interpretation of the observations in terms of
social comparison may be confounded by the fact that a greater difference between one's own value and that of
other bidders also implies a lower risk of being exposed when overbidding. Hence, as the difference between
values increases, the expected cost of overbidding decreases for the bidder with the lower value.15
The studies discussed so far concern laboratory experiments testing standard auction theory. There appears to be
much less overbidding in Internet auction formats. Garratt et al. (2008) conduct second-price auctions over the
Internet inducing values for experienced eBay buyers and sellers. While they find some variance of bids around
the induced values, they do not observe a particular tendency to overbid or underbid. Ariely et al. (2005) employ
eBay's hybrid format in the laboratory and find that over time—with increased experience of bidders—bids
converge to induced values. Greiner and Ockenfels (2012) use the eBay platform itself to conduct an experiment
with induced values. This study, too, does not reveal any systematic overbidding or underbidding. On average, the
losers’ last bids were very close to their private redemption values (winners’ bids are not observed on eBay). In
fact, some studies report significant underbidding due to “reactive” or “incremental” bidding. Zeithammer and
Adams (2010), for instance, find a downward bias of bids on eBay due to “some sort of reactive bidding” (see
Section 3.3 for more on this). More specifically, their field data suggest that the top proxy bid is often less than the
top valuation and too close to the second highest bid.
On the other hand, there is evidence on the Internet for a specific kind of overbidding, occurring when there is also
competition on the supply side. In a recent study, Lee and Malmendier (2011) collect price data from a large
sample of Internet auctions that have a fixed-price offer for an identical item (in exactly the same quality) available
on the same website in the course of the entire auction. This fixed-price offer is obviously an easily accessible
outside option for bidders and, thus, should define the upper bound for bids in the auction. Surprisingly, however,
they find that the auction price is higher than the fixed price in about 42 percent of the auctions of their first
database and in about 48 percent of their second database.16 On average, the winners in the auctions pay about
2 percent more than the fixed-price offer. These observations are in line with, but even stronger than, (p. 313)
earlier results by Ariely and Simonson (2003), who report that for 98.8 percent of the 500 online auctions that they
study a lower retail price can be found within 10 minutes of search on the Internet.
Such phenomena obviously cannot easily be explained by risk-aversion, regret, or social comparison, because the
alternative offer comes at no risk and reduces the price to be paid and thus also one's relative position. One
alternative explanation might be auction fever.17 The term “auction fever” usually refers to a kind of excitement or
arousal of bidders due to the competitive interaction with other bidders. One approach to model the auction fever
phenomenon is to assume a quasi-endowment effect (Thaler, 1980), which implies that the value of an item
increases during an auction as the bidders become more and more emotionally attached to the item. Auction fever
might also be based on a rivalry effect (based on arousal in competition, Heyman et al. 2004), or on uncertain
preferences, that is, bidders’ uncertainty in the assessment of their own willingness to pay (Ariely and Simonson,
2003).
Some evidence from empirical and experimental studies suggests that both the quasi-endowment and the rivalry
effect are present. Ku et al. (2005), for example, aim to distinguish between rational bidding and the different types
of auction fever. They also consider escalation of commitment, a kind of sunk-cost fallacy (having invested in search and bidding, bidders would like to recover those costs by winning rather than realizing the loss). The authors use data from art auctions,
conducted both live and over the Internet, and combine the data with survey responses obtained from (most of)
the bidders. They find that the observations are both in line with simple models of item commitment (quasi-endowment) and with competitive arousal (rivalry). Furthermore, bidders at live auctions report having exceeded
their bid limits more often than Internet bidders. An additional laboratory experiment provides further support for
both explanations.
Similarly, Heyman et al. (2004) use a survey on hypothetical Internet auction behavior as well as a performance-paid experiment to distinguish between the rivalry effect (called “opponent effect” here) and the quasi-endowment
effect. The former is tested by correlating the final bid to the number of bids others submitted, and the latter by
correlating the final bid to the amount of time a participant held the highest bid in the course of the dynamic
auction. Again, both explanations find support in the data. Bidders submit higher final bids with more competition
and with longer temporary “possession” of the item during the auction. An implication of these results is that
lowering start prices might enhance the seller's revenue by increasing the probability or extent of auction fever,
because they allow for more rivalry and longer endowment periods. In fact, low start prices are very popular in (p.
314) Internet auctions and the currently highest bidder is often referred to as the “current winner.”
Lee and Malmendier (2011) use their field data on overbidding to examine the evidence for two alternative
explanations: auction fever versus a limited attention to the outside options. They find almost no correlation
between the time spent on the auction and the size of bids and interpret this as evidence against the quasi-endowment effect and, thus, as evidence against auction fever. In contrast, they do find some evidence for the
limited attention hypothesis, because they observe a strong positive correlation between the on-screen distance of
auction listings to the corresponding fixed-price listings and the probability that the auction receives high bids.
To conclude, while laboratory research in standard auction formats has produced substantial and robust evidence
for overbidding (measured in terms of one's own value), the evidence is mixed in Internet auctions. In simple eBay-like environments, there is so far hardly any evidence for overbidding, and even some evidence for underbidding
due to incremental or reactive bidding strategies. When measured with respect to competing offers, however, there
is strong evidence that some buyers overpay. The reasons are not fully understood yet. It seems that risk aversion
actually accounts for some of the overbidding in laboratory first-price sealed-bid auctions, but it seems to have
little relevance in many other auction formats. Ex-post rationality and regret also seem to play a role. However,
risk aversion and regret cannot explain paying more than what is offered by other sellers. Auction fever, due to
competitive emotional arousal or to a quasi-endowment effect, finds mixed support in experimental and empirical
data. Limited attention may also contribute to overbidding, adding yet another behavioral effect to the set of
possible explanations. While it seems plausible that some combination of these effects drives overbidding behavior
in different auction settings, more research is needed to attain a full picture.18
3.2. The Winner's Curse in Common Value Auctions
Winning a common value auction is often “bad news,” because it implies that the winner was the most optimistic
(had the highest signal) regarding the true value of the good. In fact, abundant experimental (and some empirical)
evidence shows that winners often overpay for the auctioned good, i.e., fall prey to the winner's curse (Kagel and
Levin, 1986, 2002; Thaler, 1988). The greater the number of bidders, the more likely it is for the winner's curse to
occur (Bazerman and Samuelson, 1983). While the extent of the phenomenon depends on gender, grades, and
other traits (Casari et al. 2007), it is generally persistent over many rounds (Lind and Plott, 1991), and also appears
if the good has only a weak common-value component (Rose and Kagel, 2009), proving its general robustness
(Grosskopf et al., 2007; Charness and Levin, 2009).
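A small Monte Carlo sketch (our own illustration with arbitrary parameters, not drawn from the studies cited above) shows why naively bidding one's signal produces the winner's curse in a second-price common value auction:

```python
import numpy as np

def naive_common_value_auction(n_bidders=6, n_auctions=100_000, noise=10.0, seed=1):
    """Winners' average profit when everyone naively bids their own signal."""
    rng = np.random.default_rng(seed)
    true_value = rng.uniform(50.0, 150.0, size=n_auctions)          # common value V
    signals = true_value[:, None] + rng.uniform(-noise, noise,      # noisy signals
                                                size=(n_auctions, n_bidders))
    sorted_signals = np.sort(signals, axis=1)
    price = sorted_signals[:, -2]          # second-price rule: pay the second-highest bid
    profit = true_value - price            # winner's profit with naive (signal) bidding
    return profit.mean()

if __name__ == "__main__":
    print(f"average winner profit with naive bidding: {naive_common_value_auction():+.2f}")
```

With six bidders and signal noise of plus or minus 10, the winner on average pays several units above the true value; rational bidders must therefore shade their bids, and the more so the more bidders participate.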
The evidence for the winner's curse from Internet auctions is weaker than the evidence from laboratory
experiments. Bajari and Hortaçsu (2003) analyze a (p. 315) sample of 407 auctions for mint and proof sets of
collectible coins on eBay. The authors argue that given the existence of an active resale market for coins and the
impossibility to inspect coins before purchase, evaluating these coins involves substantial uncertainty about their
common value component. Given the common value character of the items, rational bidders should discount their
bids to prevent the winner's curse, and the more so the higher the number of bidders participating in the auction.
Using both Poisson regressions and a structural model incorporating bidding behavior in second-price common-value auctions, Bajari and Hortaçsu (2003) find a significant negative impact of the number of participating bidders
in the coin auctions on the size of individual bids. In particular, the structural model yields a significant degree of
uncertainty concerning the value of the good and shows that with each additional bidder bids are reduced by 3.2
percent on average.19 The model also indicates that a reduction of the uncertainty concerning the item's value
would lead to a significant increase in the bids relative to signal—as theory predicts. These results suggest that
bidders in the field are aware of the winner's curse and rationally adapt their behavior in order to prevent it.
Bajari and Hortaçsu (2003) do not find that inexperienced bidders are more susceptible to the winner's curse, but a
study by Jin and Kato (2006) does. They report a strong positive correlation between bidders' inexperience and the
degree to which the item is overpaid. They first gathered data on bids and prices for ungraded baseball cards that
were traded on eBay and found that buyers pay 30 percent to 50 percent more when the seller claims the card to
be of high quality. In a second step, the authors purchased 100 ungraded baseball cards from eBay, half from
sellers who made claims about high quality, and half from sellers who did not make such claims. The researchers
carried out their bidding in such a way as to minimize distortions of the data: they only bid in the last five minutes and only if
there was at least one bid present. After purchase, all cards were submitted to a professional grading service, in
order to obtain an objective measure of item quality. It turns out that—conditional on authentic delivery—the quality
of cards received from high-claim sellers is indistinguishable from the quality of cards purchased from the other
sellers. When including defaults and counterfeits as zero-quality deliveries, the average quality from sellers
claiming a high quality is actually lower than from sellers who made no claims. Since especially inexperienced
bidders pay significantly higher prices for the high-claim cards, it seems that they fall prey to the winner's curse by
being too optimistic about the reliability of the claims. The most optimistic bidder in each auction wins the item and
more often than not pays too high a price.20
A novel approach to assess the role of information dispersion in Internet auctions is introduced by Yin (2009), who
complements the information available from the online auction platform with survey information on the distribution
of value assessments. The survey participants (not involved in the auctions) are asked to assess the maximum
value of the items after receiving the complete online item descriptions, but no information on the seller or other
aspects of the auction. Yin (2009) uses the resulting distribution of values to estimate rational (p. 316) equilibrium
and naive bidding strategies both under the assumption of common and private values. She concludes that her
data on PC auctions are best in line with the equilibrium of a common value setting with almost no signs of a
winner's curse. The result hinges on the fact that bids negatively correlate with the dispersion of the value
assessments, because bidders bid more cautiously, avoiding the winner's curse and inducing lower prices the
greater the dispersion is. This result, too, indicates that bidder behavior in Internet auctions may be more
sophisticated than laboratory studies have suggested. More research seems necessary to identify what
behavioral, institutional and methodological differences drive the different conclusions regarding bidder naivety
and winner's curse in the laboratory and in Internet auctions.
3.3. Late and Incremental Bidding
In most traditional auction formats, the timing of bids is not an issue. In sealed-bid auctions, all bidders submit their
bid simultaneously. In increasing or decreasing price clock auctions, the auction is paced by the auctioneer and
the size of the bids. However, in English auctions and its dynamic variants on Internet platforms, the bidders
endogenously drive the auction price, turning the timing of the bids into part of the bidding strategy. In this setting,
late bidding is a pervasive feature of Internet auctions (e.g., Roth and Ockenfels, 2002; Bajari and Hortaçsu, 2003).
The theoretical and empirical work on this phenomenon suggests that it is closely related to the ending rule of the
respective auction.
The simplest rule for ending a dynamic auction is a hard close, that is, when the prespecified end of the auction
duration is reached, the auction terminates and the bidder with the highest bid is the winner of the auction. This
rule is employed on eBay, among other sites. Based on survey data, Lucking-Reiley (2000a) lists Internet auction
lengths ranging from 60 minutes up to 90 days. Other platforms, such as Amazon, Yahoo, and uBid, use or used (not all of these auction houses still exist) a flexible timing rule often referred to as a soft close:
whenever a bid is submitted in the last, say, 10 minutes of the soft-close auction, the auction time is extended by a
fixed amount of time, for example, another 10 minutes. While a substantial amount of late bidding (also called
“sniping”) is found on hard-close auctions, very little late bidding is observed on soft-close auctions. Figure 12.1
from Roth and Ockenfels (2002) reports the cumulative distribution of the timing of the last bid in a sample of 480
eBay and Amazon auctions with at least two active bidders. About 50 percent of the eBay hard-close auctions
attracted bids in the last 5 minutes (37 percent in the last minute and 12 percent in the last 10 seconds) compared
to only about 3 percent of the Amazon soft-close auctions receiving bids in the last 5 minutes before the initially
scheduled closing time or later.21
Figure 12.1 Cumulative distributions of the timing of auctions' last bids on eBay and Amazon (from Roth and Ockenfels, 2002).
In a controlled laboratory experiment with a private-value setting, Ariely et al. (2005) replicate the empirical
findings. While bidders in the Amazon-like mechanism converge to bidding early, participants in the eBay mechanism tend to bid later. (p. 317)
There are many good reasons for bidding late. The difference in bid timing between eBay's hard-close and
Amazon's soft-close design suggests that sniping is driven by strategic behavior rather than simple non-strategic
timing (Roth and Ockenfels, 2002).22 In particular, sniping avoids at least four different types of bidding wars: with
like-minded bidders, with uninformed or less informed bidders, with shill-bidding sellers, and with incremental
bidders. Ockenfels and Roth (2006) develop a game-theoretical model with which they demonstrate that, given a
positive probability that a bid does not arrive when submitted in the last minute, mutual late bidding might constitute
an equilibrium between bidders who have similar values for the good (i.e., “like-minded” bidders). The reason is
that when submitting early, the bidder either loses the auction (yielding a profit of zero) or wins the auction
(yielding a profit close to zero, since the other like-minded bidders submit similarly high bids). But if all bidders
submit late, then there is a positive probability that only one of the bids arrives, implying a high profit for the winner.
However, this explanation of sniping is not fully supported by the empirical evidence. For instance, Ariely et al.
(2005) find that, contrary to the theory, changing the arrival probability of late bids from 80 percent to 100 percent
leads to an increase in the extent of late-bidding in their experiment.23
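A back-of-the-envelope calculation (our own numerical illustration of the argument in the preceding paragraph, with made-up numbers) shows why mutual sniping can pay for two like-minded bidders:

```latex
% Two bidders with (almost) common value v = 100, minimum price 10, and probability q = 0.8
% that a last-second bid is successfully transmitted.
% Early bidding: both maximum bids become known and the price is bid up to about v, so profit ~ 0.
% Mutual sniping: with probability q(1-q) only my bid arrives and I win at the minimum price,
% so the expected sniping profit is roughly
\mathbb{E}[\pi_{\text{snipe}}] \;\approx\; q(1-q)\,(v - 10) \;=\; 0.8 \times 0.2 \times 90 \;=\; 14.4 \;>\; 0 .
```

The sketch ignores ties and the exact price when both bids arrive, but it captures why a positive transmission failure probability can sustain mutual late bidding.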
Bidding late might also be a reasonable strategy if values are interdependent (i.e., have a common value
component). Imagine an expert spots a particularly good deal on eBay, an original antique which is wrongly
described by the seller. If the expert entered the auction early, he would run the risk of revealing this information,
as other bidders may observe his activity. Thus, he has strong incentives to wait until the end (see Bajari and
Hortaçsu, 2003, and Ockenfels and Roth, 2006, for formalizations of this idea).
Sniping may also be an effective strategy against shill-bidding. If the seller bids himself in the auction in order to
raise the price close to the maximum bid of the highest bidder, then delaying the bid until the end of the auction
undermines that (p. 318) strategy effectively, as it does not leave the seller enough time to approach the real
maximum bid (Barbaro and Bracht, 2006; Engelberg and Williams, 2009).
Late bidding might be a best response to a different type of bidding commonly observed in Internet auctions:
incremental bidding. Here, a bidder does not make use of the proxy mechanism provided by the auction platform,
but rather increases his “maximum” bid incrementally—as if bidding in an English auction. Against incremental
bidders, waiting and sniping prevents a bidding war (e.g., Ely and Hossain, 2009). The empirical evidence for the
existence of incremental bidding and sniping as a counter-strategy is quite strong. Among others, Ockenfels and
Roth (2006) report the co-existence of incremental and late bidding on eBay. Wintr (2008) observes substantially
later bids from other bidders in the presence of incremental bidders in the auction. Similarly, Ely and Hossain
(2009) find their model of incremental and late bidding confirmed in the data collected in a field experiment. See
also Ariely et al. (2005), Ockenfels and Roth (2006), Wilcox (2000), Borle et al. (2006) and Zeithammer and Adams
(2010), who all find evidence that incremental bidders tend to be more inexperienced.
When some bidders use late bidding in response to incremental bidding, why is there incremental bidding in the
first place? A number of attempts to explain the incremental bidding pattern in Internet auctions have been made in
the literature. Proposed explanations include auction fever or quasi-endowments (see above), as well as bidding
simultaneously in multiple competing auctions. The auction fever explanation is straightforward. Bidders are
assumed to become emotionally so involved in the auction that the perceived in-auction value of the item (or of
winning the auction) increases in the course of the auction (see Section 3.1). Bidders facing competing auctions
are assumed to bid incrementally to avoid being “caught” with a high price on one auction, while the competing
auctions end with lower prices. In fact, several authors have shown that bidding incrementally across auctions
(always placing the next bid at the auction with the currently lowest price) can be an equilibrium in a competing
auction setting (Anwar et al., 2006; Peters and Severinov, 2006).24 Similarly, in Nekipelov's (2007) model of
multiple concurrent auctions, bidders have an incentive to bid early to deter entry, which might also lead to
incremental bidding patterns. Finally, incremental bidding may also be due to some form of uncertain preferences.
Rasmusen (2007), for example, proposes a model in which bidders are initially not fully aware of their own value.
But the bidders only invest in thinking hard and gathering information as the auction proceeds and they
recognize that they might actually have a chance to win. To avoid negative surprises, bidders start with low bids
and increase these only as good news on the true value arrives. Similarly, Hossain (2008) develops a model of a
bidder who only knows whether his value exceeds a given price once he sees the price. Hossain shows that such
a bidder would start with a low bid and then bid incrementally. In a recent theoretical paper, Ambrus and Burns
(2010) show that, when bidders are not able to continuously participate in an auction but can only place bids at
random times, there exist equilibria involving both incremental and late bidding. Regarding the profitability of late
bidding, the field evidence is mixed. (p. 319) Bajari and Hortaçsu (2003) and Wintr (2008), for example, find no
differences in final prices paid by early or late bidders, while Ely and Hossain (2009), Gray and Reiley (2007) and
Houser and Wooders (2005) find small positive effects of sniping on bidder profits.
To sum up, there is strong evidence both for the existence of incremental and of late bidding in Internet auctions.
The phenomena seem to be driven partly by the interaction of naive and sophisticated bidding strategies. Also, the
design of the auction termination rules proves to be an important strategic choice variable of Internet auction
platforms, strongly affecting the timing of bids. However, the evidence on the extent to which incremental and late
bidding affects efficiency and revenue is mixed, although there are indications that sniping tends to hamper
auction efficiency and revenue. Ockenfels and Roth (2010) provide a more detailed survey of the literature on
sniping and ending rules in Internet auctions.
4. Seller Strategies
4.1. Auction Formats
Internet auction platforms can choose which auction formats to offer, and sellers and buyers can choose which
platform to use. With so many choices, the question obviously is which designs are successful in which
environments. The Revenue Equivalence Theorem that we discussed in Section 2 suggests that auction design
may not matter much, provided a few simple assumptions are met. There is quite a bit of empirical and experimental
evidence, however, that even in very controlled environments revenue equivalence does not hold.25
In the first empirical Internet auction study, Lucking-Reiley (1999) compares the revenue of different mechanisms
for online auctions in a field experiment conducted on an Internet newsgroup. Auctions on that platform were
common so that different mechanisms could be easily tested. The author auctioned identical sets of role-playing
game cards to experienced bidders, who were familiar with the items. The time horizon of the auctions was much
longer than in related laboratory studies (days rather than minutes). In the first experiment, Lucking-Reiley (1999)
sold 351 cards using first-price and Dutch auctions, controlling for order effects. The main result is that for 70
percent of the card sets, the Dutch mechanism yielded higher revenues than the first-price auction, with an
average price advantage of 30 percent. This result is both in contrast to auction theory, which predicts identical
revenues across these formats for any risk preferences, and to the findings from laboratory experiments, in which
usually higher revenues are observed in first-price auctions.
In the second experiment, Lucking-Reiley (1999) sold 368 game cards in English and second-price auctions. This
time he observes an order effect. The auction type conducted first (be it English or second-price auction) yields
lower revenues than (p. 320) the second auction type. However, the overall differences in revenues are small
and barely significant. Given the small differences in observed revenues, Lucking-Reiley (1999) conjectures that
English auctions and second-price auctions are approximately revenue equivalent in the field.
The design of Internet auctions, however, often goes beyond the traditional formats, patching auction types together or
designing completely new formats. Formats such as the unique bid auctions, which we discuss in detail in a later
subsection, or the combination of proxy-bidding and buy-it-now pricing took over the market long before any
auction theorist had a chance to study them. In fact, excellent auction theory reviews (e.g., Klemperer 1999 or
Milgrom 2004) are outpaced so quickly by the inventiveness of Internet auction designers that we must concede
that auction research is transforming into history faster than we can write it up.
On the other hand, however, auction researchers have also come up with formats that are still awaiting an
application—or at least a serious test—in the field. Katok and Kwasnica (2008), for example, present an intriguing
study on the effects that can be achieved by changing the pace of auctions. Comparing first-price sealed-bid
auctions with Dutch auctions, they find that the former yield greater revenues when time pressure is high.
Interestingly, the opposite is true when the clock runs slowly. In another study that proposes that auction duration
may be an important design variable, Füllbrunn and Sadrieh (2012) compare standard hard-close ascending-bid
open auctions with candle auctions that differ only in the termination mechanism. They find that using a stochastic
termination rule as in candle auctions can substantially speed up the auction without a loss of efficiency, revenue,
or suspense. These examples show that the research on auction formats is still far from being concluded,
especially because new design parameters are being introduced frequently.
4.2. Reserve Prices
Auction theory postulates that expected revenues in an auction can be maximized by setting an appropriate
reserve price or minimum bid (Section 2). Reiley (2006) uses a field experiment to analyze the effects of reserve
prices in Internet auctions. He focuses on three theoretical predictions: First, increasing the reserve price should
decrease the number of bidders participating in the auction. Second, a higher reserve price should decrease the
probability of selling an item. And finally, increasing the reserve price should increase the revenue, in case the
auction is successful. The field experiment employed sealed-bid, first-price auctions with an Internet news group
for game cards, rather than a dedicated auction platform such as eBay. The reserve price in the auctions was
used as the main treatment variation. As a reference for induced reserve prices, Reiley (2006) used the so-called
Cloister price, which equals the average selling price for the different game cards on a weekly basis.
(p. 321) Reiley (2006) finds that, as theory predicts, introducing a reserve price equal to 90 percent of the
reference price decreases the number of bidders and the number of bids. Varying the reserve price yielded a
monotonic linear relation between reserve price and number of bidders. In particular, increasing the reserve price
from 10 percent to 70 percent of the reference price decreased the number of bids from 9.8 to 1.0 on average
(with no further significant change for higher reserve prices). Correspondingly, the probability of selling an item is
reduced with higher reserve prices. Finally, auction revenues for the same card, conditional on auction success,
are significantly higher if a reserve price of 90 percent was set, compared to no reserve price.26 However,
considering unconditional expected auction revenues (i.e., assuming that an unsold card yielded a price of $0)
draws an ambiguous picture: in pair-wise comparison, auctions with a reserve price of 90 percent do not yield
higher revenues than those without a reserve price, as many cards were not sold with the reserve price present.
The net profits were lowest with too low reserve prices of 10 percent to 50 percent (with expected revenue of
about 80 percent of Cloister value) and too high reserve prices of 120 percent to 150 percent (yielding about 90
percent of Cloister value, on average). Intermediate reserve prices between 80 percent and 100 percent resulted
in the highest expected revenues, about 100 percent of Cloister value on average.
Other field experiments that test the variation of the public reserve price in richer Internet auction settings find
similar results. Ariely and Simonson (2003) compare high and low minimum prices in high supply (“thick”) and low
supply (“thin”) markets. They report that the positive effect of the reserve price is mediated by competition, that is,
reserve prices are hardly effective when the market is competitive, but do well in increasing revenue when the
market is thin. While the result is well in line with the theoretical prediction in the reserve price competition model
by McAfee (1993), the authors conjecture that bidders are more likely to “anchor” their value estimates on the
starting price when the market is thin and only few comparisons are possible than in highly competitive markets.
Obviously, the anchoring hypothesis requires some form of value uncertainty or bounded rationality on the
bidders’ part. The phenomenon may be connected to the framing effect observed by Hossain and Morgan (2006).
Selling Xbox games and music CDs on eBay, they systematically vary the reserve price and the shipping and
handling fees. For the high-priced Xboxes, they observe that total selling prices (i.e., including shipping fees) are
highest when the reserve price is low and the shipping fee is high. Since they do not observe the same effect for
the low-priced music CDs, they conjecture that the (psychological) effect of reserve prices on bidding behavior is
rather complex and depends on a number of other parameters of the item and the auction environment.
Some Internet auction platforms allow sellers to set their reserve prices secretly rather than openly. In particular, for high-value items, such secret reserve prices seem to be very popular (Bajari and Hortaçsu, 2003). Theoretically, in
a private value setting with risk neutral bidders, hiding a reserve price does not have positive effects on the seller's
auction revenues (Elyakime et al., 1994; Nagareda, 2003). (p. 322) However, secret rather than open reserve
prices may have positive effects if bidders are risk-averse (Li and Tan, 2000), have reference-based preferences
(Rosenkranz and Schmitz, 2007), or are bidding in an affiliated-value auction (Vincent, 1995).
Estimating a structural econometric model with eBay data on coin auctions, Bajari and Hortaçsu (2003) find that an
optimal secret reserve price increases expected revenues by about 1 percent. Katkar and Reiley (2006) find a
contradictory result. They implement a field experiment to compare the effect of secret versus public reserve
prices. They auction 50 pairs of identical Pokemon trading cards on eBay. One card of each pair is auctioned with
a public reserve price at 30 percent of its book value and the other with the required minimum public reserve price
and a secret reserve at 30 percent of the book value. The auctions with secret reserve prices attract fewer
bidders, are less likely to end successfully, and result in a final auction price that is about 9 percent lower in the case of
success. The negative effect of secret reserve prices is confirmed by an empirical study on comic books sales on
eBay. Dewally and Ederington (2004) find that auctions using a secret reserve price (that is known to exist, but is
secret in size) attract fewer bidders and generate less revenue for the seller.
4.3. Shill Bidding
A shill bid is a bid placed by the seller (or a confederate) without being identified as a seller's bid. A shill bid can be
differentiated from other “normal” bids by its purpose, which obviously is not for the seller to buy his own property (especially not, since buying one's own property incurs substantial trading fees on most auction platforms). Shill
bidding, in general, has one of two purposes. On the one hand, a shill bid may be used much as a secret reserve
price, allowing the seller to keep the item, if the bids are not above the seller valuation. On the other hand, shill bids
may be used dynamically to “drive up” (or to “push up”) the final price, by incrementally testing the limit of buyers’
willingness to pay.
While both types of shill bidding are forbidden on most platforms (and legally prohibited by some jurisdictions), the
reasons for prohibiting them are very different. Reserve-price shill bids are prohibited because using them evades the high fees that auction platforms charge for setting proper reserve prices. Dynamic shill bids are
prohibited to protect the buyers’ rent from being exploited by sellers.27
Fully rational bidders, who are informed that sellers can place shill bids, will take shill bidding into account and adjust their
bidding behavior. Depending on the setting, this may or may not result in loss of efficiency and a biased allocation
of rent. Izmalkov (2004), for example, shows that in an independent private value model with an ascending open
auction, the shill bidding equilibrium is similar to Myerson's (1981) optimal auction outcome, which relies on a public
reserve price. For the common-value setting, Chakraborty and Kosmopoulou (2004) demonstrate (p. 323) that
bidders, anticipating shill bidding, drop their bids so low that the sellers would prefer to voluntarily commit to a no-shilling strategy. The intuition underlying both results is simply that strategically acting bidders will adjust their bids
to take the presence of shill bidding into account.
In an experimental study on common-value second-price auctions, Kosmopoulou and de Silva (2007) test whether
buyers react as theoretically predicted when sellers are allowed to place bids, too. Buyers are either informed or
not about the bidding possibility for the seller. The results show that while there is general overparticipation in the
auction and the number of bidders is not affected by the opportunity to shill-bid, the bidders who are aware of the seller's shill-bidding opportunity bid more carefully than the others. This leads to lower prices and positive bidder
payoffs even when sellers use shill bidding.
Identifying shill bidding in the field is difficult, because sellers can register with multiple identities or employ
confederates to hide their shill bids.28 One way of identifying shill bids is to define criteria that are more likely for
shill bids than for normal bids. Using this approach on a large field data set, Kauffman and Wood (2005) estimate
that (if defined very narrowly) about 6 percent of eBay auctions contain shill bids. They also find that using a shill
bid positively correlates with auction revenues. A positive effect on revenue, however, is not reported by Hoppe
and Sadrieh (2007), who run controlled field experiments comparing public reserve prices and reserve price bids
by confederates. Estimating the optimal reserve price from the distribution of bids, they show that the prices in
successful auctions are higher both with estimated optimal public reserve prices and shill bids as compared to the
auctions with the minimum reserve price. The only positive effect of bid shilling that they can identify for the sellers
concerns the possibility to avoid paying fees for public reserve prices.
4.4. Buy-it-Now Offers
A buy-it-now offer is the option provided by the seller to buyers to end the auction immediately at a fixed price.
Such a feature was introduced on yahoo.com in 1999, and eBay and other auction sites followed in 2000 (Lucking-Reiley, 2000a). Buy-it-now offers are popular among sellers: about 40 percent of eBay sellers use the option
(Reynolds and Wooders, 2009). There are two different versions of the buy-it-now feature used on Internet
platforms: either the fixed-price offer disappears as soon as the first auction bid arrives (e.g., on eBay), or it
remains valid until the end of the auction (e.g., on uBid, Bid or Buy, or the Yahoo and Amazon auctions).
At first glance, the popularity of buy-it-now offers is puzzling: the advantage of an auction lies in its ability to elicit
competitive prices from an unknown demand. Posting a buy-it-now offer (especially a permanent one) puts an
upper bound on the price which might be achieved in an auction. One explanation for the existence of buy-it-now
offers is impatience of the economic agents. Obviously, if bidders (p. 324) incur costs of waiting or participation,
then they might prefer to end the auction earlier at a higher price. If sellers are impatient, too, then this will even
reinforce the incentives to set an acceptable buy-it-now offer (Dennehy, 2000).
A second explanation is risk-aversion on the side of the buyer or the seller. If bidders are risk averse, they would
prefer to accept a safe buy-it-now offer rather than to risk losing the auction (Budish and Takeyama, 2001;
Reynolds and Wooders, 2009). This can hamper auction efficiency if bidders have heterogeneous risk preferences
(as more risk-averse bidders with lower values might accept earlier than less risk-averse bidders with higher
values), but not if they have homogenous risk-preferences (Hidvégi, Wang and Whinston, 2006). Mathews and
Katzman (2006) show that while a risk-neutral seller facing risk-neutral bidders would not use a temporary buy-itnow option, a risk-averse seller may prefer selling using the buy-it-now option. The authors furthermore show that
under certain assumption about the bidders’ value distribution, the buy-it-now option may actually be paretoimproving, that is, make both the bidders and the seller better off by lowering the volatility of the item's price.
Empirical studies (e.g., Durham et al., 2004, Song and Baker, 2007, and Hendricks et al., 2008) generally find small
positive effects of the buy-it-now option on the total revenue in eBay auctions. In particular, experienced sellers
are more likely to offer a buy-it-now price and buy-it-now offers from sellers with good reputations are more likely to
be accepted. There, however, are also some conflicting results. For example, Popkowski Leszczyc et al. (2009)
report that the positive effect of buy-it-now prices is moderated by the uncertainty of the market concerning the
distribution of values.
In laboratory experiments, risk-aversion may be a reason for offering or accepting the buy-it-now option, but
impatience can be ruled out, since participants have to stay for a fixed amount of time. Seifert (2006) reports less
bidding, higher seller revenues, and lower price variance when a temporary buy-it-now option is present and there
are more than three bidders in a private value auction. Shahriar and Wooders (2011) study the effect of a
temporary buy-it-now option on seller revenues in both a private and a common value environment. In the former,
revenues are significantly higher and have a lower variance with a buy-it-now option than without, as predicted
theoretically in the presence of risk-averse bidders. For the common value setting, standard theory predicts no effect of the buy-it-now option for either risk-neutral or risk-averse bidders. The authors
provide a “winner's curse” account that can explain the observed (small and insignificant) revenue increase with
the buy-it-now option. Peeters et al. (2007) do not find a strong positive effect of the temporary buy-it-now option in
their private value auctions. But they discover that rejected buy-it-now prices have a significant anchoring effect
on the prices in the bidding phase, with bids rarely exceeding the anchor. Grebe, Ivanova-Stenzel, and Kröger
(2010) conduct a laboratory experiment with eBay members, in which sellers can set temporary buy-it-now prices.
They observe that the more experienced the bidders are, the higher the buy-it-now price is that the sellers choose.
This they attribute to the (p. 325) fact that more experienced bidders generally bid closer to their true values,
thus increasing the auction revenue compared to less experienced bidders. Hence, the more experienced the
bidders, the higher the seller's continuation payoff (i.e., the payoff from the bidding stage) and, thus, the higher the
chosen buy-it-now price.
All in all, the literature on the buy-it-now price is growing steadily, but the results remain at best patchy. It seems
clear that impatience and risk-aversion on both market sides are driving forces behind the widespread usage of buy-it-now
prices. However, there is also some limited evidence that rejected buy-it-now prices may be used as anchors for
the following bidding stage, certainly limiting the bids, but perhaps also driving them towards the limit.
4.5. Bidding Fees and All-Pay Auctions
An interesting recent development in Internet auctions is the emergence of platforms which charge bidding fees.
These auctions share features with all-pay auctions known in the economics literature. Generally, in all-pay
auctions, both winning and losing bidders pay their bids, but only the highest bidder receives the item (see Riley,
1980, Fudenberg and Tirole, 1986, and Amann and Leininger, 1995, 1996, among others, for theoretical
contributions). Similarly, in the auctions conducted on Internet sites like swoopo.com, a bidding fee has to be paid for
each bid, independent of the success of that particular bid.
In particular, swoopo.com auctions start off at a price of $0. Like on eBay, bidders can join the auction at any time
and submit a bid. After each bid, the ending time of the auction is reset, implementing a soft-close mechanism
similar to the one for Amazon auctions discussed in our section on sniping behavior above. The extension time
decreases with the current price, with a final time window of 15 seconds only. There is a prescribed nominal bid
increment of typically a penny ($0.01, which is why these auctions are often called “penny auctions”). However,
there is also a fee connected with the submission of each bid, usually equaling $0.75.29 When a penny auction
ends as the time window elapses without a new bid, the bidder who submitted the last bid increment wins the item
and pays the final price p. But all participating bidders have paid their bidding fees, so the seller (platform)
revenues are equal to the final item price p plus all bidding fees (equaling 75p, since a final price of p implies 100p
one-cent bid increments and thus 100p bidding fees of $0.75 each), which means that revenues can be quite substantial. On the other hand, the final price p is
usually much lower than the average resale price of the item, and auction winners may indeed make substantial
surpluses, in particular if they joined the auction late.
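To make the revenue arithmetic concrete, the following minimal sketch encodes the accounting just described; the increment and fee are the swoopo.com parameters quoted above, while the final price in the example is a hypothetical number.

```python
# Minimal sketch of the revenue accounting in a swoopo-style penny auction.
# Increment and fee follow the text ($0.01 and $0.75); the final price used
# in the example below is hypothetical.

BID_INCREMENT = 0.01   # each bid raises the current price by one cent
BIDDING_FEE = 0.75     # each bid costs a non-refundable fee

def platform_revenue(final_price: float) -> float:
    """Final price paid by the winner plus the fees collected on all bids.

    With a $0.01 increment, a final price p implies p / 0.01 bids were placed,
    so fee income equals 75 * p, as noted in the text.
    """
    number_of_bids = round(final_price / BID_INCREMENT)
    return final_price + number_of_bids * BIDDING_FEE

if __name__ == "__main__":
    final_price = 5.34  # hypothetical: 534 one-cent bids were placed
    print(f"Winner pays:       {final_price:6.2f}")
    print(f"Platform collects: {platform_revenue(final_price):6.2f}")
    # 5.34 + 534 * 0.75 = 405.84: almost all revenue comes from bidding fees.
```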
In the following we focus on two recent papers by Augenblick (2010) and Ockenfels et al. (2010), which combine
theoretical and empirical work on all-pay auctions, both based on data from the platform swoopo.com.30 Both papers
set up theoretical models to analyze the most important features of the specific auction mechanism used by
swoopo.com. These models assume n bidders for whom the (p. 326) item at auction has a common value. The
game is modeled for discrete periods of time and with complete information, with each point in time t representing a
subgame. Ockenfels et al. (2010) compute the pure-strategy subgame-perfect equilibria of the swoopo game with
homogeneous bidders and find that not more than one bid is placed. However, there also exist mixed-strategy
equilibria in which all bidders bid with positive probabilities that decrease over time. Augenblick (2010) focuses his
analysis on the discrete hazard rates in the auction game, that is, the expected probabilities that the game ends in
a given period. He first shows that no bidder will participate once the net value of the item is lower in the next
period than the cost of placing a bid. Then, Augenblick (2010) identifies an equilibrium in which at least some
players strictly use mixed strategies. In both models, bidders are more likely to engage in bidding if they are
subject to a naïve sunk cost fallacy that increases their psychological cost of dropping out the longer they
participate in the auction.
Ockenfels et al. (2010) analyze data of 23,809 swoopo auctions. Augenblick (2010) has collected data on 166,000
swoopo auctions and 13.3 million individual bids. The most obvious and important insight from the field data in both
studies is extensive bidding. On average, each auction attracts 534 and 738 bids, respectively, in the two
datasets. On the one hand, final prices are on average only 16.2 percent of the retail reference price (Ockenfels et
al., 2010). This underlines that auction winners can strike a bargain, provided they do not submit too many bids
themselves. On the other hand, the seller makes substantial profits, with total auction revenues on average
between 25 percent (Ockenfels et al., 2010) and 50 percent (Augenblick, 2010) higher than retail prices.
In a recently emerged variant of the all-pay auction, the winner is not determined by the highest bid, but by
the lowest unique bid submitted in the auction.31 The auction runs for a predetermined period. Each bid in the
auction costs a certain fee, but in contrast to the penny auctions, a bid in the unique lowest bid auction is not a
fixed increment on the current price. Instead, the winning bid is the lowest amongst the unique bids. Bidders may submit
as many bids as they wish.
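As a minimal illustration of the winner determination just described, the following sketch picks the lowest amount that was bid exactly once; the bid amounts in the example are hypothetical.

```python
# Sketch of winner determination in a lowest unique bid auction.
# Bids are amounts in cents; the example bids below are hypothetical.
from collections import Counter

def lowest_unique_bid(bids):
    """Return the lowest amount that was bid exactly once, or None if no bid is unique."""
    counts = Counter(bids)
    unique = [amount for amount, n in counts.items() if n == 1]
    return min(unique) if unique else None

if __name__ == "__main__":
    # all bids submitted in the auction (bidders may submit several costly bids)
    bids = [1, 1, 2, 3, 3, 3, 4, 7]    # in cents
    print(lowest_unique_bid(bids))      # 2: the amounts 1 and 3 are not unique
```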
Gallice (2009) models the unique lowest bid auction as an extensive form game with two or more bidders, a seller,
and a fixed duration in which the buyers can submit their costly bids. Gallice (2009) points out that the game does
not yield positive profits for the seller when bidders are rational. But when bidders are myopic, the unique bid
auction is profitable, because—as in all other all-pay auctions—the auction winner and the seller can both earn
substantially by splitting the contributions of the losing bidders.
Eichberger and Vinogradov (2008) also present a model of unique lowest bid auctions that allows for multiple bids
from each individual. Unlike the model presented by Gallice (2009), they model the unique bid auction as a one-shot game, in which each bidder submits a strategy specifying a set of bids. Pure strategy equilibria only exist in
this game if the cost of bidding is very high relative to the item's value. These equilibria are asymmetric, with only
one player submitting a positive bid. For the more realistic case of moderate or low cost of bidding, Eichberger and
Vinogradov (2008) derive the mixed strategy equilibria of the game with a restricted strategy space. Allowing
bidders only to choose strategies that specify bids on “all values up to x,” they show (p. 327) that in the unique
symmetric mixed strategy equilibrium of the restricted game, the probability of choosing a strategy decreases both
in the number of bidders and in x for all values of x greater than zero.32 Comparing their theoretical results to the
data from several unique bid auctions in the United Kingdom and Germany, Eichberger and Vinogradov (2008) find
that the downward slope of the distribution of observed bids is in line with their equilibrium predictions, but the
observed distributions can only be accounted for by assuming bidder heterogeneity.33
In two related studies, Rapoport et al. (2009) and Raviv and Virag (2009) study one-shot unique bid auctions in
which the bidders can only place a single bid each. Rapoport et al. (2009) study both lowest and highest unique
bid auctions, while Raviv and Virag (2009) study a highest unique bid auction that differs from the others in that
only the winner pays her own bid, while all losers pay a fixed entry fee. Despite the many subtle differences across
models and across theoretical approaches, the equilibria in these one-shot single-bid games are very similar to
those of the multiple-bid auctions discussed above. As before, the symmetric equilibria are in mixed strategies with
probabilities that monotonically decrease in bids for the lowest—and increase for the highest—unique bid auction.
The empirical findings of Raviv and Virag (2009) are well in line with the general structure of the predicted
equilibrium probabilities.
Unique bid and other all-pay auctions seem to be a quickly growing phenomenon in Internet auctions. It appears,
however, that these institutions are not primarily designed to allocate scarce resources efficiently (theory suggests
they do not). Rather, the suspense of gambling and other recreational aspects seem to dominate the auction
design considerations. Such motivational aspects of participation are not yet well understood and require more
research.
4. Multi-Unit Auctions
From the perspective of auction theory, as long as each bidder only demands a single unit, the theoretical results
obtained for the single-unit auctions are easily generalized to the multi-unit case. Multi-unit demand, however,
typically increases the strategic and computational complexities and often leads to market power problems.
4.1. Pricing Rules with Single-Unit Demand
For multi-unit auctions we can distinguish four standard mechanisms analogous to the single-unit case. Items can
be sold in an open auction with an increasing price that stops as soon as the number of bids left in the auction
matches the number of items offered (i.e., as soon as demand equals supply). Items can be sold in a decreasing
price auction, where a price clock ticks down and bidders may buy units (p. 328) at the respective current price
as long as there are units on supply. A “pay-as-bid” sealed-bid auction can be employed, in which each
successful bidder pays his own bid for the unit demanded. Finally, items can be offered in a uniform-price sealed-bid auction, where all successful bidders pay the same price per unit.
As long as each bidder only demands one unit, multi-unit auction theory typically predicts (under some standard
assumptions) that the auction price equals the (expected) n+1th-highest value across bidders, with the n bidders
with the highest values winning the auction. The reason is that in an auction where each winner has to pay his bid
(“pay-as-bid” sealed-bid or decreasing clock), winners would regret having bid more than necessary to win, that
is, having bid more than one increment above the highest losing bid. Thus, game theory predicts that bidders
submit their bids such that the price they will have to pay does not exceed the expected n+1th-highest bid. Since
this is true no matter in which auction mechanism they are bidding, the seller on average will obtain the same
revenues in all formats.34
The multi-unit price rule most commonly implemented on Internet platforms like eBay states that the highest n
bidders win and pay a price equal to the nth highest bid. In other words, the lowest winning bid determines the
price. Kittsteiner and Ockenfels (2008) argue that, because each bidder may turn out to be the lowest winning bidder,
bidders strategically discount their bids in this auction, inducing risks and potential efficiency losses. A more
straightforward generalization of eBay's hybrid single-unit auction format is to set the price equal to the n+1th-highest bid, that is, equal to the highest losing bid. This pricing rule has the advantage that it reduces the variance
in revenues, increases the number of efficient auction allocations, and makes bidding one's own value a (weakly)
dominant strategy. Cramton et al. (2009) experimentally study both pricing rules and find that the nth-bid pricing
rule yields higher revenues than the n+1th-bid rule. However, in a dynamic auction like eBay, the last bidder who
places a bid will be able to influence the price, and therefore strategize. Based on such arguments, eBay changed
its multi-unit design from an nth- to an n+1th-highest-bid pricing rule, before it dropped the multi-unit auction
platform altogether (Kittsteiner and Ockenfels, 2008).35
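For concreteness, the following sketch contrasts the two uniform pricing rules just discussed, the nth-highest-bid (lowest winning bid) rule and the n+1th-highest-bid (highest losing bid) rule; the bids in the example are hypothetical.

```python
# Sketch of the two uniform pricing rules for a multi-unit auction with
# single-unit demand. The bids below are hypothetical.

def uniform_price_outcome(bids, n_units, rule="n+1"):
    """Highest n_units bidders win; all pay the same per-unit price.

    rule = "n"   -> price equals the nth-highest bid (lowest winning bid)
    rule = "n+1" -> price equals the n+1th-highest bid (highest losing bid)
    """
    ranked = sorted(bids, reverse=True)
    winners = ranked[:n_units]
    if rule == "n":
        price = ranked[n_units - 1]
    else:
        price = ranked[n_units] if len(ranked) > n_units else 0
    return winners, price

if __name__ == "__main__":
    bids = [90, 70, 55, 40, 10]
    for rule in ("n", "n+1"):
        winners, price = uniform_price_outcome(bids, n_units=3, rule=rule)
        print(rule, winners, price)
    # nth-bid rule:   winners [90, 70, 55] pay 55 (the lowest winning bid)
    # n+1th-bid rule: winners [90, 70, 55] pay 40 (the highest losing bid)
```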
4.2. Ad Auctions
Among the largest multi-unit auctions on the Internet are the ones used by search engines to sell keyword-related
advertisements in search results, such as Google's “AdWords Select” program. Edelman et al. (2007) and Varian
(2007, 2009) investigate this specific type of position auction. While in the early days of the Internet, advertisement
space was sold in bilateral markets, the industry was revolutionized when GoTo.com (later renamed Overture)
started using large-scale multi-unit auctions to sell search word ads in 1997. In 2002 Google followed suit, but
replaced Overture's simple highest-bid-wins format by an auction that combined the per-click price and the
expected traffic (originally measured by the “PageRank” (p. 329) method that was patented by Google) into an
aggregate average-per-period-income bid.36 This rather complex mechanism has become even more complicated
over time, with one key element of the current system being the “Quality Score,” which combines a number of
criteria including historic click-through-rates and the “page quality” (which is basically an enhanced PageRank).
Google keeps the exact calculations of the measures secret in order to reduce strategic adaptation and fraud
(Varian, 2008).
The “Generalized Second Price” (GSP) auction is the original position auction format that was first used by
Overture. Basically, whenever an Internet user submits a search query, the search engine parses the search
phrase for keywords and conducts a quick auction for advertisement positions on the results page, using
presubmitted bids from advertisers. The bidder with the highest bid (per user click) receives the first (most popular)
position, the second-highest bidder the next position, and so forth. Under the rules of the Generalized Second Price
auction, each bidder only pays the bid of the next-highest bidder (plus one increment). Thus, the highest bidder
only pays the second-highest bid, the second-highest bidder pays the third-highest bid, etc., until the bidder
receiving the last advertisement position slot pays the highest unfulfilled bid.37 With only one advertisement slot
this pricing rule corresponds to the regular single-unit second-price auction.
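A stylized sketch of the GSP pricing rule just described is given below; it ignores quality scores and the one-increment addition, and the per-click bids in the example are hypothetical.

```python
# Stylized sketch of the Generalized Second Price (GSP) rule: slots are awarded
# in order of the per-click bids, and each winner pays the next-highest bid per
# click. Quality scores and the minimum increment are ignored; bids are hypothetical.

def gsp(bids, n_slots):
    """Return (bidder, slot, per-click payment) triples for the slot winners."""
    ranking = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    outcome = []
    for slot in range(min(n_slots, len(ranking) - 1)):
        bidder, _ = ranking[slot]
        next_bid = ranking[slot + 1][1]      # bid of the bidder one position below
        outcome.append((bidder, slot + 1, next_bid))
    return outcome

if __name__ == "__main__":
    bids = {"A": 2.00, "B": 1.50, "C": 0.80, "D": 0.30}  # dollars per click
    print(gsp(bids, n_slots=3))
    # A gets slot 1 and pays 1.50 per click, B pays 0.80, C pays 0.30 (D loses).
```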
At first glance, the popularity of the GSP may seem surprising given that another mechanism exists, the Vickrey-Clarke-Groves (VCG) mechanism, which has a unique equilibrium in (weakly) dominant strategies. In the VCG
mechanism, each bidder pays the externality imposed on the other bidders by occupying a specific advertisement
slot. That is, each bidder pays the difference between the total value all other bidders would have received had the
bidder not participated and the total value all other bidders receive in the current allocation. In ad
position auctions, a bidder's participation shifts all subsequent bidders one slot lower in the list. Thus, in the VCG,
each bidder's payment equals the total value lost by the bidders who are shifted one slot down by his participation.
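The externality logic of the VCG payments can be sketched as follows, assuming that a slot's value to a bidder equals the slot's click-through rate times the bidder's per-click value and that bidders are ranked efficiently; the click-through rates and values in the example are hypothetical.

```python
# Sketch of VCG payments in a position (ad slot) auction. Bidders are ranked by
# per-click value; each winner pays the total click value the bidders below would
# gain if that winner were absent (the externality imposed on others).
# Click-through rates and per-click values below are hypothetical.

def vcg_payments(values, ctrs):
    """values: per-click values, highest first; ctrs: slot click-through rates, highest first.
    Returns the total VCG payment of each slot winner."""
    n_slots = len(ctrs)
    extended_ctrs = list(ctrs) + [0.0]            # "slot" below the last one gets no clicks
    payments = []
    for i in range(min(n_slots, len(values))):    # bidder occupying slot i
        pay = 0.0
        for k in range(i, n_slots):
            # bidder originally in slot k+1 would move up to slot k if bidder i were absent
            displaced_value = values[k + 1] if k + 1 < len(values) else 0.0
            pay += (extended_ctrs[k] - extended_ctrs[k + 1]) * displaced_value
        payments.append(pay)
    return payments

if __name__ == "__main__":
    values = [2.00, 1.50, 0.80, 0.30]   # per-click values, highest first
    ctrs = [10, 6, 3]                   # expected clicks of slots 1..3
    print([round(p, 2) for p in vcg_payments(values, ctrs)])   # [9.3, 3.3, 0.9]
```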
In contrast, the GSP auction has multiple equilibria and no unique equilibrium in dominant strategies. To derive a
useful prediction for the GSP auction, Edelman et al. (2007) and Varian (2007) introduce the notion of “locally
envy-free” Nash equilibria and seek the rest-point in which bidder behavior stabilizes in infinitely repeated GSP
auctions. In such an equilibrium, no bidder has an incentive to swap bids with the bidder one position above or
below. Both theoretical papers demonstrate the existence of a “locally envy-free Nash equilibrium” in the GSP
auction, in which the position allocation, the bidders' payments, and the auction revenues correspond to those of the
dominant-strategy equilibrium of the VCG mechanism. This equilibrium is the most disadvantageous amongst all locally
envy-free equilibria for the seller, but it is the best for the buyers. Edelman et al. (2007) furthermore show that this
equilibrium corresponds to the unique perfect Bayesian equilibrium when the GSP auction is modeled as an English
auction instead.
In sum, multiple equilibria that do not impose truth-telling exist in GSP auctions. Some of these equilibria, however,
are locally envy-free. One of these is particularly (p. 330) attractive to bidders, as their payoffs in the equilibrium
coincide with those in the dominant-strategy equilibrium of the VCG mechanism and the unique perfect equilibrium
of a corresponding English auction. Moreover, all other locally envy-free equilibria of the GSP lead to lower profits
for bidders and higher revenue for the auctioneer. Edelman et al. (2007) speculate that the complexity of VCG and
the fact that a VCG potentially leads to lower revenues than the GSP, if advertisers fail to adapt simultaneously to
such a change, are reasons for the empirical success of GSP (see also Milgrom, 2004, for a critique of the VCG
mechanism in another context). Varian (2007) provides some empirical evidence that the prices observed in
Google's ad auctions are in line with the locally envy-free Nash equilibria. Using the same underlying equilibrium
model, Varian (2009) finds a value/price relation of about 2–2.3 in ad auctions, while (non-truthful) bids are only
about 25 percent to 30 percent larger than prices.
Fukuda et al. (2010) compare the GSP auction and the VCG mechanism in a laboratory experiment. They implement
ad auctions with 5 advertisers and 5 ad slots. Contrary to the assumptions of the underlying theory discussed
above, bidders are not informed about the values of the others. In order to test the robustness of results, two different
valuation distributions (a “big keyword” and a “normal keyword” distribution) are implemented that differ in the
expected revenues for the advertisers. Click-through-rates of the different ad positions were held constant across
conditions. The authors find that revenues in both mechanisms are close to the lower bound of the locally envy-free equilibrium discussed above, but slightly higher in the GSP treatment than in VCG. More equilibrium behavior
and higher allocative efficiency are observed in the VCG treatment, and, moreover, both of those measures increase
over time, indicating learning and convergence.
4.3. Multi-Unit Demand
So far, we have assumed that each bidder only demands a single unit. If bidders demand more than one unit, their
strategy space becomes more complex and the theoretical results of revenue equivalence no longer hold.
Consider a case with 2 bidders, where A demands only one unit and B demands two. If B wants to buy both units,
the price per unit that B must pay must be greater than A's value (so as to keep A out of the market). But if B engages
in “demand reduction” and only asks for one unit, both units will sell at a price of zero. Demand reduction
incentives (or supply-reduction in reverse auctions) exist in all multi-unit auctions. The specific auction format,
however, may affect the extent to which demand reduction and collusion amongst bidders is successful (see, e.g.,
Ausubel and Cramton, 2004, Milgrom, 2004, and the references cited therein).
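A small numerical sketch of the two-unit example just described, assuming a uniform price equal to the highest losing bid and truthful bidding on the demanded quantity, is given below; the values are hypothetical.

```python
# Sketch of the demand-reduction incentive in a two-unit uniform-price auction
# where the price equals the highest losing bid. Bidder A wants one unit,
# bidder B wants two. The per-unit values below are hypothetical.

VALUE_A = 8.0    # A's value for one unit
VALUE_B = 10.0   # B's value per unit (for up to two units)

def payoff_B(units_demanded_by_B: int) -> float:
    """B's payoff when bidding truthfully on the chosen number of units."""
    if units_demanded_by_B == 2:
        # Bids are [10, 10, 8]; B wins both units, A is the highest losing bidder,
        # so the uniform price per unit equals A's bid.
        price = VALUE_A
        return 2 * (VALUE_B - price)
    else:
        # Bids are [10, 8] for two units; nobody is excluded, so the price is zero.
        return VALUE_B - 0.0

if __name__ == "__main__":
    print("B demands two units:", payoff_B(2))   # 2 * (10 - 8) = 4
    print("B demands one unit: ", payoff_B(1))   # 10 - 0 = 10  -> demand reduction pays
```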
One relatively simple example of multi-unit auctions with multi-unit demand, among many others, is auctions
conducted by governments to sell carbon permits. Many of these auctions are conducted via the Internet,
increasing transparency, reducing transaction costs, and allowing a broad spectrum of companies (p. 331) to
participate. Yet even in such simple environments, there are many important design choices to be made, including
pricing rules, information feedback, timing of markets, product design etc. (see Ockenfels, 2009, and the
references therein).
In the context of the Regional Greenhouse Gas Initiative (RGGI) in some eastern states of the U.S., Holt et al. (2007)
test the performance of a sealed-bid and a clock auction format in a one-vintage multi-unit auction experiment.38
They find no difference in the allocative efficiency of the auctions. However, in an extension of the original
experiment allowing for communication between bidders, Burtraw et al. (2009) observe more bidder collusion and
therefore lower seller revenues in a clock auction, in which, unlike in the sealed-bid auction, bidders can update
their information about the bidding behavior of others during the course of the auction. Mougeot et al. (2009) show
that the existence of speculators in the market is effective in curbing collusion, but in turn may hamper the
efficiency of the allocation.
Porter et al. (2009) run a two-vintage multi-unit auction experiment in the context of the Virginia NOx auctions. The
authors find more efficient allocations and higher seller revenues with a clock auction when permit demand is very
elastic. Betz et al. (2011) compare the sequential and simultaneous auctioning of two vintages of multiple units of
permits under three different auction designs: a sealed-bid auction and a clock auction with or without demand
revelation. Consistent with most previous results, auction formats did not differ significantly with respect to
allocative efficiency and seller revenues. An unexpected finding of the experiment is that sequentially auctioning
the two vintages yields more efficient allocations, higher auction revenues, and better price signals than auctioning
the two vintages simultaneously. While this result stands in contrast with the notion that in general simultaneous
allocation procedures allow for higher allocative efficiency than sequential ones, it is in line with a tradeoff between
auction complexity and efficiency assumed by many practical auction designers (see, e.g., Milgrom, 2004). Further
empirical research is needed to test whether this specific result is robust and generalizes from the lab to real-world
auctions.39
Another important complication in multi-unit (and multi-object) auctions arises when there are complementarities
between the items auctioned. For example, a buyer might wish to purchase either four or two tires, but neither
three nor just one. The uncertainty about the final allocation and potential “exposure” creates strategic incentives
for bidders to distort their bids. A possible solution is to elicit bids over all possible packages, thereby allowing
bidders to express the complementarities in their bundle values. However, the bidding procedures for such
auctions are not only complex, but also pose many strategic difficulties. While many of these issues can be and
are addressed, surveying the relevant literature would go beyond the scope of this chapter. Fortunately, there is a
large literature dealing with these complexities from a theoretical, empirical, experimental and engineering
perspective. See Bichler (2011) de Vries and Vohra (2003) and Cramton et al. (2006) for excellent overviews.
Katok and Roth (2004), among others, give an insight on some of the laboratory evidence in the context of Internet
auctions.
(p. 332) 5. Conclusions
This chapter gives a selective overview of theoretical, empirical, and experimental research on Internet auctions.
This includes behavioral issues such as overbidding, auction fever, the winner's curse, and sniping. From the
seller's perspective we review the effects of setting open or secret reserves, using shill-bidding strategies, and
offering buy-it-now options. We also discuss the recent emergence of auction platforms that charge bidding fees
and perform unique bid auctions. Finally, we also mention challenges that arise if multiple items are sold in a single
auction.
One conclusion from this research is that a comprehensive understanding of bidding and outcomes requires a
good understanding of both institutional and behavioral complexities, and how they affect each other. This, in turn,
requires a broad toolkit that includes economic theory and field data analyses, as well as behavioral economics,
psychology, and operations research. It is the complementary evidence from different perspectives, collected
under different degrees of control, that allows us to derive robust knowledge on Internet auctions. And it is this
knowledge that can make a difference in the real world. In fact, the emergence of Internet-specific auction formats,
like advertisement position auctions, proxy bidding, and penny auctions, initiated a whole new research literature
that already feeds back into the development and design of these auctions. Research on Internet auctions is one of
those fields in economics with a direct and mutually profitable link between academic research and industry
application.
References
Amann, E., Leininger, W., 1995. Expected Revenue of All-Pay and First-Price Auctions with Affiliated Signals. Journal
of Economics 61(3), pp. 273–279.
Amann, E., Leininger, W., 1996. Asymmetric All-Pay Auctions with Incomplete Information: The Two-Player Case.
Games and Economic Behavior 14(1), pp. 1–18.
Ambrus, A., Burns, J., 2010. Gradual Bidding in eBay-like Auctions. Working Paper, Harvard University.
Anwar, S., McMillan, R., Zheng, M., 2006. Bidding Behavior in Competing Auctions: Evidence from eBay. European
Economic Review 50(2), pp. 307–322.
Ariely, D., Ockenfels, A., Roth, A., 2005. An Experimental Analysis of Ending Rules in Internet Auctions. The RAND
Journal of Economics 36(4), pp. 890–907.
Ariely, D., Simonson, I., 2003. Buying, Bidding, Playing, or Competing? Value Assessment and Decision Dynamics in
Online Auctions. Journal of Consumer Psychology 13(1–2), pp.113–123.
Augenblick, N., 2010. Consumer and Producer Behavior in the Market for Penny Auctions: A Theoretical and
Empirical Analysis. Working Paper, Haas School of Business, UC Berkeley.
Ausubel, L.M., Cramton, P., 2004. Auctioning Many Divisible Goods. Journal of the European Economic Association
2, pp. 480–493.
Bajari, P., Hortaçsu, A., 2003. Winner's Curse, Reserve Prices, and Endogenous Entry: Empirical Insights from eBay
Auctions. The RAND Journal of Economics 34(2), pp. 329–355.
Bajari, P., Hortaçsu, A., 2004. Economic Insights from Internet Auctions. Journal of Economic Literature 42(2), pp.
457–486. (p. 336)
Bapna, R., Goes, P., Gupta, A., 2003. Analysis and Design of Business-to-Consumer Online Auctions. Management
Science 49(1), pp. 85–101.
Barbaro, S., Bracht, B., 2006. Shilling, Squeezing, Sniping: Explaining late bidding in online second-price auctions.
Working Paper, University of Mainz.
Baye, M.R., Morgan, J., 2004. Price Dispersion in the Lab and on the Internet: Theory and Evidence. The RAND
Journal of Economics 35(3), pp. 449–466.
Bazerman, M. H., Samuelson, W. F., 1983. I Won the Auction but Don’t Want the Prize. Journal of Conflict Resolution
27(4), pp. 618–634.
Betz, R., Greiner, B., Schweitzer, S., Seifert, S., 2011. Auction Format and Auction Sequence in Multi-item Multi-unit
Auctions—An experimental comparison. Working Paper, University of New South Wales.
Bichler, M., 2011. Combinatorial Auctions: Complexity and Algorithms. In: Cochran, J.J. (ed.), Wiley Encyclopedia of
Operations Research and Management Science. John Wiley & Sons Ltd, Hoboken, N.J.
Bolton, G.E., Ockenfels, A., 2000. ERC: A Theory of Equity, Reciprocity, and Competition. American Economic
Review 90(1), pp. 166–193.
Bolton, G. E., Ockenfels, A., 2010. Does Laboratory Trading Mirror Behavior in Real World Markets? Fair Bargaining
and Competitive Bidding on eBay. Working Paper, University of Cologne.
Borle, S., Boatwright, P., Kadane, J. B., 2006. The Timing of Bid Placement and Extent of Multiple Bidding: An
Empirical Investigation Using eBay Online Auctions. Statistical Science 21(2), pp. 194–205.
Brynjolfsson, E., Smith, M. D., 2000. Frictionless Commerce? A Comparison of Internet and Conventional Retailers.
Management Science 46(4), pp. 563–585.
Budish, E. B., Takeyama, L. N., 2001. Buy Prices in Online Auctions: Irrationality on the Internet? Economic Letters,
72(3), pp. 325–333.
Burtraw, D., Goeree, J., Holt, C., Myers, E., Palmer, K., Shobe, W., 2009. Collusion in Auctions for Emission Permits.
Journal of Policy Analysis and Management 28(4), pp. 672–691.
Casari, M., Ham, J. C., Kagel, J. H., 2007. Selection Bias, Demographic Effects and Ability Effects in Common Value
Auction Experiments. American Economic Review 97(4), pp.1278–1304.
Cassady, R., 1967. Auctions and Auctioneering. University of California Press, Berkeley.
Chakraborty, I., Kosmopoulou, G., 2004. Auctions with shill bidding. Economic Theory 24(2), pp. 271–287.
Charness, G., Levin, D., 2009. The Origin of the Winner's Curse: A Laboratory Study. American Economic Journal:
Microeconomics 1(1), pp. 207–236.
Cooper, D. J., Fang, H., 2008. Understanding Overbidding In Second Price Auctions: An Experimental Study.
Economic Journal 118, pp. 1572–1595.
Cox, J. C., Roberson, B., Smith, V. L., 1982. Theory and Behavior of Single Object Auctions. In: V. L. Smith (ed.),
Research in Experimental Economics, Vol. 2. JAI Press, Greenwich, pp. 1–43.
Cox, J. C., Smith, V. L., Walker, J. M., 1985. Experimental Development of Sealed-Bid Auction Theory; Calibrating
Controls for Risk Aversion. American Economic Review 75(2), pp. 160–165.
Cramton, P., Ausubel, L. M., Filiz-Ozbay, E., Higgins, N., Ozbay, E., Stocking, A., 2009. Common-Value Auctions with
Liquidity Needs: An Experimental Test of a Troubled Assets Reverse Auction. Working Paper, University of
Maryland. (p. 337)
Cramton, P., Shoham, Y., Steinberg, R. (eds.), 2006. Combinatorial Auctions. Cambridge and London: MIT Press.
de Vries, S., Vohra, R., 2003. Combinatorial Auctions: A Survey. INFORMS Journal on Computing 15(3), pp. 284–309.
Dennehy, M., 2000. eBay adds “buy-it-now” feature. AuctionWatch.com.
Dewally, M., Ederington, L. H., 2004. What Attracts Bidders to Online Auctions and What is Their Incremental Price
Impact? Working Paper.
Duffy, J., Ünver, M. U., 2008. Internet Auctions with Artificial Adaptive Agents: A Study on Market Design. Journal of
Economic Behavior & Organization 67(2), pp. 394–417.
Durham, Y., Roelofs, M. R., Standifird, S. S., 2004. eBay's Buy-It-Now Function: Who, When, and How. Topics in
Economic Analysis and Policy 4(1), Article 28.
Edelman, B., Ostrovsky, M., Schwarz, M., 2007. Internet Advertising and the Generalized Second-Price Auction:
Selling Billions of Dollars Worth of Keywords. American Economic Review 97(1), pp. 242–259.
Eichberger, J., Vinogradov, D., 2008. Least Unmatched Price Auctions: A First Approach. Working Paper 0471,
University of Heidelberg.
Ely, J. C., Hossain, T., 2009. Sniping and Squatting in Auction Markets. American Economic Journal: Microeconomics
1(2), pp. 68–94.
Elyakime, B., Laffont, J. J., Loisel, P., Vuong, Q., 1994. First-Price Sealed-Bid Auctions with Secret Reservation Prices.
Annales d’Économie et de Statistique 34, pp. 115–141.
Engelberg, J., Williams, J. , 2009. eBay's Proxy System: A License to Shill. Journal of Economic Behavior and
Organization 72(1), pp. 509–526.
Engelbrecht-Wiggans, R., Katok, E., 2007. Regret in Auctions: Theory and Evidence. Economic Theory 33(1), pp.
81–101.
Engelbrecht-Wiggans, R., Katok, E., 2009. A Direct Test of Risk Aversion and Regret in First Price Sealed-Bid
Auctions. Decision Analysis 6(2), pp. 75–86.
Fehr, E., Schmidt, K. M., 1999. A Theory Of Fairness, Competition, and Cooperation. The Quarterly Journal of
Economics 114(3), pp. 817–868.
Filiz-Ozbay, E., Ozbay, E. Y., 2007. Auctions with Anticipated Regret: Theory and Experiment. American Economic
Review 97(4), pp. 1407–1418.
Fudenberg, D., Tirole, J., 1986. A Theory of Exit in Duopoly. Econometrica 54(4), pp. 943–960.
Fukuda, E., Kamijo, Y., Takeuchi, A., Masui, M., Funaki, Y., 2010. Theoretical and experimental investigation of
performance of keyword auction mechanisms. GLOPE II Working Paper No. 33.
Füllbrunn, S., Sadrieh, A., 2012. Sudden Termination Auctions—An Experimental Study. Journal of Economics and
Management Strategy 21(2), pp. 519–540.
Gallice, A., 2009. Lowest Unique Bid Auctions with Signals, Working Paper, Carlo Alberto Notebooks 112.
Garratt, R., Walker, M., Wooders, J., 2008. Behavior in Second-Price Auctions by Highly Experienced eBay Buyers
and Sellers. Working Paper, University of Arizona.
Goeree, J., Offerman, T., 2003. Competitive Bidding in Auctions with Private and Common Values. Economic Journal
113, pp. 598–613.
Gray, S., Reiley, D., 2007. Measuring the Benefits to Sniping on eBay: Evidence from a Field Experiment. Working
Paper, University of Arizona.
Grebe, T., Ivanova-Stenzel, R., Kröger, S., 2010. Buy-It-Now Prices in eBay Auctions—The Field in the Lab. Working
Paper 294, SFB/TR 15 Governance and the Efficiency of Economic Systems. (p. 338)
Greiner, B., Ockenfels, A., 2012. Bidding in Multi-Unit eBay Auctions: A Controlled Field Experiment. Mimeo.
Grosskopf, B., Bereby-Meyer, Y., Bazerman, M., 2007. On the Robustness of the Winner's Curse Phenomenon.
Theory and Decision 63(4), pp. 389–418.
Haile, P. A., 2000. Partial Pooling at the Reserve Price in Auctions with Resale Opportunities. Games and Economic
Behavior 33(2), pp. 231–248.
Harrison, G. W., 1990. Risk Attitudes in First-Price Auction Experiments: A Bayesian Analysis. Review of Economics
and Statistics 72(3), pp. 541–546.
Hendricks, K., Onur, I., Wiseman, T., 2008. Last-Minute Bidding in Sequential Auctions with Unobserved, Stochastic
Entry. Working Paper, University of Texas at Austin.
Heyman, J. E., Orhun, Y., Ariely, D., 2004. Auction Fever: The Effects of Opponents and Quasi-Endowment on
Product Valuations. Journal of Interactive Marketing 18(4), pp. 7–21.
Hidvégi, Z., Wang, W., Whinston, A. B., 2006. Buy-Price English Auction. Journal of Economic Theory 129(1), pp.
31–56.
Holt, C., Shobe, W., Burtraw, D., Palmer, K., Goeree, J., 2007. Auction Design for Selling CO2 Emission Allowances
Under the Regional Greenhouse Gas Initiative. Final Report. Resources for the Future (RFF).
Hoppe, T., 2008. An Experimental Analysis of Parallel Multiple Auctions. FEMM Working Paper 08031, University of
Magdeburg.
Hoppe, T., Sadrieh, A., 2007. An Experimental Assessment of Confederate Reserve Price Bids in Online Auction.
FEMM Working Paper 07011, University of Magdeburg.
Hossain, T. 2008. Learning by bidding. RAND Journal of Economics 39(2), pp. 509–529.
Hossain, T., Morgan, J., 2006. Plus Shipping and Handling: Revenue (Non)Equivalence in Field Experiments on
eBay. Advances in Economic Analysis & Policy 6(2), Article 3.
Houser, D., Wooders, J., 2005. Hard and Soft Closes: A Field Experiment on Auction Closing Rules. In: R. Zwick and
A. Rapoport (eds.), Experimental Business Research. Kluwer Academic Publishers, pp. 123–131.
Isaac, R.M., Walker, J.M., 1985. Information and conspiracy in sealed bid auctions. Journal of Economic Behavior &
Organization 6, pp. 139–159.
Izmalkov, S., 2004. Shill bidding and optimal auctions. Working Paper, Massachusetts Institute of Technology.
Jin, G. Z., Kato, A., 2006. Price, Quality, and Reputation: Evidence from an Online Field Experiment. The RAND
Journal of Economics 37(4), pp. 983–1004.
Kagel, J., 1995. Auctions: A Survey of Experimental Research. In: J. Kagel and A. Roth (eds.), The Handbook of
Experimental Economics. Princeton University Press, Princeton, pp.501–585.
Kagel, J., Harstad, R., Levin, D., 1987. Information Impact and Allocation Rules in Auctions with Affiliated Private
Values: A Laboratory Study. Econometrica 55, pp. 1275–1304.
Kagel, J., Levin, D., 1986. The Winner's Curse and Public Information in Common Value Auctions. American
Economic Review 76(5), pp. 894–920.
Kagel, J., Levin, D., 1993. Independent Private Value Auctions: Bidder Behavior in First-, Second- and Third Price
Auctions with Varying Numbers of Bidders. Economic Journal 103(419), pp. 868–879. (p. 339)
Kagel, J., Levin, D., 2002. Common Value Auctions and the Winner's Curse. Princeton University Press, Princeton.
Katkar, R., Reiley, D. H., 2006. Public Versus Secret Reserve Prices in eBay Auctions: Results from a Pokémon Field
Experiment. Advances in Economic Analysis and Policy, Volume 6(2), Article 7.
Katok, E., Kwasnica, A. M., 2008. Time is Money: The Effect of Clock Speed on Seller's Revenue in Dutch Auctions.
Experimental Economics 11(4), pp. 344–357.
Katok, E., Roth, A.E., 2004. Auctions of Homogeneous Goods with Increasing Returns: Experimental Comparison of
Alternative “Dutch” Auctions. Management Science 50(8), pp.1044–1063.
Kauffman, R. J., Wood, C. A., 2005. The Effects of Shilling on Financial Bid Prices in Online Auctions. Electronic
Commerce Research and Applications 4(1), pp. 21–34.
Kirchkamp, O., Reiß, J. P., Sadrieh, A., 2006. A pure variation of risk in first-price auctions. FEMM Working Paper
06026, University of Magdeburg
Kittsteiner, T., Ockenfels, A., 2008. On the Design of Simple Multi-Unit Online Auctions. In: H. Gimpel, N. R. Jennings,
G. E. Kersten, A. Ockenfels, C. Weinhardt (eds.), Lecture Notes in Business Information Processing. Springer, Berlin,
pp. 68–71.
Klemperer, P., 1999. Auction Theory: A Guide to the Literature. Journal of Economic Surveys 13, pp. 227–286.
Kosmopoulou, G., de Silva, D. G., 2007. The effect of shill bidding upon prices: Experimental evidence. International
Journal of Industrial Organization 25(2), pp. 291–313.
Krishna, V., 2002. Auction Theory. Academic Press, New York.
Ku, G., Malhotra, D., Murnighan, J. K., 2005. Towards a Competitive Arousal Model of Decision-Making: A Study of
Auction Fever in Live and Internet Auctions. Organizational Behavior and Human Decision Processes 96(2), pp. 89–
103.
Lee, Y. H., Malmendier, U., 2011. The Bidder's Curse. American Economic Review 101(2), pp. 749–787.
Levin, D., Smith, J. L. , 1996. Optimal Reservation Prices in Auctions. Economic Journal 106(438), pp. 1271–1282.
Li, H., Tan, G., 2000. Hidden Reserve Prices with Risk-Averse Bidders. Working Paper, University of British
Columbia.
Lind, B., Plott, C. R., 1991. The Winner's Curse: Experiments with Buyers and with Sellers. American Economic
Review 81(1), pp. 335–346.
Lucking-Reiley, D., 1999. Using Field Experiments to Test Equivalence Between Auction Formats: Magic on the
Internet. American Economic Review 89(5), pp. 1063–1080.
Lucking-Reiley, D., 2000a. Auctions on the Internet: What's Being Auctioned, and How? Journal of Industrial
Economics 48(3), pp. 227–252.
Lucking-Reiley, D., 2000b. Vickrey Auctions in Practice: From Nineteenth-Century Philately to Twenty-First-Century E-Commerce. Journal of Economic Perspectives 14(3), pp. 183–193.
Lucking-Reiley, D., Bryan, D., Prasad, N., Reeves, D., 2007. Pennies from eBay: The Determinants of Price in Online
Auctions. Journal of Industrial Economics 55(2), pp. 223–233.
Maskin, E., Riley, J. G., 1984. Optimal Auctions with Risk Averse Buyers. Econometrica 52(6), pp. 1473–1518. (p.
340)
Mathews, T., Katzman, B., 2006. The Role of Varying Risk Attitudes in an Auction with a Buyout Option. Economic
Theory 27(3), pp. 597–613.
Matthews, S., 1987. Comparing Auctions for Risk Averse Buyers: A Buyer's Point of View. Econometrica 55(3), pp.
633–646.
McAfee, R. P., 1993. Mechanism Design by Competing Sellers. Econometrica 61(6), pp.1281–1312.
McAfee, R. P., McMillan, J., 1987. Auctions with Entry. Economic Letters 23(4), pp. 343–347.
McCart, J. A., Kayhan, V. O., Bhattacherjee, A., 2009. Cross-Bidding in Simultaneous Online Auctions.
Communications of the ACM 52(5), pp. 131–134.
Milgrom, P., 2004. Putting Auction Theory to Work. Cambridge University Press, Cambridge.
Milgrom, P., Weber, R. J., 1982. A Theory of Auctions and Competitive Bidding. Econometrica 50(5), pp. 1089–1122.
Morgan, J., Steiglitz, K., Reis, G., 2003. The Spite Motive and Equilibrium Behavior in Auctions. Contributions to
Economic Analysis & Policy 2(1), Article 5.
Mougeot, M., Naegelen, F., Pelloux, B., Rullière, J.-L., 2009. Breaking Collusion in Auctions Through Speculation: An
Experiment on CO2 Emission Permit Market. Working paper, GATE CNRS.
Myerson, R. B., 1981. Optimal Auction Design. Mathematics of Operations Research 6(1), pp. 58–73.
Myerson, R. B. , 1998. Population Uncertainty and Poisson Games. International Journal of Game Theory 27, pp.
375–392.
Nagareda, T., 2003. Announced Reserve Prices, Secret Reserve Prices, and Winner's Curse. Mimeo.
Nekipelov, D. 2007. Entry deterrence and learning prevention on eBay. Working paper, Duke University, Durham,
N.C.
Ockenfels, A., 2009. Empfehlungen fuer das Auktionsdesign fuer Emissionsberechtigungen. [Recommendations for
the design of emission allowances auctions.] Zeitschrift fuer Energiewirtschaft (2), pp. 105–114.
Ockenfels, A., Reiley, D., Sadrieh, A., 2006. Online Auctions. In: Hendershott, T. J. (ed.), Handbooks in Information
Systems I, Handbook on Economics and Information Systems, pp.571–628.
Ockenfels, A., Roth, A., 2006. Late and Multiple Bidding in Second Price Internet Auctions: Theory and Evidence
Concerning Different Rules for Ending an Auction. Games and Economic Behavior 55(2), pp. 297–320.
Ockenfels, A., Roth, A., 2010. Ending Rules in Internet Auctions: Design and Behavior. Working paper.
Ockenfels, A., Selten, R., 2005. Impulse Balance Equilibrium and Feedback in First Price Auctions. Games and
Economic Behavior 51(1), pp. 155–179.
Ockenfels, A., Tillmann, P., Wambach, A., 2010. English All-Pay Auctions—An Empirical Investigation. Mimeo,
University of Cologne.
Östling, R., Wang, J. T., Chou, E., Camerer, C. F., 2011. Testing Game Theory in the Field: Swedish LUPI Lottery
Games. American Economic Journal: Microeconomics 3(3), pp. 1–33.
Peeters, R., Strobel, M., Vermeulen, D., Walzl, M., 2007. The impact of the irrelevant—Temporary buy-options and
bidding behavior in online auctions. Working Paper RM/07/027, Maastricht University. (p. 341)
Peters, M., Severinov, S., 2006. Internet Auctions with Many Traders. Journal of Economic Theory 130(1), pp. 220–
245.
Popkowski Leszczyc, P. T. L. , Qiu, C., He, Y., 2009. Empirical Testing of the Reference-Price Effect of Buy-Now
Prices in Internet Auctions. Journal of Retailing 85(2), pp. 211–221.
Porter, D., Rassenti, S., Shobe, W., Smith, V., Winn, A., 2009. The design, testing and implementation of Virginia's
NOx allowance auction. Journal of Economic Behavior & Organization 69(2), pp.190–200.
Rapoport, A., Otsubo, H., Kim, B., Stein, W.E., 2009. Unique Bid Auction Games. Jena Economic Research Papers
2009–005.
Rasmusen, E., 2007. Getting Carried Away in Auctions as Imperfect Value Discovery. Mimeo. Kelley School of
Business, Indiana University.
Raviv, Y., Virag, G., 2009. Gambling by auctions. International Journal of Industrial Organization 27, pp. 369–378.
Reiley, D. H., 2006. Field Experiments on the Effects of Reserve Prices in Auctions: More Magic on the Internet.
RAND Journal of Economics 37(1), pp. 195–211.
Reynolds, S. S. and Wooders, J., 2009. Auctions With a Buy Price. Economic Theory 38(1), pp. 9–39.
Riley, J. G., 1980. Strong Evolutionary Equilibrium and the War of Attrition. Journal of Theoretical Biology 82(3), pp.
383–400.
Riley, J. G., Samuelson, W. F., 1981. Optimal Auctions. American Economic Review 71(3), pp. 381–392.
Rose, S. L., Kagel, J., 2009. Almost Common Value Auctions: An Experiment. Journal of Economics & Management
Strategy 17(4), pp. 1041–1058.
Rosenkranz, S., Schmitz, P. W., 2007. Reserve Prices in Auctions as Reference Points. Economic Journal 117(520),
pp. 637–653.
Roth, A., Ockenfels, A., 2002. Last-Minute Bidding and the Rules for Ending Second-Price Auctions: Evidence from
eBay and Amazon Auctions on the Internet. American Economic Review 92(4), pp. 1093–1103.
Samuelson, W. F., 1985. Competitive Bidding with Entry Costs. Economic Letters 17(1–2), pp. 53–57.
Seifert, S., 2006. Posted Price Offers in Internet Auction Markets. Springer, Berlin, Heidelberg, New York.
Shahriar, Q., Wooders, J., 2011. An Experimental Study of Auctions with a Buy Price Under Private and Common
Values. Games and Economic Behavior 72(2), pp. 558–573.
Simonsohn, U., 2010. eBay's Crowded Evenings: Competition Neglect in Market Entry Decisions. Management
Science 56(7), pp. 1060–1073.
Simonsohn, U., Ariely, D., 2008. When rational sellers face nonrational consumers: Evidence from herding on eBay.
Management Science 54(9), pp. 1624–1637.
Song, J., Baker, J., 2007. An Integrated Model Exploring Sellers’ Strategies in eBay Auctions. Electronic Commerce
Research 7(2), pp. 165–187.
Stern, B. B., Stafford, M. R., 2006. Individual and Social Determinants of Winning Bids in Online Auctions. Journal of
Consumer Behaviour 5(1), pp. 43–55.
Thaler, R. H., 1980. Toward a Positive Theory of Consumer Choice. Journal of Economic Behavior & Organization
1(1), pp. 39–60.
Thaler, R. H., 1988. Anomalies: The Winner's Curse. Journal of Economic Perspectives 2(1), pp. 191–202. (p. 342)
Varian, H., 2007. Position Auctions. International Journal of Industrial Organization 25(6), pp. 1163–1178.
Varian, H., 2008. Quality scores and ad auctions. Available at:
http://googleblog.blogspot.com/2008/10/quality-scores-and-ad-auctions.html
Varian, H., 2009. Online Ad Auctions. American Economic Review 99(2), pp. 430–434.
Vickrey, W., 1961. Counterspeculation, Auctions, and Competitive Sealed Tenders. Journal of Finance 16(1), pp. 8–
37.
Vincent, D. R., 1995. Bidding Off the Wall: Why Reserve Prices May Be Kept Secret. Journal of Economic Theory 65(2), pp.
575–584.
Walker, J. M., Smith, V. L., Cox, J. C., 1990. Inducing Risk Neutral Preferences: An Examination in a Controlled
Market Environment. Journal of Risk and Uncertainty 3, pp. 5–24.
Wilcox, R. T., 2000. Experts and Amateurs: The Role of Experience in Internet Auctions. Marketing Letters 11(4),
pp. 363–374.
Wintr, L., 2008. Some Evidence on Late Bidding in eBay Auctions. Economic Inquiry 46(3), pp. 369–379.
Yin, P.-L., 2009. Information Dispersion and Auction Prices. Mimeo. Sloan School of Management, Massachusetts
Institute of Technology.
Zeithammer, R., Adams, C., 2010. The Sealed-Bid Abstraction in Online Auctions. Marketing Science 29(6), pp. 964–
987.
Notes:
(1.) Cassady (1967) surveys the history of auctions.
(2.) Auction fees are also much lower on Internet auction platforms than in traditional offline-auctions (5–10 percent
as compared to about 20 percent, respectively; Lucking-Reiley, 2000a).
(3.) Lucking-Reiley (1999) explores further advantages of Internet auctions.
(4.) Bajari and Hortaçsu (2004), Lucking-Reiley (2000a), and Ockenfels et al. (2006) also provide surveys of this
research.
(5.) The research on reputation systems is closely related to the research on Internet auctions because of the
important role they play in enabling trade on these platforms. Such systems have natural ties with the design of
auctions themselves, as they influence the seller-buyer interaction. We do not discuss reputation systems in this
chapter, as they are dealt with in detail in chapter 13 of this book.
(6.) See Klemperer (1999), Krishna (2002), and Milgrom (2004), among others, for more comprehensive and formal
treatments of auction theory.
(7.) Outside economics, the term “Dutch auction” is often used differently, e.g., for multi-unit auctions.
(8.) The affiliated values model was introduced by Milgrom and Weber (1982). A combined value model can be
found, e.g., in Goeree and Offerman (2003).
(9.) In the literature, this case is often also referred to as the “i.i.d.” model, because values are “independently and
identically distributed.”
(10.) The term “standard auction” refers to single-item auctions with symmetric private values, in which the highest
bid wins the item.
(11.) As a result, if bidders are not risk-neutral, some auction formats are more attractive than others, where the
ranking is different for sellers and buyers (Maskin and Riley, 1984; Matthews, 1987).
(12.) See Kagel (1995) for an extensive survey on auction experiments.
(13.) For example, Kagel and Levin (1993) observe overbidding in sealed-bid auctions, and Katok and Kwasnica
(2008) in open decreasing-price auctions.
(14.) Overbidding is sometimes also observed in second-price auctions, where risk aversion should not play a role,
and third-price auctions, where risk aversion even predicts underbidding (Kagel and Levin, 1993). Harrison (1990),
Cox et al. (1982), and Walker et al. (1990) find substantial overbidding, although they control for risk aversion
through either directly measuring it or by using experimental techniques to induce risk-neutral preferences. Kagel
(1995) provides a detailed discussion of the literature on risk aversion in auctions.
(15.) Bolton and Ockenfels (2010) conducted a controlled field experiment on eBay (inducing private values, etc.),
and examined to what extent both social and competitive laboratory behavior are robust to institutionally complex
Internet markets with experienced traders. Consistent with behavioral economics models of social comparison
(e.g., Fehr and Schmidt, 1999; Bolton and Ockenfels, 2000), they identify an important role of fairness in one-to-one eBay interactions, but not in competitive one-to-many eBay auctions. This suggests that social concerns are at
work in Internet trading; yet the study cannot (and was not designed to) reveal a role of social utility for
overbidding.
(16.) The first database consists of 167 auctions of a particular board game. The second database consists of
1,926 auctions of very heterogeneous items.
(17.) A distinct but related phenomenon is herding on Internet auctions; see Simonsohn and Ariely (2008) and
Simonsohn (2010).
(18.) A full picture must also take into account the literature on price dispersion on the Internet (see, e.g., Baye and
Morgan, 2004, and Brynjolfsson and Smith, 2000, and the references therein).
(19.) In contrast, laboratory experiments by Kagel and Levin (1986) suggest that common value auctions with
larger numbers of bidders produce even more aggressive bidding.
(20.) As a complementary result, Jin and Kato (2006) find that sellers with better reputation profile are able to obtain
higher prices. These sellers are also less likely to default, but conditional on delivery, these sellers are not less
likely to make false claims or deliver low quality.
(21.) Similarly, Bajari and Hortaçsu (2003) observe that half of the winning bids in their sample of eBay auctions
arrive in the last 1.7 percent (~73 minutes) of the auction time, and 25 percent of the winning bids were submitted
in the last 0.2 percent of the time period.
(22.) Some authors also point out that late bidding may be due to bidders’ learning processes. Duffy and Ünver
(2008), for example, show that finite automata playing the auctions repeatedly and updating their strategies via a
genetic algorithm also exhibit bidding behavior that is akin to the observed behavior in the field.
(23.) This result is supported by the experimental study of Füllbrunn and Sadrieh (2006), who observe substantial
late bidding in hard-close and candle auctions with a 100 percent probability of bid arrival. Interestingly, they show
that in candle auctions, i.e. auctions that have a stochastic termination time, late bidding always begins in the first
period with a positive termination probability. This indicates that bidders are fully aware of the strategic effect of the
stochastic termination rule, but choose to delay serious bidding as long as it is possible without the threat of a
sudden “surprise” ending.
(24.) Strict incremental bidding equilibria, however, are neither observed in the field (McCart et al. 2009) nor in
laboratory experiments with competing auctions (Hoppe, 2008), even though both incremental and late bidding are
present.
(25.) E.g., Cox et al. (1982) find higher revenues in the first-price sealed-bid auction compared to a Dutch auction.
Kagel et al. (1987) observe higher revenues for the second-price auction compared to the English auction, though
the experimental results for the English auction are in accordance with the theoretical predictions.
(26.) The finding that reserve prices increase final auction prices conditional on sale is replicated in field data
analyses of Lucking-Reiley et al. (2007) and Stern and Stafford (2006).
(27.) Sometimes, however, the auction rules may actually help the prohibited shill bidding behavior. Engelberg and
Williams (2009) report a particular feature of the increment rule on eBay that makes dynamic shill bidding almost
fool-proof. When a new bid arrives, eBay generally increases the current price by at least a full increment. If,
however, the proxy bid of the highest bidder is less than one increment above the new bid, the increment is equal
to the actual distance between the two highest bids. This feature allows sellers to submit shill bids incrementally
until the price increases by less than an increment, i.e. the highest proxy bid is reached. This “discover and stop”
strategy obviously enables sellers to extract almost the entire buyer's rent.
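The price rule and the resulting stopping signal can be sketched as follows; the bid amounts and the proxy maximum in the example are hypothetical.

```python
# Sketch of the eBay price rule described above and the signal it gives a shill
# bidder. All bid amounts below are hypothetical.

def posted_price(highest_proxy: float, new_bid: float, increment: float) -> float:
    """New posted price: one full increment above the new bid, unless the highest
    proxy bid is closer than one increment, in which case the proxy is revealed."""
    return min(highest_proxy, new_bid + increment)

def proxy_discovered(new_bid: float, price: float, increment: float) -> bool:
    """If the price rose by less than a full increment above the new bid,
    the posted price equals the highest proxy bid ("discover and stop")."""
    return price - new_bid < increment

if __name__ == "__main__":
    proxy, increment = 50.37, 1.00
    for shill_bid in (45.00, 48.00, 49.50):
        price = posted_price(proxy, shill_bid, increment)
        print(shill_bid, price, proxy_discovered(shill_bid, price, increment))
    # 45.00 -> 46.00 (full increment: keep probing)
    # 48.00 -> 49.00 (full increment: keep probing)
    # 49.50 -> 50.37 (less than a full increment: the proxy maximum is revealed)
```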
(28.) See, however, Bolton and Ockenfels (2010), who observed shill bidding of their subjects in a controlled field
experiment.
(29.) Bidding rights can be prepurchased: swoopo.com uses its own auction mechanism to sell bid packages.
Swoopo is an international platform, presenting the same auctions simultaneously in different countries, thereby
attracting a large number and constant stream of bidders. Bid increments in other countries are €0.01 or £0.01, for
example, and bidding fees are €0.50 or £0.50, respectively.
(30.) Swoopo is the most successful platform of penny auctions, with about 200 auctions per day (Ockenfels et al.,
2010; Augenblick, 2010) and 80,000 conducted auctions in 2008 (self-reported). Currently, the competitors of
swoopo.com, like bidstick, rockybid, gobid, bidray and zoozle, only make 7 percent of the industry profits, with the
remaining 93 percent captured by swoopo (Augenblick, 2010).
(31.) Unique bid auctions seem to be particularly popular for standard consumer goods. There also exists a variant
in which the highest unique bid wins.
(32.) Note that not bidding also has a positive probability in the mixed equilibrium strategy.
(33.) Östling et al. (2011) also find declining choice probabilities and bidder heterogeneity in a very closely related
game. They study a LUPI (lowest unique positive integer) lottery, in which participants pay for each submitted bid
(i.e., all-pay setting), but the winner receives a money prize without having to pay his bid (i.e., bids are lottery
tickets and there is no private value complication). Using the concept of Poisson-Nash equilibrium (Myerson, 1998),
Östling et al. (2011) demonstrate that their field and lab data are well in line with the fully mixed symmetric
equilibrium with declining choice probabilities, especially if they extend their theoretical analysis to encompass
behavioral aspects such as learning, depth of reasoning, and choice errors.
(34.) The same generalization holds true in the case of multiple single-unit auctions conducted simultaneously or
sequentially: Any auction winner who buys at a price higher than the lowest price among all n auctions would have
been better off choosing a different bidding strategy. Thus, in equilibrium, all n auctions should yield the same price, the n
bidders with the highest values should win the n units, and all auction winners should pay a price equal to the
n+1th-highest value among the bidders.
(35.) Bapna et al. (2003) take an empirical optimization approach to analyze multi-unit auctions and find the bid
increment to have a pivotal role for the seller's revenue.
(36.) Since the search engine only earns income when there are clicks on the ads shown for the search term, high per-click prices
with very little traffic can easily be dominated by low per-click prices with much more traffic.
(37.) As long as the click-through-rates (CTR) of all ads are almost equal, the GSP will do a relatively good job in
allocating the advertising space efficiently. If, however, an ad with a high bid attracts very little traffic (i.e., has a
small CTR), then it generates less revenue than a high-CTR ad with a lower bid.
(38.) In carbon permit auctions, a large number of perfectly substitutable identical items (permits) are sold, often in
different “vintages” (batches of permits valid over different predefined time horizons).
(39.) An underexplored topic in permit auction research is the effect of the existence of secondary markets, i.e. the
possibility of resale (Haile, 2000), which in theory turns a private-value auction into one with common (resale)
value.
Ben Greiner
Ben Greiner is Lecturer at the School of Economics at the University of New South Wales.
Axel Ockenfels
Axel Ockenfels is Professor and Chairperson of the Faculty of Management, Economics, and Social Sciences at the University of
Cologne. He models bounded rationality and social preferences and does applied work in behavioral and market design economics.
Abdolkarim Sadrieh
Abdolkarim Sadrieh is Professor of Economics and Management at the University of Magdeburg.
Reputation on the Internet
Oxford Handbooks Online
Reputation on the Internet
Luís Cabral
The Oxford Handbook of the Digital Economy
Edited by Martin Peitz and Joel Waldfogel
Print Publication Date: Aug 2012
Online Publication Date: Nov 2012
Subject: Economics and Finance, Economic Development
DOI: 10.1093/oxfordhb/9780195397840.013.0013
Abstract and Keywords
This article reports on recent, mostly empirical work on reputation on the Internet, with a particular focus on eBay's
reputation system. It specifically outlines some of the main economics issues regarding online reputation. Most of
the economics literature has focused on eBay despite the great variety of online reputation mechanisms. Buyers
are willing to pay more for items sold by sellers with a good reputation, and this is reflected in actual sale prices
and sales rates. It is also noted that reputation matters for buyers and for sellers. Regarding game theory's
contribution, it is important to understand the precise nature of agent reputation in online platforms. Online markets
are important, in monetary terms and otherwise; and they are bound to become even more so in the future.
Keywords: online reputation, Internet, eBay, economics, buyers, sellers, online markets
1. Introduction: What is Special About Reputation on the Internet?
Economists have long studied the phenomenon of reputation, broadly defined as what agents (e.g., buyers)
believe or expect from other agents (e.g., sellers). In a recent (partial) survey of the literature (Cabral, 2005), I
listed several dozen contributions to this literature. So it is only fair to ask: What is special about reputation on the
Internet? Why should a chapter like this be worth reading?
There are several reasons that reputation on the Internet is a separate phenomenon in its own right, and one worth
studying. First, the growth of the Internet has been accompanied by the growth of formal, systematic review and
feedback systems (both of online market agents and of offline market agents who are rated online). By contrast,
the traditional research on reputation was typically motivated by real-world problems where only “soft” information
was present (for example, the reputation of an incumbent local monopolist such as American Airlines for being
“tough” in dealing with entrants into its market).
The second reason that reputation on the Internet forms a separate research area is that formal online reputation
systems generate a wealth of information hitherto unavailable to the researcher. As a result, the economic analysis
of online reputation is primarily empirical, whereas the previous literature on reputation was primarily of a
theoretical nature.
A third reason that the present endeavor is worthwhile is that online markets are important and increasingly so.
Specifically, consider the case of eBay, one of (p. 344) the main online markets and the most studied in the online
reputation literature. In 2004, more than $34.1 billion was transacted on eBay by more than one hundred million
users. Although much of the research has been focused on eBay, other online markets are also growing very
rapidly. China's taobao.com, for example, had more than 200 million registered users as of 2010.
Finally, reputation on the Internet is potentially quite important because there is fraud in Internet commerce. Here
are three examples, all from 2003: (1) an eBay seller located in Utah sold about 1,000 laptop computers (for about
$1,000 each) that were never delivered; (2) a buyer purchased a "new" electrical motor which turned out to be
“quite used”; (3) a seller saw her transaction payment reversed upon receiving a “Notice of Transaction Review”
from PayPal stating that the funds used to pay for the transaction came “from an account with reports of fraudulent
bank account use.”1 While there are various mechanisms to deal with fraud, reputation is one of the best
candidates—and arguably one of the more effective ones.
In what follows, I summarize some of the main economics issues regarding online reputation. My purpose is not to
provide a systematic, comprehensive survey of the literature, which has increased exponentially in recent years.
Rather, I point to what I think are the main problems, the main stylized facts, and the main areas for future
research.
2. Online Reputation Systems: eBay and Others
As I mentioned earlier, one of the distinguishing features of online reputation is the existence of formal feedback
and reputation mechanisms; within these the eBay feedback system is particularly important, both for the dollar
value that it represents and for the amount of research it has induced. A brief description of eBay and its feedback
system is therefore in order.
Since its launch in 1995, eBay has become the dominant online auction site, with millions of items changing hands
every day. eBay does not deliver goods: it acts purely as an intermediary through which sellers can post auctions
and buyers bid. eBay obtains its revenue from seller fees, based on a complex schedule that includes fees for
starting an auction and fees on successfully completed auctions. Most important, to enable reputation to regulate
trade, eBay uses an innovative feedback system.2 Originally, after the completion of an auction, eBay allowed both
the buyer and the seller to give the other party a grade of +1 (positive), 0 (neutral), or –1 (negative), along with
any textual comments. There have been several changes on eBay regarding how these ratings can be given by
the users. For example, since 1999 each grade or comment has to be linked to a particular transaction on eBay.
Further changes were introduced in 2008, when eBay revoked the ability (p. 345) of the seller to give buyers a
negative or neutral grade (sellers could choose to leave positive or no feedback for buyers). However, the core of
the feedback system has remained the same.3
Based on the feedback provided by agents (buyers and sellers), eBay displays several aggregates corresponding
to a seller's reputation, including (1) the difference between the number of positive and negative feedback ratings;
(2) the percentage of positive feedback ratings; (3) the date when the seller registered with eBay; and (4) a
summary of the most recent feedback received by the seller.4 Finally, eBay provides a complete record of the
comments received by each seller, starting with the most recent ones. All of the information regarding each seller
is publicly available; in particular it is available to any potential buyer.
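As a rough sketch of how the first two aggregates might be computed from a feedback history (an illustrative Python reconstruction, not eBay's actual code; the sample history and the treatment of neutrals are assumptions):

# Illustrative reconstruction, not eBay's implementation: each completed
# transaction yields a rating of +1 (positive), 0 (neutral), or -1 (negative).
def reputation_aggregates(ratings):
    positives = sum(1 for r in ratings if r == 1)
    negatives = sum(1 for r in ratings if r == -1)
    score = positives - negatives   # aggregate (1): positives minus negatives
    rated = positives + negatives   # assume neutrals are excluded from the percentage
    pct_positive = 100.0 * positives / rated if rated else None  # aggregate (2)
    return score, pct_positive

# Hypothetical seller with 97 positives, 2 neutrals, and 1 negative
print(reputation_aggregates([1] * 97 + [0] * 2 + [-1]))  # (96, 98.97...)

The exact formulas eBay uses (in particular how neutrals, repeat buyers, and the time window are handled) have changed over time, so this should be read only as an approximation of the idea.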
While this chapter focuses primarily on literature regarding the eBay platform, eBay is by no means the only online
feedback and reputation system. Amazon.com offers a system of customer reviews whereby buyers can rate both
the product itself and, if buying from a seller other than Amazon, the seller. Although Amazon's seller review
system is quite similar to that of eBay, its product review system is somewhat more complex, as reviewers can rate
other reviews. Unlike eBay, Amazon's seller reputation system is one-sided: buyers review sellers but sellers do
not review buyers (eBay still offers sellers the option to rate buyers positively). Of course eBay is not the only two-sided system. For example, at couchsurfing.net, where travelers find free places to stay while traveling, both hosts
and travelers can rate each other.
Despite the great variety of online reputation mechanisms, most of the economics literature has focused on eBay.
This is partly justified by the economic significance of eBay, the size of which dwarfs almost all other online trade
platforms, and by the fact that a considerable amount of data is available for eBay. Accordingly, most of the
evidence presented in this chapter corresponds to eBay.
3. Do Buyers Care About Online Reputations?
The fact that there exists a feedback reputation system in a given online market does not imply that such a system
matters, that is, that it has any practical significance. In principle, it is possible that agents (buyers and sellers)
ignore the reputation levels generated by the system. If that were the case, there would be little incentive to
provide feedback, which in turn would justify the agents’ not caring for the system in the first place. More generally,
many if not most games of information transmission (such as feedback and rating systems) admit “babbling
equilibria,” that is, equilibria where agents provide feedback and ratings in a random way (or simply don’t provide
any feedback), and, consistently, agents ignore the information generated by the system.
(p. 346) For this reason, an important preliminary question in dealing with online reputation systems is, Does the
system matter at all? In other words, do the reputations derived from formal feedback systems have any bite? One
first step in answering this question is to determine whether buyers’ actions (whether to purchase; how much to
bid, in case of an auction; and so forth) depend on the seller's reputation.
At the most basic level, we would expect a better seller reputation to influence the price paid for an otherwise
identical object. Many studies attempt to estimate an equation with sale price as the dependent variable and
seller reputation as an independent variable (along with other independent variables). Alternative left-hand-side
variables include the number of bids (in the case of an auction) or the likelihood that the item in question is sold.
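In stylized form (the notation here is mine, not drawn from any particular study), the typical cross-section specification is

\log(\text{price}_i) = \alpha + \beta\,\text{reputation}_i + X_i'\gamma + \varepsilon_i,

where reputation_i is, for instance, the seller's feedback score or fraction of negative feedback, X_i collects observable item and auction characteristics, and the coefficient of interest is \beta.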
These studies typically find a weak relation between reputation and price. However, as is frequently the case with
cross-section regressions, there are several problems one must take into account. First, unobserved heterogeneity
across sellers and sold items may introduce noise in the estimates, possibly reducing the estimates' statistical
significance. Conversely, as first pointed out by Resnick et al. (2003), several unobservable confounding factors
may lead to correlations for which there is no underlying causality relation. For example, sellers with better
reputation measures may also be much better at providing accurate and clear descriptions of the items they are
selling, which in turn attract more bidders; hence their writing ability, not their reputation, may be the underlying cause
of the higher prices they receive. (In fact, there is some evidence that spelling mistakes in item listings are
correlated with lower sale prices; and that “spelling arbitrage” is a profitable activity—that is, buying items with
misspelled listings and selling them with correctly spelled listings.)
Such caveats notwithstanding, a series of authors have addressed the basic question of the effect of reputation on
sales rates and sales price by taking the cross-section regression approach. The list includes Cabral and Hortacsu
(2010), Dewan and Hsu (2004), Eaton (2005), Ederington and Dewally (2003), Houser and Wooders (2005),
Kalyanam and McIntyre (2001), Livingston (2005), Lucking-Reiley, Bryan, Prasad, and Reeves (2006), McDonald
and Slawson (2002), Melnik and Alm (2002), and Resnick and Zeckhauser (2002).5 For example, Cabral and
Hortacsu (2010) find that a 1 percent increase in the fraction of negative feedback is correlated with a 7.5
percent decrease in price, though the level of statistical significance is relatively low. These results are
comparable to other studies, both in terms of coefficient size and in terms of statistical significance.
One way to control for seller heterogeneity is to go beyond cross-section regression and estimate the effects of
reputation based on panel data. From a practical point of view, creating a panel data set of online sellers is much more
difficult than creating a cross-section. For some items on eBay, it suffices to collect data for a few days in order to
obtain hundreds if not thousands of observations from different sellers. By contrast, creating a panel of histories of
a given set of sellers takes time (or money, if one is to purchase an existing data set).6
Cabral and Hortacsu (2010) propose a strategy for studying eBay seller reputation. At any moment in time, eBay
posts data on a seller's complete feedback history. (p. 347) Although there is no information regarding past
transactions’ prices, the available data allows for the estimation of some seller reputation effects. Specifically,
Cabral and Hortacsu (2010) propose the following working assumptions (and provide statistical evidence that they
are reasonable working assumptions): (1) the frequency of buyer feedback is a good proxy for the frequency of
actual transactions; (2) the nature of the feedback is a good proxy for the degree of buyer satisfaction. Based on
these assumptions, they collect a series of seller feedback histories and estimate the effect of reputation on sales
rate. They find that when a seller first receives negative feedback, his weekly sales growth rate drops from a
positive 5 percent to a negative 8 percent. (A disadvantage of using seller feedback histories is that one does not
obtain price effects, only quantity effects.)
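As a minimal sketch of this kind of panel exercise (illustrative Python only, not the authors' code; the file name and column names are hypothetical):

# Illustrative sketch: use weekly feedback counts as a proxy for sales and test
# whether sales growth falls after the first negative feedback, with seller fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: seller_id, week, sales_proxy, after_first_negative (0/1)
panel = pd.read_csv("seller_feedback_panel.csv")
panel = panel.sort_values(["seller_id", "week"])
panel["growth"] = panel.groupby("seller_id")["sales_proxy"].pct_change()

fit = smf.ols("growth ~ after_first_negative + C(seller_id)", data=panel.dropna()).fit()
print(fit.params["after_first_negative"])  # expected to be negative if the first negative depresses sales growth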
As an alternative to panel data, one strategy for controlling for omitted-variable biases is to perform a controlled
field experiment. Resnick, Zeckhauser, Swanson, and Lockwood (2006) do precisely that: they perform a series of
sales of identical items (collector's postcards) alternatively using a seasoned seller's name and an assumed name
with little reputation history. They estimate an 8 percent premium to having 2,000 positive feedbacks and 1
negative over a feedback profile with 10 positive comments and no negatives. In a related research effort, Jin and
Kato (2005) assess whether the reputation mechanism is able to combat fraud by purchasing ungraded baseball
cards with seller-reported grades, and having them evaluated by the official grading agency. They report that while
having a better seller reputation is a positive indicator of honesty, reputation premia or discounts in the market do
not fully compensate for expected losses due to seller dishonesty.
A related, alternative strategy consists of laboratory experiments. Ba and Pavlou (2002) conduct a laboratory
experiment in which subjects are asked to declare their valuations for experimenter-generated profiles, and find a
positive response to better profiles. As with every other research method, laboratory experiments have advantages
and disadvantages. On the plus side, they allow the researcher to create a very tightly controlled experiment,
changing exactly one parameter at a time; they are the closest method to that of the physical sciences. On the
minus side, a major drawback is that the economics laboratory does not necessarily reflect the features of a real-world market.
In summary, while different studies and different research methodologies arrive at different numbers, a common
feature is that seller reputation does have an effect on buyer behavior: buyers are willing to pay more for items
sold by sellers with a good reputation, and this is reflected in actual sale prices and sales rates. As I mentioned
already, from a theoretical point of view there could exist an equilibrium where buyers ignore seller reputation
(consistent with the belief that feedback is given in a random manner); and seller feedback is given in a random
manner (consistent with the fact that seller reputation is ignored by buyers). The empirical evidence seems to
reject this theoretical possibility.
Finally, I should mention that buyer feedback and seller reputation are not the only instruments to prevent
opportunistic behavior on the seller's part. In traditional markets, warranties play an important role in protecting
buyers. What (p. 348) role can warranties play in online markets? Roberts (2010) addresses this issue by
examining the impact of the introduction of eBay's buyer protection program, a warranty system. He shows that,
under the new system, the relation between seller reputation and price becomes “flatter.” This suggests that
warranties and seller reputation are (partial) substitute instruments to protect consumers in a situation of
asymmetric information in the provision of quality products.
4. Do Sellers Care About Online Reputations?
Given that reputations do have a bite, in the sense that buyers care about them, a natural next question is, What do
sellers do about it? Specifically, interesting questions include, How do sellers build a reputation? How do sellers
react to negative feedback: by increasing or by decreasing effort to provide quality? Do sellers use or squander
their reputation by willfully cheating buyers?
Cabral and Hortacsu (2010), in the essay introduced earlier, address some of these questions. As previously
mentioned, following the first negative feedback received by a seller, the sales growth rate drops from a positive 5
percent to a negative 8 percent. This suggests that buyers care about seller reputation. In addition, Cabral and
Hortacsu also show that following the first negative feedback given to the seller, subsequent negative feedback
ratings arrive 25 percent more frequently and don’t have nearly as much impact as the first one. This change, they
argue, is due to a shift in seller behavior. Intuitively, while a seller's reputation is very high, the incentives to invest
in such a reputation are also high. By contrast, when a perfect record is stained by a first negative, sellers are less
keen on making sure buyer satisfaction is maximized; and as a result negative feedback is received more
frequently.7
Cabral and Hortacsu (2010) also find that a typical seller starts his career with a substantially higher fraction of
transactions as a buyer relative to later stages of his career as an eBay trader. This suggests that sellers invest in
building a reputation as a buyer and then use that reputation as a seller. Moreover, a seller is more likely to exit the
lower his reputation is; and just before exiting sellers receive more negative feedback than their lifetime average.
Note that the “end of life” evidence is consistent with two different stories. First, it might be that exit is planned by
the seller and his reputation is “milked down” during the last transactions. Alternatively, it might be that the seller
was hit by some exogenous shock (he was sick for a month and could not process sales during that period), which
led to a series of negative feedbacks; and, given the sudden drop in reputation, the seller decides that it is better
to exit. Additional evidence is required to choose between these two stories. Anecdotal evidence suggests that the
(p. 349) former plays an important role. For example, in their study of sales of baseball cards on eBay Jin and
Kato (2005) report that they “encountered two fraudulent sellers who intentionally built up positive ratings,
committed a series of defaults, received over 20 complaints, and abandoned the accounts soon afterward” (p.
985).
In sum, the evidence suggests that reputation matters not only for buyers but also for sellers. In particular, sellers’
actions too are influenced by reputation considerations.
5. The Feedback Game
User feedback is the backbone of online reputations. How and why do feedback mechanisms work? To an
economist following the classical homo economicus model, the answer is not obvious. Giving feedback takes time.
Moreover, to the extent that feedback can be given by both parties to a transaction, giving feedback may also
influence the other party's decision to give feedback (and the nature of such feedback). One must therefore be
careful about measuring the costs and benefits of feedback-giving.
Formal feedback and review mechanisms induce relatively well-defined extensive form games: each agent must
decide, according to some rules, if and when to send messages; and what kind of messages to send.
Understanding the equilibrium of these games is an important step towards evaluating the performance of
reputation mechanisms, both for online and offline markets.
Empirically, one noticeable feature of the eBay feedback mechanism is that there is a very high correlation
between the events of a buyer providing feedback to the seller and a seller providing feedback to the buyer. Jian et al.
(2010) argue that eBay sellers use a “reciprocate only” strategy about 20 to 23 percent of the time. Bolton et al.
(2009) also provide information that supports the reciprocal nature of feedback giving. For example, they show that
the timing of feedback-giving by buyer and seller is highly correlated.
The reciprocal nature of feedback also seems consistent with another important stylized fact from the eBay system
(and possibly from other two-way feedback systems): the extremely low prevalence of negative feedback. Klein et
al. (2006) and Li (2010) present confirmatory evidence. They show that a substantial fraction of the (little) negative
feedback that is given takes place very late during the feedback window, presumably in a way that reduces the
likelihood of retaliation. Conversely, positive feedback is given very early on, presumably as a way to encourage
reciprocal behavior.
Given the strategic nature of feedback giving, several proposals have been made to make the process more
transparent and closer to truth telling. For example, Bolton et al. (2009) propose that feedback be given
simultaneously, thus preventing one party from reacting to the other. They also present evidence from the Brazilian (p.
350) platform MercadoLivre, which suggests that such a system induces more “sincere” feedback-giving without
diminishing the overall frequency of feedback (one of the concerns with simultaneous feedback). Additional
evidence is obtained by considering RentACoder.com, a s