Technical Compatibility Standards and the
Co-Ordination of the Industrial and International
Division of Labour
W. Edward Steinmueller
SPRU – Science and Technology Policy Research
University of Sussex
Prepared for the Conference:
Advancing Knowledge and the Knowledge Economy
National Academies
21st and Constitution Ave.
Washington, DC
10-11 January 2005
Sponsored by:
U.S. National Science Foundation
Organisation for Economic Cooperation and Development
Research Directorate-General, European Commission
Information Society Directorate-General, European Commission
U.S. Interagency Working Group on IT R&D
University of Michigan
1 Introduction
Modern economics makes its contribution to understanding a complex social world
through the use of simplifying assumptions, which smooth the wrinkles produced by
the operation of real world markets. For economists interested in the study of
knowledge and technology, one of the more interesting of these simplifying
assumptions to analyse, and to modify, is that information about available production
and exchange possibilities is widely dispersed among economic agents.1 In recent
years, economists interested in examination of the ‘widespread knowledge’
assumption have employed two different approaches. The first approach analyses the
implications of assuming that economic agents have asymmetric endowments of
‘information’ (which is equated with knowledge). The second approach involves
replacing the ‘widespread knowledge’ assumption with alternative assumptions
regarding how knowledge might be generated, reproduced and exchanged if
knowledge is not equivalent to information. Both of these approaches offer important
insights for a central theme of modern economics – the inter-firm and international
division of labour. This theme is central to concerns about the nature and
sustainability of industrial structures that involve the creation of products and services
employing complex systems, the outsourcing of component and subsystem
production, and the generation and exchange of knowledge about the design and
operation of such systems.
Both approaches to the economics of information and knowledge indicate that it is the
nature of the asymmetries that matters for market outcomes – by considering specific
technologies it is possible to examine how asymmetries are generated and reduced.
Both theoretical approaches to the issues of information and knowledge recognise that
the institutions (rules, norms and standards) that operate in parallel with market
exchange processes serve to regulate the economic exploitation of asymmetries. At
first glance, this is nothing more than an application of the observation that markets
and institutions are inextricably linked (Williamson 1975; North 1990). However, if
institutions regulating market exchange (the law of contracts and so forth) have to be
supplemented by other institutions – those considered in this paper governing
knowledge exchange, and also, institutions governing financial relationships
(relationships of ownership and control), which are not considered here – it seems
warranted to pay greater attention to institutions in efforts to understand the structure
and dynamics of modern economies. In focussing on the role of institutions related to
knowledge exchange in shaping the structure and dynamics of modern economies,
this paper attempts to partially illuminate that part of the larger canvas of inter-firm
and international division of labour where institutions may influence economic
outcomes as much, or more than, productive efficiency or innovation pursued within
single firms.
In the context of this larger canvas, this paper focuses upon the role of technical
compatibility standards that are published by standards organisations, and those
designed to govern inter-firm arrangements for component and subsystem production.
[1] The textbook assumption is ‘perfect information,’ i.e. that all economic agents have access to the full range of production and exchange possibilities, which is a useful filter for separating students willing to make strong economic assumptions from those who are not. It is quite sufficient to assume that those with the potential to take advantage of information have ready access to it.
This specific focus provides a means to ground the general theory of the division of
labour in empirical observations about some of the regularities in the co-ordination
arrangements involved in creating products and services employing complex systems.
The aim of this paper is to identify and discuss the implications of both positive (how
does it work?) and normative (how should it work?) features of knowledge exchange
related to the construction of product and service ‘platforms’ comprised of
components that are mostly produced by other companies. The term ‘platform’ is used
to suggest some degree of flexibility in the combination of these components and to
convey the interdependence between the ‘platform integrator’, who usually bears the
principal responsibility for the market promotion of the platform, and the suppliers of
components that can be integrated in a platform.2 Examples of platforms include
automobiles, personal computers, aeroplanes, buildings, travel and tourism services,
and large retail companies. The first three examples are systems that principally
involve the integration of physical components, while building construction involves
the integration of physical components with a collection of service components.
Although physical artefacts are used in the last two example platforms, the integration
of service components is of central importance to their success. The variety of the
examples cited highlights the idea of the platform employed in this paper – the
platforms to be discussed not only involve systems with ‘closely coupled’
components, whose integration requires technical compatibility standards at the
interfaces between components and subsystems, but also include more loosely
coupled systems, whose integration involves a broader set of understandings of
‘compatibilities’ between the components employed and is informed by
knowledge about customer needs, supplier capabilities, and competing platforms. A
prime concern in this discussion is with the asymmetry in knowledge between the
creator of the platform and the suppliers of components (e.g. the large retail
company’s relations with its suppliers).
Finally, the issue of product and service platforms has an historical context. While in
some industries it is possible to extend the idea of platforms backwards in time, the
contemporary context is particularly relevant for two reasons. First, many of the
means for integrating the components into the platform rely upon a particular
configuration of information and communication technologies – the interactive and
integrated data network, which is an information infrastructure that only gathered
momentum over the past decade.3 Second, many of the techniques that are used for
the integration of the components of a platform employ relatively recent
developments in information technology – particularly the software applications
related to computer aided design and engineering, and the broad class of applications
associated with enterprise resource planning. These software applications provide a
precise means for specifying the relationship between the components to be integrated
on a platform; in some cases, they also provide means for simulating the performance
or other features of the results of integration. The growing momentum of these
network and information technology developments is a primary reason for analysing
the idea and implications of platforms more closely.

[2] The term ‘component’ is employed since the English language does not have a proper term for a collection of services capable of being integrated into a larger service package. Thus, even though the connotation of the term ‘component’ is usually physical, it is used here to refer to intangible components of platforms such as services.

[3] Data communication networks have been growing over the past forty years. However, the complexities of harmonising applications and middleware both limited the scope of applications and created barriers to expanding or re-configuring inter-organisational networks. While these problems have not disappeared with the past decade’s expansion of the Internet, a wide range of options are now available for resolving them.
This paper is organised in two major sections – the first is concerned with analysing
the nature of platforms and the role of technical compatibility standards and related
institutions in enabling the creation and extension of platforms. A particular focus of
this section is on the limits to growth of ‘modular’ architectures and on the related
industrial structures in which ‘emergent’ compatibility standards provide a basis for
competition between multiple suppliers and, therefore, support industrial structures
where system integrators may have less control. The second major section addresses
issues of the governance of platforms, from the viewpoints of both the platform
creator or system integrator, and the public interest (or social welfare). This section
highlights the possibilities for the emergence of market power in the production of
platforms and the ways that this market power may be limited through market
mechanisms or government intervention. The conclusion highlights those areas where
public policies may make a contribution, including the priorities to be assigned to
research on the issues raised in this paper (and in the related literature).
2 Building Platforms
The study of the complex products and systems that first emerged in the 20th century
and that now constitute a major share of economic output in industrialised nations is a
relatively recent development. A point of origin for these studies was the recognition
that the production of ‘systems’ with heterogeneous components required innovation
in organisational and management techniques, particularly in the means by which
knowledge could be accumulated by the co-ordinators or integrators of such systems
despite the dispersed origins of the knowledge about the components comprising such
systems.4
Thomas Hughes’s examinations of several of the largest integrated systems, including
power and telecommunication networks and integrated military command systems,
highlighted the complexity of management problems involved, and the specific role of
the system integrator (Hughes 1993; 2000). As Hughes observed, the complexities in
constructing such systems challenged the ‘command and control’ structures that have
been used through much of human history in large public works (Mumford 1934) and
also departed from the mechanisms of control that proved necessary to manage large
scale manufacturing (Hounshell 1984; Hughes 1989) or provided the means for co-ordinating transport networks such as railroads (Beniger 1986). A key feature of the
new methods of control is the explicit specification of organisational and
technological ‘interfaces’ that establish the relationships between the suppliers of
system components, and the components that are employed in systems.
The attention to ‘interfaces’ allowed the development of a ‘modular’ approach to
systems integration. The value of modularity was clearly recognised at an early stage
by those involved with the computational sciences (Simon 1969; 1996). In this
context, modularity was of fundamental importance in ‘localising’ defects or
undesirable behaviour in large systems. Modularity also provided a framework for the
division of labour in the construction of large systems and hence a ‘discipline of
design’ that could be extended from computers and information systems to other
system contexts.
Simon’s approach to modularity draws heavily upon experience in the design of
computers in which the explicit definition of interfaces supports inter-operability
(compatibility) between the components of a system. The effectiveness of inter-operability standards allowed the ‘modular’ approach to be extended from large
computer design to a vast array of other electronic designs. By the mid-1970s, the
technical literature was explicitly suggesting that electronic designers should embrace
technical compatibility standards as the way forward to meet the demands for variety
that they were facing – standard integrated circuits could be viewed as ‘general
purpose’ logic devices from which a wide variety of electronic systems could be
created (Blakeslee 1975). This approach relied in a fundamental way, however, on the
steep trajectory of improvement in the complexity of integrated circuits described by
Moore’s law (1965), which has served as a map of technological opportunity for the
integrated circuit and electronics industry for almost four decades. Its promise of the
increasing number of transistors that could be simultaneously fabricated encouraged
ever more expansive visions of ‘systems on a chip’, beginning with the microprocessor
(1971) and embodying visions of such systems as general-purpose logic devices.
System modularity employing microprocessor components has steadily expanded
through a process of creating ever more capable ‘modules’ and the proliferation of
systems designs implementing increasingly complex networks of these modules.

[4] Specific company histories, such as those of science and engineering at AT&T, e.g. (Millman 1983; 1984), provide important source material for identifying the emergence of these issues, but give little insight into how they were resolved.
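As a rough illustration of the trajectory that Moore’s law describes, the following sketch (an illustrative calculation, not from the paper, assuming a doubling period of roughly two years and the approximately 2,300 transistors of the first microprocessor) projects transistor counts over the period discussed:

    # Illustrative Moore's-law projection (assumed parameters, not from the
    # paper): a base of ~2,300 transistors on the first microprocessor
    # (Intel 4004, 1971), with counts doubling roughly every two years.

    def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
        """Projected transistor count under a simple doubling model."""
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (1971, 1981, 1991, 2001):
        print(f"{year}: ~{transistors(year):,.0f} transistors")
    # 1971: ~2,300; 1981: ~73,600; 1991: ~2,355,200; 2001: ~75,366,400

Growth of this order of magnitude is what made it plausible to treat standard integrated circuits as ever more capable general-purpose modules.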
This rapid sketch of progress in the electronics industry does not, itself, provide the
basis for the larger claim that this industry’s experience has a more general relevance.
This larger claim has been developed by Baldwin and Clark (1997; 2000). In their
approach, modularity provides vast new opportunities for the division of labour across
organisational boundaries because, where technological interfaces can be effectively
defined, opportunities for the entry and specialisation of producers create a powerful
engine for innovation and technological advance. While drawing heavily on the
experience of the computer industry, the vision presented by Baldwin and Clark
extends to other industries and offers a ‘paradigm’ or way of thinking about how to
improve upon the performance of any industry in which ‘systems’ play an important
role. This message is further amplified by the influence of ‘systems analysis’ in
modern management – the analytical processes employed to diagnose and suggest
improvements in companies’ operations often involve conceptualising companies as
‘systems’ comprised of processes with interfaces and flows of information and
knowledge. Only a small step is therefore needed to conclude that modularity can be
employed as a general theory for improving company performance and structuring the
division of labour within industries.
This is a powerful vision deserving careful critical scrutiny. The method employed in
this paper involves two features. First, the potential for modular approaches is taken
to be broad and deep: it is assumed to be relevant to a large array of industries in
which systems play a prominent role. As suggested in the introduction (Section 1),
one may begin with the idea that configurations of knowledge and productive
capabilities define product and service ‘platforms’ comprised of components, which
are often produced by different companies. From this perspective, modularity is a
specific configuration of knowledge and industrial structure – it involves distributed
knowledge, a precise definition of physical interfaces between components in a
system, and an industrial structure with considerable division of labour and
specialisation. This provides a basis for comparative assessment. Why are modular
architectures not more broadly deployed in the construction of platforms? One answer
is that it is a matter of time – the processes of adjustment and re-structuring are time-consuming and challenge existing approaches, thus provoking resistance and
obstruction. This answer is not entirely satisfactory since it suggests that assessment
should be postponed until better evidence has been accumulated and begs the question
of what prevented other industries from learning more from the electronics industry in
past decades. An alternative answer is that there may be important and persistent
roadblocks to achieving higher degrees of modularity. Identifying some of these
roadblocks provides some basis for assessing their persistence or strength. However,
this answer is not a final one either. The methods by which the roadblocks may be
cleared or bypassed cannot be fully anticipated. Nonetheless, this answer provides
better information about the factors influencing the rate and direction of change in
achieving the potentials of modular approaches. Rather than making a systematic
comparative assessment, a series of observations that are the first steps towards such
an assessment is made in passing throughout the paper.
Second, experience in the electronics industry is employed to identify a set of
persistent problems in the realisation of the full potentials of modularity. All of these
problems relate to the interaction between knowledge and the organisational and
technological ‘interfaces’ discussed above. The technical features of these problems
are introduced in the following paragraphs and discussed in sub-sections 2.1 to 2.5 of
this section (Section 2) while the economic and organisational features are discussed
in Section 3. In both sections, the specific role of information and knowledge
asymmetries is the hub around which the argument revolves.
Despite the enormous progress that has been made in expanding the frontier for
application of modular systems in the electronics industry, there is a series of issues
and problems that have persistently emerged in implementation of modular
approaches. For a variety of reasons, many of them connected with the sustained
expansion of technological opportunity, the impact of these issues has been blunted or
deferred in the electronics industry. Even within this industry, however, there are
examples where progress has been blocked or diverted. Considering examples from
the electronics industry, which is a ‘friendly’ context for the implementation of
modularity, provides some insight into other industries where modular approaches
have continued to be problematic. The following five issues are the basis for the
ensuing discussion:
1. Despite the comparative difficulty of making an a priori definition of a viable
platform, of negotiating the creation of the co-specialised assets required to
implement a platform, and of maintaining control of a successful platform in
the face of rival platform producers, positive network externalities can provide
major benefits for the platform creator as well as enlarging and accelerating
market developments. The problem is that it is difficult to successfully define
a platform that generates these benefits.
2. The problem of control in system integration and the accompanying issues of
‘finger pointing’ which highlight the possibility of ‘institutional failure’ in
governing compatibility standards.
3. The emergence of unanticipated behaviours in modular systems, which raises
issues of the identification of ‘emergent complexity’ and emphasises the role
of the systems integrator in managing the knowledge concerning system
performance.
4. The difficulty (and progress) in effectively simulating the operation of
complex modular systems and the related cognitive challenges presented by
these issues.
5. The dialectic between ‘asset specificity’ associated with proprietary
implementations of systems and platform interfaces, and ‘ruinous competition’
associated with generic implementations of these interfaces.
2.1 Specifying the Platform
The introduction (Section 1) defined product and service platforms in terms of a
multiplicity of suppliers and the generation and exchange of knowledge. The same
definition could be employed to define markets if the term platform is defined
functionally and generically – e.g. there is a market for food processors with a variety
of suppliers that involves the generation and exchange of knowledge (in the
production and use of food processors). Delimiting the definition further is necessary.
The simplest way to do this is to associate ‘platform’ with ‘brand.’ Within this
definition platforms are, at minimum, sponsored and often ‘authored’ or ‘specified’
by specific companies. However, we may wish to recognise a broader scope for
technological substitution. For example, a WINTEL personal computer (WINdows-based operating system employing an inTEL-compatible microprocessor) involves a
specific sponsored component, the Microsoft Windows operating system and a
sponsored but somewhat more generic microprocessor. In this case, however, the
‘platform’ may involve an emergent process of authorship or specification with a
variety of individual sponsors promoting specific brands. From the customer’s
viewpoint, the choice to be taken involves a decision to commit to the WINTEL
platform and then to a specific branded implementation of this platform.
This ambiguity in the practical meaning of product and service platforms reflects the
fact that the standards defining modular platform architectures may be either
sponsored or emergent – i.e. they may be in the control of a single company sponsor
or they may emerge from competition between brands, each sponsored by a specific
company. Returning to the example of food processors, it is the absence of technical
compatibility in the components offered by different sponsored brands of food
processor that makes brand and platform synonymous in this market. For personal
computers, the authorship or specification of the standards defining the platform is
dispersed and a particular brand may include unique or proprietary features that
differentiate it, but that are insufficient to remove it from a broad interdependence
with some key components (i.e. the Windows operating system and the Intel-compatible microprocessor). Defining a platform is therefore simultaneously a
problem of market definition in demand (e.g. what is the cross-elasticity of demand
between different brands?) and supply (e.g. can the component of a particular supplier
be integrated into the platform?). How these problems are resolved depends upon the
purpose of the analysis.
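For reference, the cross-elasticity of demand mentioned above has the standard textbook form (a reminder rather than a formula from the paper):

    % Cross-price elasticity of demand between brands A and B:
    \epsilon_{AB} \;=\; \frac{\partial Q_A}{\partial P_B}\cdot\frac{P_B}{Q_A}

A large positive \epsilon_{AB} indicates that the two brands are close substitutes and therefore candidates for the same market; the supply-side question is then whether a given supplier’s component can be integrated into the platform at all.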
The key point, however, is that technical compatibility standards allow a platform to
be defined more broadly than just a sponsored brand and these standards may either
be controlled by a single company or may emerge through the (largely unco-ordinated
or market-co-ordinated) initiatives of a group of companies. This dichotomy ignores a
third category of processes through which platform-defining standards may emerge.
In many complex product industries, the system integrator plays a key role in
codifying, managing and certifying the knowledge necessary to assure compatibility
between components (Steinmueller 2003). In these industries (e.g. building
construction or aircraft and aerospace) it is common, however, for a considerable
amount of the knowledge to be dispersed among component producers, and for the
compatibility standards to be negotiated between the system integrator and the
component producers. In other words, it is possible for technical standards to be
neither emergent in the sense of successful standards being broadly adopted, nor
sponsored in the sense of being under the exclusive control of the system integrator.
These three categories of compatibility standards-defining processes – sponsored,
negotiated and emergent – have different implications for the processes of generating
and exchanging the knowledge necessary to specify a platform. Sponsored standards,
by definition, involve the system integrator taking principal responsibility for
generating the knowledge necessary to set standards. This can be a demanding
responsibility if the system integrator is not actively involved in the co-production of
the component for which standards are to be defined. Knowledge about the
technological opportunities and costs of alternatives must be exchanged with potential
suppliers who may have their own preferences for the deployment of their resources.
By moving towards a negotiated standards-defining process, companies may be able
to achieve a richer flow of knowledge concerning technological possibility and cost,
although in so doing they must invest in the partnership that is being negotiated.
From the supplier’s perspective, it is not obvious which of these arrangements is to be
preferred. In the case of sponsored standards it is possible that the supplier has
asymmetric knowledge regarding cost and technological opportunity that provides the
basis for profiting from the system integrator’s demands for a specific component.
The possibility that this is true rises with the extent of co-specialisation of assets (the
degree to which the supplier is producing only for a single system integrator) (Teece
1986). Greater co-specialisation, however, increases the system integrator’s
bargaining power and provides an incentive for the supplier to agree to a negotiated
standard, which will reduce the asymmetry of information between the system
integrator and the component supplier.
In the case of emergent standards, market opportunities open to suppliers provide the
principal incentive for knowledge investment. This can raise a distinct set of problems
(explored in more detail below) in which system integrators are insulated from
knowledge about technological opportunity and become reliant upon standards that
may be superseded by supplier initiatives that are taken up by competing platform
system integrators. In other words, system integrators will have an incentive to invest
in knowledge about emergent standards in order not to be surprised by developments.
This discussion suggests a basic framework for considering the knowledge and
organisational configurations involved in defining a platform. However, it only
provides an incomplete guide regarding the appropriate strategy for the system
integrator to ‘specify’ (design or architect) a platform. The previous discussion
focuses exclusively on the supply side, but it is the demand for the platform that will
ultimately determine the platform’s success. The analysis of demand can be
partitioned into two parts. The first relates to the overall appeal or ‘vision’ of the
platform – what value will it have for customers, what efforts will it elicit from
suppliers, and what problems of integration will emerge in its implementation? These
risks are particularly difficult to assess and may be an important source of the
resistance to adopting a greater degree of modularity in a variety of products and
services. Nonetheless, little can be said about them except that new platforms are very
much like new innovations – their eventual evolution and application is probably not
well predicted by their first implementation.5
The second part of the analysis of demand is relevant to platforms where the viability
of the platform has been established and choices at the margin aimed at influencing
demand come into play. What value will customers place on the variety of options
that might be available in choosing whether to make standards more or less inclusive
of suppliers? Answering this question involves both static (time invariant) and
dynamic (time varying) elements.

[5] Rosenberg 1976.
The static (or time invariant) element of the choice made about inclusiveness of
standards reflects the value that customers place on variety. The demand for variety
can be considered as a fundamental feature of consumers’ preferences and thereby
their demand for goods and services (Lancaster 1979). Variety, however, also raises
customer costs in evaluating alternatives and specifying the best arrangement of
components for a particular application. The trade-off between the benefits and costs
of variety provides a basic guide to the static choice of how inclusive to make a
platform. Cable or satellite television operators, editors of magazines and newspapers,
department store owners and bookshop proprietors address this problem on a daily
basis, along with companies defining newer platforms such as mobile e-commerce, e-book publication, or e-stores. In these cases, better or worse decisions may be made
about the specific components included in the platform, but the decisions that are
made are shaped by physical constraints on the ‘space’ available or the problems of
congestion or search costs (the difficulty of finding what one wants in a location that
is too crowded).
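The trade-off described above can be stated as a simple static optimisation (an illustrative sketch with assumed functional forms, not a model from the paper). Let B(n) be the concave benefit customers derive from a platform including n components and c the per-component evaluation and congestion cost:

    \max_{n}\; V(n) \;=\; B(n) - c\,n, \qquad B'(n) > 0,\;\; B''(n) < 0,
    % so the static optimum n* satisfies the first-order condition
    B'(n^{*}) = c.

Components are added until the marginal benefit of additional variety just equals the marginal cost it imposes on customers – the daily calculus of the magazine editor or bookshop proprietor.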
The more difficult problems, however, occur in cases where the preferences of
customers change over time or are influenced by the choices of others. Many changes
in preference over time are essentially unpredictable and are simply a part of the risk
that every platform producer faces. The one area where there is some basis for
claiming foreknowledge is changes in preferences that occur as the result of greater
experience. ‘Lead users’ accumulate experience earlier than others and thereby
provide some indication of the changes in other users’ preferences over time.6 It is
important to note, however, that the evolution of the preferences of lead users may
follow systematically different paths from those of later adopters. When the choices of
users are interdependent, there is the possibility of positive network externalities. In
the case of network externalities, getting ahead of rivals in user adoption means
staying ahead because later adopters follow the choices made by the majority of
earlier adopters. Conversely, once adoptions of a particular platform begin to fall, the
decline may be sustained because later adopters will choose rival platforms. These
effects accelerate the rate at which new platforms are introduced and provide a bias
towards more inclusive standards in cases where customers prefer a higher degree of
variety because of the possibility of generating a virtuous circle in adoption. Supply
side dynamic effects, such as learning and scale economies, may further augment
positive externalities in demand and accelerate the decline of incumbent platforms
because they provide rivals with a stronger dynamic position.
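The ‘getting ahead means staying ahead’ dynamic can be illustrated with a minimal Pólya-urn-style simulation (a sketch with assumed parameters, not an empirical model): each arriving adopter follows the current majority with some probability, so an early lead tends to become self-reinforcing.

    import random

    def simulate_adoption(steps=10_000, follow_majority=0.8, seed=1):
        """Toy model of positive network externalities between two platforms.

        Each arriving adopter copies the currently larger platform with
        probability `follow_majority` and otherwise picks at random, so an
        early lead tends to lock in.
        """
        random.seed(seed)
        a, b = 1, 1  # initial adopters of platforms A and B
        for _ in range(steps):
            if random.random() < follow_majority and a != b:
                choice = 'A' if a > b else 'B'
            else:
                choice = random.choice('AB')
            if choice == 'A':
                a += 1
            else:
                b += 1
        return a, b

    a, b = simulate_adoption()
    print(f"Platform A: {a} adopters; Platform B: {b} adopters")
    # Across different seeds, one platform typically ends up with the large
    # majority of adopters, even though the two start symmetrically.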
A short example that illustrates many of the problems of platform definition is the
case of the IBM personal computer. The hardware interface standards defining the
IBM personal computer platform involved ‘card slots,’ a parallel printer port, two
serial ports, and a video output port. Devices that observed the technical standards for
these interfaces could be electronically connected to the IBM PC and, with
appropriate software, it was possible to receive and transmit data through these
interfaces for purposes such as data capture, communication and display. The IBM PC
proved to be a highly viable platform ‘vision’ and created strong network externalities
in adoption, and corresponding augmentation of learning and economies of scale on
the supply side encouraged the entry of producers of compatible components.

[6] Steinmueller 2000.
One of the most remarkable features of this platform definition was its degree of
‘inclusivity’ or ‘openness’ – several of the interfaces (e.g. the parallel and serial port
definitions) were based upon industry standards that were available for general use,
and for the interfaces that IBM did control, such as the expansion card connectors,
IBM did not initially adopt an exclusive strategy. Moreover, even though proprietary,
the definition of the software for manipulating these communication interfaces and
communicating data within the IBM PC (the BIOS) was possible to imitate. In other
words, IBM had created a platform that rival platform producers could duplicate or
‘clone’ and chose the path of subjecting itself to vigorous competition. Eventually
IBM did introduce a proprietary standard for the ‘card slots’ called the ‘micro-channel
architecture’.7 However, by this time (1987), producers of IBM PC ‘clones’ had
become strong enough to define their own expansion card interface standard. The
resulting ‘standards battle’ was inconclusive, and a third standard defined by Intel, the
PCI (Peripheral Component Interconnect) expansion card bus, came into use
and has remained dominant.
Although the example of the IBM PC is sometimes used as an illustration of the
danger of ‘losing control’ of the standards for platform definition, it can also be
argued that IBM’s strategy greatly accelerated the rate of adoption of personal
computers, that IBM benefited substantially from the resulting market expansion, and
that the strategy created increased demand for all sorts of other hardware and
software that IBM produced as complementary components in larger networked
information systems in which individual personal computers became user network
interfaces. In this process, IBM
nearly extinguished the primary rival platform producer (Apple Computer Inc.),
which eventually regained some of its position (with the help of Microsoft) by
adopting a platform compatibility strategy.
[7] The micro-channel bus was introduced in higher-end PS/2 models beginning in 1987.
2.2 Maintaining the Platform
If the problems of specifying a viable platform can be overcome, the next series of
problems that the platform producer faces involve maintaining the platform. One of
the more important issues in maintaining the platform, the problem of harnessing
network externalities, has already been considered as a feature of platform
specification. Maintaining a platform, particularly in technologically dynamic
industries, does require a continuous process of platform re-definition. This sub-section and the next, however, consider the difficulties of maintaining compatibility
between platform components. There are two aspects to this problem. The first relates
to the commercial damage or legal liability that the platform producer may face as the
result of the decisions made by component producers, and is dealt with in this sub-section (2.2). The second set of issues relates to the technological problems of
maintaining compatibility in complex systems where interface standards are
incomplete or inadequate for the task of maintaining the platform, and is considered in
the next sub-section (2.3).
Compared to many other industries, the electronics industry, and the personal
computer industry in particular, appears to have a guardian angel protecting it from
the perils of product liability litigation. Personal computers and personal computer
software are sold without warranties regarding their fitness for any purpose and few
legislatures have even attempted to grapple with whether such warranties should be
imposed in the interests of consumer protection.
Nonetheless, all of the information and communication technology industries do
suffer from problems of ‘commercial damage’ (damage to reputation, reluctance to
purchase based upon negative experiences in the installation or use of the product,
etc.) stemming from problems with the inter-operability of components. In plain
language, commercial damage is likely to arise when a customer is unable to properly
install or use a component that is supposed to ‘work with’ a particular platform.8 Whose
fault is it? A company with a successful platform is partially defended from these
problems by the network externalities that are intrinsic to their success. The customer
is likely to assign blame to the component supplier, rather than the platform.
From the component supplier’s viewpoint, however, the problem may be more
complex. The platform producer may have provided a specification for the interface
that is defective or incomplete, or there may be interaction effects with the products of
other component suppliers. Even if willing to take responsibility for the problem, the
component producer is likely to be at the short end of a case of asymmetric
knowledge. While it is possible to recheck the technical compatibility rulebooks, what
is the next step? The possibility for ‘finger pointing’ – where the component maker
blames the platform producer, who blames either the complaining component
producer or another component producer, who in turn blames… – is likely to arise.
Resolving this sort of problem has some value for even the successful platform
producers and provides a basis for investment in the knowledge necessary to provide
technical support to the suppliers of components.

[8] Such damage may occur even when there have been no claims of compatibility, a frequent source of cartoons and jokes in the computer community.
When the context of platforms is broadened, some of the inter-dependencies present in
the case of personal computer hardware and software systems may be weakened, but
similar problems are apparent. The difficulty is that, almost regardless of how loosely
coupled a system is, there are possibilities for interactions between components in a
platform that are unanticipated or undiscovered. It is simply not practical to develop
‘testbeds’ that reflect all of the possible states of even a modestly complex system.
Knowledge is generated in use and one of the problems that platform producers face
is how to capture and manage this knowledge in a productive way.
Managing this sort of knowledge is not straightforward for most platforms because of
the problems of ‘magical thinking’ – the spurious assignment of blame and the
cognitive differences between experienced users of a technology and the technology’s
designers. These problems serve to limit the extent to which effective ‘feedback’ can
be supplied on interoperability. In many industries, the absence of effective feedback
is coupled with a lack of investment in technical compatibility standards testing and
quality assurance.
Thus, outside the personal computer and software domain, where implicit product
liability standards such as fitness for purpose apply, platform producers face
important legal liability risks in choosing a strategy of inclusive standards for their
platforms. These problems are amplified in jurisdictions where ‘deep pocket’
principles (the possibility of assessing disproportionate shares of an award in
situations of joint liability based upon the ability-to-pay criterion) exist.
These problems would not exist if it were possible to define technical compatibility in
a complete and unambiguous way or to effectively divide product liability risk
between platform and component producers. Neither is a realistic possibility and the
maintainability of platforms is reduced. Instead of emergent standards, platform
producers are likely to choose sponsored or negotiated standards where the
commercial relationship between platform producer and component supplier can
explicitly deal with the assignment of risk for product liability and take account of the
commercial damage that may ensue when technical compatibility standards fail to
work as advertised.
2.3 Dealing with Complexity
As observed in the previous sub-section (2.2), the number of hazards facing platform
producers would be reduced if it were possible to develop complete specifications for
component compatibilities used on a particular platform. The very rapid technical
progress associated with integrated circuits and networks of microprocessor-based
sub-systems described in the introduction to this paper suggests that the complexity of
artefacts in everyday use, such as the personal computer as well as a growing array of
industrial, medical, and transport equipment, is increasing. Increases in complexity
make a robust separation between interfaces necessary if the platform is to avoid the
sorts of problems identified in sub-section 2.2. Moreover, increasing complexity
requires increased capacity of systems to recover from errors since the greater number
of components in such systems increases the probability that one or more of the
components will fail or produce errors in their expected operation in any given unit of time.
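The point about component counts can be made precise with a standard reliability calculation (illustrative figures, not from the paper): if each of n components fails independently with probability p in a given unit of time, then

    % probability that at least one of n independent components fails
    % in a given unit of time, each with failure probability p:
    P(\text{at least one failure}) \;=\; 1 - (1 - p)^{n}

With, say, p = 0.001 and n = 100, this gives 1 − 0.999^100 ≈ 0.095: roughly one system in ten experiences a component fault in each unit of time, even though every individual component is highly reliable.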
There has been considerable concern about issues of system reliability, particularly in
the case of systems with embedded software such as automobile antilock braking
systems (ABS) or medical devices that employ electromagnetic radiation or particle
emission (Lee 1999; 2000). A basic approach to the design of such systems is,
wherever possible, not to automate those operations that pose even a remote threat to
operators or to other people. There are, however, limits to this design approach – it
may be necessary, for example, to automate certain potentially hazardous operations
because ‘manual’ approaches are ineffective or, as in the case of ABS, because the
whole point of the system is to bypass manual control. Complex systems with
embedded software are also likely to involve a host of electronic sensors, data
communication circuits, memory, and process elements. Each of these elements is
subject to hard errors (defects in fabrication or design that occur when a system
arrives at a particular state that may be rare and hence may elude testing procedures)
and soft errors (transitory errors arising from voltage spikes or cosmic rays).
The discussion in the two previous sub-sections (2.1 and 2.2) suggests some of the
reasons for the persistence of these problems. As noted in sub-section 2.1, timing is of
key importance in the success of a platform. Earlier introduction may be essential
where there are substantial positive network externality effects in the adoption of the
platform. Moreover, as indicated in sub-section 2.2, the discovery of incompatibilities
within complex systems often involves the accumulation of real-world experience,
particularly in platforms where new and different components are integrated.
Together, these incentives provide for a hazardous outcome in which systems may not
receive adequate testing or where information about system problems may ‘fall
between the cracks’ between the various organisations responsible for the platform
and its components.
The assignment of product liability does provide an incentive for limiting these effects
although it may also create some perverse incentives. If, for example, an investment is
made in gathering information about platform failures or anomalies, this store of
information or specific excerpts from it can become the basis for arguing that an
organisation had ‘prior knowledge’ of a problem. It may therefore be in a company’s
interest not to gather such information despite its potential value in resolving
incompatibilities and averting their potentially disastrous consequences.
The central issue here, as with the discussion in the preceding sub-section (2.2), is the
‘robustness’ of compatibility standards. It is possible to imagine, from an ideal
perspective, compatibility standards that would permit a complete modularity in which
failures or defects were detected and either resolved automatically or used to trigger a
shutdown of the system that would limit or completely avoid any negative
consequences. The cycle of detection and resolution is most obvious in the case of
‘real time’ systems such as those involving embedded software, but it is also relevant
to more loosely coupled systems in which compatibility standards are employed. In
these cases, the actions to be undertaken involve recalling products, providing
‘patches’ or ‘fixes’ to components within the platform, or issuing advice about
hazards or conditions of use that may produce problems. The effectiveness of these
actions may vary considerably depending on the nature of the problem and the scale
of resources applied in notifying customers of such problems. For example, in the
case of systems embedded in automobiles, service and maintenance networks are
directly linked to the platform (automobile) manufacturer and information about
product safety hazards flows on a regular basis. The implementation of these
networks partially reflects the history of this industry with its large product liability
claims. As noted in sub-section 2.2, however, other industries such as the personal
computer industry have had little experience with product liability claims and,
perhaps partly due to this inexperience, have less extensive and well developed
networks for communicating such information.
Platform industries also differ in their rates of technical obsolescence. In those
industries where obsolescence is more rapid, the time during which problems may be
identified and resolved is reduced, as are the incentives to resolve the problem. This
argument suggests that one approach to the problems raised by the lack of robustness
of complex systems is to raise the rate at which systems become obsolete. If errors
are not propagated from one generation to another, they may be ‘buried’ by the
obsolescence of the earlier generation. There are, however, problems with this approach. The
absence of problem detection and resolution leads to an absence of learning about
problem detection and resolution as well as little pressure to improve the design
methods to avoid problems.
The consequences of the complexity problem for the standards method chosen are
similar to those identified in the previous sub-section (2.2). Because the risks of
incompatibility create externalities for all of the players involved with the platform,
there are incentives to choose sponsored or negotiated standards approaches where
such risks can be addressed in the contractual arrangements between the platform
producer and the component suppliers. With emergent standards, the customer for the
platform becomes, implicitly or explicitly, a more important partner in the effort to
detect and resolve problems arising from complexity. Despite the perverse incentives
created in some industries by product liability considerations, the development of
better channels of communication between customers and platform and component
producers provides an important means for dealing with complexity problems.
2.4 Simulating the Platform
A general definition of a platform has been employed in this paper in order to widen
the scope for comparative study of system interdependencies and to broaden the
definition of ‘compatibility’ in the direction of the economic idea of complementarity.
The last two sub-sections focussed on tightly coupled systems in which these wider
and broader features of the platform definition were of less importance. This sub-section returns to a consideration of the organisational and technological methods
used to co-ordinate component suppliers for a broader range of platforms. The
modelling and simulation of platforms and the various supply chains and knowledge
exchanges that contribute to their integration are of increasing significance for many
industries in addition to those where such modelling and simulation are indispensable
aspects of platform creation and maintenance. Thus, while it would be completely
impossible to design an integrated circuit containing more than a million transistors
without the aid of computers and software, it is becoming almost as difficult to create
and maintain a large retail store network or a large construction project without
similar tools.
The modelling and simulation of platform products and services is a central purpose
of some of the world’s largest software companies such as Oracle, SAP, or Microsoft.
These companies’ enterprise resource planning applications, and their supply of
project management, computer aided design and engineering, and other more
specialised software, are of central importance to the possibilities for platforms. They
provide the basis for ‘virtual’ models of the platform that are informed by a flood of
real world data acquired from the growing array of inventory, point-of-sale, and
ordering terminals.
One of the major drivers of platform organisation is the ability to represent the various
knowledge flows involved in constructing the platform, co-ordinating suppliers, and
delivering the platform. The means of knowledge representation are heavily reliant
not only on advances in information and communication technologies, but also on the
design of software information systems and the collection of organisational
procedures that support the gathering and updating of data.
Ironically, in platforms with more loosely coupled components the profitability and
productivity gains derived from using these processes may be higher than in the
platforms where they have long been indispensable. Historically, industries as diverse
as aircraft manufacture, financial services, and travel services have had to produce
integrated information systems in order to survive and these systems have become
part of doing business in these industries. The delay in adoption of such methods by
other industries appears to be related to the scale of organisational change required
and the modest gains that could be achieved through any individual step. It is still
true, for example, that many service companies are implementing customer relations
management systems in a ‘piecemeal’ manner, with fairly modest expectations about
their eventual financial return, but in recognition that adoption of such systems will
eventually be necessary to maintain competitive position (Steinmueller 2003). In
other words, virtual models of the platform may be particularly useful in industries
that have not previously made the organisational changes necessary or created a
complete network environment for linking the virtual model to the real world.
There are limits, however, both to the improvements that can be expected from such
virtual models and to their ability to substitute for other forms of knowledge
generation and exchange. A principal limitation is that achieving a relative
competitive gain over rivals from the use of such models is difficult. As new
applications come into service and demonstrate their value, rival firms adopt the same
techniques. After a round of adoptions has occurred, relative competitive position will
depend upon how other aspects of the business are managed. It is only in cases where
leadership in successive waves of improved systems can be made part of the business
model that such models are likely to have a sustained competitive impact.9
Assessing the potential for virtual models of platforms to substitute for other forms of
knowledge generation and exchange is more complicated. It is certainly true that these
models allow for more realistic simulations and therefore provide the capability to
experiment with different arrangements of the components of the platform or the
means by which it is integrated. Whether these experiments will provide better
outcomes than those based on localised knowledge and decision-making is not so
clear. For example, despite their widespread promotion, it is difficult to find
independent assessments of the bottom line contribution of ‘data warehouses.’ Even
more detailed levels of modelling and simulation, such as those employing details
about the engineering of products, involve the problem of product obsolescence noted earlier –
there would be little value in having the millions of pages of information that were
written about the Apple II personal computer in a ‘data warehouse’, and detailed
information about contemporaneous products may have as little value in only a few
years.
Despite these limitations, modelling and simulation methods are becoming more
important in relation to individual aspects of platform design and integration.
Computer aided design and engineering techniques are now used in almost all
manufacturing industries, enterprise resource planning systems are employed in both
manufacturing and services, and the use of project management software is
ubiquitous. The consequence of these developments is that an ever-growing stream of
data is being generated with much of it exchanged between companies to support the
co-ordination and maintenance of component supply chains. While, in principle,
modelling and simulation techniques provide a framework for integrating these data,
enormous cognitive and organisational problems remain in the translation of these
data into useful knowledge. The extent of these problems within organisations has
been documented in a very rapidly expanding literature of which D’Adderio (2001;
2004) are examples.
In principle, many of the issues identified in this section could be resolved if virtual
models and simulations of the platform could be successfully engineered. The data
that are increasingly generated as a by-product of the operations of component and
platform producers’ activities could, in theory, be integrated into these models and
provide the basis for platform specification, maintenance, and identification and
resolution of the problems of incompatibility and complexity. However, this point of
development has not yet been achieved. More coherent models and simulations are
likely to arise from more closely co-ordinated and monitored relationships between
platform producers and component suppliers. In addition to the greater degree of co-development that is likely to exist in these relationships, the need to monitor
adherence to compatibility standards and, indeed, the very definition of compatibility,
are likely to be more detailed than for the looser structures suggested by emergent
standards. Thus, again, sponsored and negotiated standards-making processes have an
advantage, even though it is now foreseeable that modelling and simulation
techniques may provide greater scope for emergent standards in the future.

[9] It is possible, for example, that the e-commerce companies Amazon and e-Bay, or the information services company Google, may be sponsors of platforms that are able to achieve this type of continuous change. It is too early, however, to distinguish these companies’ recent success from the general expansion in Internet use in which they were among the first movers in online service offerings.
2.5 Asset Specificity
Each of the previous sub-sections has suggested constraints on modularity – reasons
that sponsored or negotiated processes are likely to prevail over emergent ones – based
upon technical considerations. In sub-section 2.1 the role of asset specificity in
sponsored and negotiated standards setting was mentioned. In that sub-section the
concern was with the threshold or hurdle to the creation of platforms posed by the asset
specificity that might be imposed on the component producer. Here, the issue of asset
specificity is applied to the institutions employed for setting compatibility standards.
Emergent standards offer the important social welfare advantage of creating a more
competitive market. Even in the case where these standards are proprietary, their
visibility provides an incentive to create competing compatibility standards. If these
standards support the emergence of a successful platform, there is the further
advantage of market expansion or earlier development of the market. Finally,
emergent standards reduce or eliminate the problem of asset specificity. They
eliminate the problem if there is only a single platform based upon the emergent
standards – component production proceeds in tandem with platform adoption. They
reduce the problem if there are competing platforms based upon distinct emergent
standards in which case the component producer must choose to develop knowledge
and capabilities for one or more of the platforms. The risk lies in picking the wrong
platform or dissipating resources by backing several platforms that prove
unsuccessful.
The downside or negative feature of emergent standards, however, is synonymous
with the public welfare benefit. Rather than the producer being able to generate
producer surplus through the restrictions to entry that are likely to occur in the case of
sponsored or negotiated standards, competition delivers this surplus to the consumer. The only way
to avoid this is for the component producer to have a sustained technological
advantage over rivals in producing the component dictated by the emergent standard.
This is possible of course. For example, 3Com has remained a relatively strong
company in the production of Ethernet local area networking components that it
pioneered. In many more cases, however, first mover advantage has not been a
sufficient shield for pioneering companies and they have either exited the market or
become commodity producers.
From the component supplier’s viewpoint, sponsored standards create a monopsonistic
market for the platform producer and the further risk that the platform producer will
want to create barriers to entry for other platform producers, who might either imitate
the sponsored standards or develop rival ones. The social welfare implications of this
structure are also unfavourable as discussed below in Section 3, and the possibility of
social welfare loss is determined by the effectiveness of competition between
platforms.
Negotiated standards create the textbook economic case of bilateral monopoly and
what Teece (1986) refers to as co-specialised assets (the platform producer also
becomes dependent upon the component supplier as a result of the negotiation). The
textbook solution to bilateral monopoly – market power held by both the buyer and
the seller – is vertical integration: joint profit maximisation by the merged enterprise
is more profitable than the outcome that occurs when buyer and seller each exercise
their market power independently. The problem of co-specialised assets therefore
seems likely to be an important source of instability in the use of negotiated
standards, and the continued use of such standards suggests that the textbook solution
is missing something. What is being overlooked is the role of the knowledge related
to specialisation that supports this arrangement. By acquiring or merging with the
component supplier, the platform producer has to extend the scope of the enterprise to
encompass the formerly independent company’s activities. While the tensions of
negotiation may have provided a useful discipline for each of the companies, this
discipline is likely to disappear in the merged entity. And, finally, the costs of ending
the relationship with the component supplier are much higher once vertical integration
has occurred. All of these reasons indicate why negotiated standards persist without
vertical integration, but none of them eliminates the basic problem of the social
welfare effects of bilateral monopoly.
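The gain from vertical integration in this setting can be made concrete with a minimal
worked example – a sketch not drawn from this paper, using the successive-monopoly
(double marginalisation) case as a limiting form of bilateral monopoly in which the
component supplier holds all of the bargaining power and final demand is assumed
linear. Let demand be \( p = a - bq \) and let the component be produced at constant
marginal cost \( c \), with all other costs normalised to zero. A vertically integrated
firm chooses output to maximise total profit:
\[
q^{I} = \arg\max_{q}\,(a - bq - c)\,q = \frac{a-c}{2b},
\qquad \pi^{I} = \frac{(a-c)^{2}}{4b}.
\]
If, instead, the component supplier sets a transfer price \( w \) and the platform
producer then chooses \( q \) to maximise \( (a - bq - w)q \), the supplier’s best
choice is \( w = (a+c)/2 \), and the two firms’ combined profit is
\[
\pi^{S} + \pi^{P} = \frac{(a-c)^{2}}{8b} + \frac{(a-c)^{2}}{16b}
= \frac{3(a-c)^{2}}{16b} \;<\; \pi^{I} = \frac{4(a-c)^{2}}{16b}.
\]
The shortfall of \( (a-c)^{2}/16b \) is the joint surplus that merger – or a negotiated
arrangement with side payments – can recover, which is why the persistence of
negotiated standards without integration requires the knowledge-based explanation
offered above.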
2.6 Summary
This section has examined the process of platform definition and maintenance from a
‘positive’ technological and economic perspective and has undertaken the task of
assessing the roles of sponsored, negotiated and emergent compatibility standards
setting processes. Since modularity may be strongly associated with emergent
standards, a by-product of the analysis is a critical review of the limits of modularity
as a means for designing and maintaining platforms, and as an industrial structure.
The conclusions are relatively straightforward. The issues chosen for examination
were based upon experience in the information and communication technology
industries where modular approaches are widespread and powerful. Nonetheless, in
these industries a series of problems persists – problems of defining platforms,
maintaining them in the face of uncertain compatibility, dealing with growing
complexity, developing better means of modelling and simulating components and
platforms (or in the ICT context, systems), and problems of asset specificity. In the
case of asset specificity, social welfare favours emergent standards and a modular
platform design and industrial structure. There are, however, incentive problems for
component suppliers under each of the standards making institutions. In all four of the
other issues examined, sponsored or negotiated standards appear to have advantages
over emergent standards. In most cases, and for the same reasons, the industrial
structures of monopsony associated with sponsored standards and bilateral monopoly
associated with negotiated standards appear to have advantages for at least one of the
actors compared to the more competitive industrial structure associated with
modularity.
This analysis should not be taken as a criticism of either modularity or emergent
standards. Where they can be employed, there is little question that modularity and
emergent standards offer superior social welfare outcomes. In addition, in areas where
they have deficiencies, such as the management of knowledge with regard to system
defects, it is relatively straightforward to craft interventions that would improve
performance, and these are discussed in the next section (Section 3). The difficulty is
that the persistent problems with modularity experienced in the information and
communication technology industries are mostly amplified when we consider other
industries. Although the case has certainly not been proven, there is considerable
basis for suspecting that the application of modularity in other industries faces severe
problems that may be mitigated in the future only by the emerging potential of further
developments in modelling and simulation techniques.
3 Governing Platforms
Section 2 considered how platforms are created and maintained and, in the process,
considered the institutional framework for standard setting governing the relationship
between platform producers and component suppliers. This discussion identified three
institutional configurations for standards setting – sponsored, negotiated and emergent
standards – and associated each of these with an industrial structure – monopsony
(platform producer market power), bilateral monopoly (simultaneous market power
by component producer and platform producer), and an industrial structure with
‘greater competition’ (although how ‘great’ remains to be examined).
This section relies heavily on the discussion in the previous section to make a series
of observations concerning the strategy of the players, platform producers and
component suppliers, and the possibilities for third party intervention by government
or by the concerted actions of platform customers. As in the previous section, there is
an incidental ‘target’ of the analysis and that is the current enthusiasm for ‘open
standards,’ which may be taken as a special case of the emergent standards identified
there. While emergent standards were taken to be generally
available to entrants, there was no assumption made concerning the terms under
which they might be available.
In this section we consider two varieties of emergent standards – licensable standards
and open standards. The somewhat awkward term ‘licensable standards’ refers to
proprietary standards that can be licensed to control who may participate in the
platform, or employed to transfer revenue from component suppliers to the owner of
the proprietary content (typically a patent or copyright) of the licensable standard.
Although, in principle, this proprietary content might be owned by either the platform
producer or one of the component suppliers, the analysis pursued here will be based
on platform producer ownership (the case of component supplier ownership is worth
considering, and corresponds to some real world cases such as mobile telephony, but
is left for another paper).10
This section is organised in three subsections. The first (sub-section 3.1) introduces a
simple typology in which the ‘newness’ or maturity of a platform is compared with
two different levels of technological opportunity in order to provide insight into the
conditions under which open standards are jointly in the interest of platform producers
and component suppliers. The second (sub-section 3.2) examines strategic options for
platform producers and component suppliers to exploit the standards making process
to create market power, and the arguments that have been offered concerning the role
of ‘open standards’ in mitigating or eliminating this market power. The third
(sub-section 3.3) examines the means by which users, through concerted action or
through government mediation, may intervene in these processes to achieve better
outcomes than the markets might deliver.
10 The following sub-section (3.1) draws heavily upon Institute for Prospective
Technological Studies (2001), Chapter 2, for which the author of this paper was
principally responsible.
3.1 Standards in Relation to Technological Opportunity and
Platform Maturity
Licensable standards become part of platform producers’ specifications once they
believe that it will be possible to successfully promote the platform despite the
reduction (or elimination) of competition in some or all of the components. At the
extreme, a single company may serve as gatekeeper for participation in the
platform, i.e. a single company decides who will be licensed. For example, video
game producers who make the interface between the software cartridge and the game
console (the platform) proprietary are attempting a ‘complete control’ or monopolistic
strategy. Alternatively, the licence fee may be used to generate further revenues for
promoting the platform or investing in its further development. In either case, this
strategy provides a means to differentiate the platform from rivals and is a bid to
retain control of the platform compared with the alternative of an open compatibility
standard. Open standards are employed when the benefits of market expansion and/or
resistance of customers to a ‘controlled platform’ make this approach a superior
economic decision for producers. An intermediate strategy may employ a licensable
standard that involves only control over who may participate – licence fees are nominal.
This case, approximating an open standard, arises when a group of companies believe
that it is in their collective interest to have a common standard in order to compete
with (typically larger) sponsors of licensable standards and the ‘club’ that is formed
by their licensees.
The ability to establish and maintain ‘open standards’ depends upon the stage of
development of the platform and upon the nature of technological opportunities. The
following table (Table 1) summarises the possibilities for establishing and
maintaining open standards in the four cases arising from considering two states of
platform development and two states of technological opportunity. This analysis is
particularly relevant to markets where there are ‘adoption externalities,’ in which the
value of a standard is increased by the number of users.
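To fix ideas before turning to the table, the adoption externality can be given a
minimal illustrative formalisation (the notation here is assumed for exposition and is
not drawn from this paper). Suppose user \( i \) values the standard at
\[
v_{i}(n) = s_{i} + \theta n ,
\]
where \( s_{i} \) is the user’s standalone valuation, \( n \) the number of adopters
and \( \theta > 0 \) the strength of the adoption externality. User \( i \) adopts when
\( v_{i}(n^{e}) \ge p \), with \( n^{e} \) the expected level of adoption. Because
realised adoption feeds back into expectations, both low-adoption and high-adoption
outcomes can be self-confirming, so the prospects for establishing an open standard in
the four cases of Table 1 depend as much upon the co-ordination of expectations as
upon the technology itself.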
Table 1 Establishing Open Standards for Platforms
(rows: level of technological opportunity; columns: stage of platform development)

Low technological opportunity – New platform
Establishing: Relatively easy.
Caveat: Establishing an early standard can be regarded as costly when compared with
superior alternatives that later emerge. However, the same early standard may provide
a basis for first mover advantages in other markets. For example, the US NTSC
standard was inferior to later standards, but allowed colour television programming to
develop in the US at an earlier date.

Low technological opportunity – Mature platform
Establishing: Difficult. 1) Incumbents have a vested interest in maintaining product
differentiation; 2) existing users will be ‘orphaned’.
Maintaining: Easy.
Caveat: It is not certain that open standards will be pro-competitive under these
conditions.

High technological opportunity – New platform
Establishing: Very difficult. 1) Incentives are needed to resolve technological and
market uncertainties; 2) mandating a particular open standard without resolving
technological and market uncertainties may result either in alternative proprietary
standards (e.g. voice modems vs ISDN) or lock-in to inferior standards (e.g. analogue
mobile telephony).
Caveat: Technological uncertainties can be reduced by appropriate RTD policy (see
sub-section 3.3).

High technological opportunity – Mature platform
Establishing: Uncertain. 1) Depends upon the distribution of gains to incumbents
(Microsoft adopted Internet browser standards because doing so increased the value
of its mature operating system, but is unlikely to create a Linux version of Word
because of the losses to revenues in its operating system market); 2) also depends
upon the extent of heterogeneity in user needs – a broader distribution of needs makes
open standards less effective.
Maintaining: Difficult. Microsoft and others are altering the HTML standards for
competitive advantage, but are being resisted.
Caveat: Establishing an open standard may reduce the incentives for market entry
against entrenched incumbents because it reduces the profitability of technological
differentiation.
The simplest case is mature platforms with a low level of technological opportunity.
In these platform markets, the prevalence of licensable standards can be largely
attributed to the value of these standards to incumbent platform producers in
defending their market share against other incumbents or entrants. Advocates of open
standards would maintain that the principal advantages to users of more open
standards would be enhanced price competition. For example, mandating open
standards would bring the incumbent companies into direct competition. It is
important, however, to consider the costs to users orphaned by a change in standards
and the reduction of competition between platforms (David 1986). Licensable
standards do provide the opportunity for higher than normal profits. However, when
they exist, these ‘extra’ profits may be re-invested in efforts to gain market share by
actions such as educating and supporting users – an instance of competition between
platforms. It is, therefore, not appropriate to conclude that an enhancement in price
competition will necessarily result in higher levels of customer benefit. Maintaining
already-established open standards in mature platform markets with low technological
opportunities is relatively easy because entrant platform creators will have few
opportunities for creating superior product offerings in a bid to win market share from
incumbents.
Mature platform markets with high technological opportunities are a bit anomalous
since high technological opportunities tend to support the creation of new platform
markets. Nonetheless, there are examples of such markets. Although the proposition is
arguable, the personal computer application software market, in general, remains
immature with substantial opportunities for establishing new platforms. Within this
market, however, there are segments that have reached maturity, or at least have
achieved a temporary stability. One example is the small producer-good segment
represented by installer utilities – the software that installs new applications on a
user’s personal computer. In this market, over one hundred of the leading packaged
software companies use InstallShield products for installing new software.
InstallShield is an independent, privately owned company that was founded when
Microsoft launched Windows 3. It is a ‘licensable standard’ of the hybrid type – it is
available to all for a fee. InstallShield’s market position exists not because there is one
best way to install software on Windows machines, but because there is considerable
benefit to both producers and users in the familiarity of the product – a network
externality created by reputation effects. In this case, there is a ‘positive sum’ game
between InstallShield and its customers, the leading packaged software companies.
Although all of these companies could produce their own installation interface (a
licensable platform) for their software products, it is beneficial for them to adopt the
InstallShield solution. The fact that all the parties benefit from this arrangement
creates stability for the InstallShield solution.11 In this case, network externality
effects arising from trust combined with a first mover advantage serve to create a
stable and effectively open standard for application software installers.
Cases of new platforms with low technological opportunity produce relatively
predictable outcomes. In the very simplest of these cases, there are only one or a few
best ways to design something, and one of these becomes the standard. Initially, this
standard may be licensable, but whatever proprietary content protection it has
eventually expires and a common open standard prevails. In more complex cases, the
actors collectively may have an interest in an open standard from the outset. The
major problem then is to choose the best compromise among the available or soon to
be available technologies, since whatever standard is chosen is likely to be
synonymous with the definition of the platform. The US adoption of the NTSC
standard for broadcast television, a compromise among available and soon to be
available technologies in the light of frequency allocation policies, produced a
standard markedly inferior to the PAL and SECAM standards later adopted in Europe
and in many other parts of the world. The message from this experience is simple.
Achieving an open standard in new markets with low technological opportunity is a
matter of design choice, but it is a design choice that is likely to stick as widespread
adoption occurs, particularly if regulatory practice does not allow for the potential of
upgrading. There is, however, a trade-off between platform market growth and the
early adoption of an open standard.
11 The very largest software companies do, however, maintain their own platforms
for software installation.
Early adoption of an open standard will support the development of the platform. In
the case of the US NTSC television broadcast standard, earlier entry into colour
television programming may have created persistent advantages in the content market,
even if later standards offered superior features. More generally, the dynamic
development of platform markets may reinforce the strength of commitment to the
standard first adopted, creating a barrier to improvement and requiring a larger
‘step’ in performance or features to encourage users to adopt new standards and new
technologies.
Finally, there is the case of open standards in new platform markets with high
technological opportunities. A high level of technological opportunity suggests
competition between licensable standards. Open standards may, however, prevail if
they materialise in a timely way and users favour them. This is particularly true in
platform markets with high levels of network externality – an early lead by a timely
open standard or a set of incrementally improved open standards is likely to persist. A
low degree of heterogeneity among users, e.g. the early days of InstallShield’s life
when application software companies were seeking to outsource the installation
process, is one example of the conditions supporting user interest in an open standard.
A higher level of user heterogeneity not only makes concerted action to advocate an
open standard more difficult, it also provides greater entry opportunities for licensable
standards.
It is particularly tempting to try to intervene in this case in order to achieve more rapid
market development through the promotion of open standards. Under these
conditions, policy-makers must consider the possible consequences before intervening
to promote open standards. Several negative outcomes are possible. These include the
orphaning of users whose choice of the open standard is bypassed by the emergence
of technologically superior alternatives or the massive adoption of standards that are
perceived as clearly inferior to what should have been implemented. Substituting
political decisions for the economic commitments of market actors raises the question of
accountability. If the decision is political, who will be held responsible for its
outcomes? Private sector decision-making is also imperfect since the costs imposed
on users cannot effectively be recouped from the producer. The market solution does,
however, have the advantage that the company committing users to a bad standard is
driven out of the market, a limited but often effective form of accountability.
By considering the joint effect of the maturity of the platform market and
technological opportunities it is possible to conclude that there are some conditions
under which open standards have a better chance of arising spontaneously because
they are in the interests of all of the market actors. The following table (Table 2)
summarises these prospects. The most troublesome feature of this table is the
difficulty of achieving open standards in new platforms with high technological
opportunities since failing to do so may reduce the rate of market growth, lower social
welfare, and lead to persistent market power of the licensable standards as the
platform matures. In the case that maturation involves a ‘drying up’ of technological
opportunities, it is difficult to replace a licensable standard that was established
earlier. If a high level of technological opportunity persists in the mature platform
market, there are likely to be significant problems in aligning the interests of the
market players to adopt a new open standard.
Table 2 Summary of the Prospects for Open Standards
(rows: level of technological opportunity; columns: stage of platform development)

Low technological opportunity – New platform
The market enlargement effect favours open standards, but concerted action may be
difficult to mobilise.

Low technological opportunity – Mature platform
If the market enlargement effect did not lead to an open standard when the platform
was new, it is difficult to establish one later. The proprietary content of the licensable
standard will, however, eventually expire, creating an open standard for the prevailing
design.

High technological opportunity – New platform
Open standards are very difficult to achieve, as licensable standards provide more
resources to explore technological opportunities.

High technological opportunity – Mature platform
The prospects for open standards are uncertain, as they depend on finding an
alignment between producer and user interests regarding how to explore
technological opportunities. Network externality effects may nonetheless favour open
standards.
These conclusions leave two major questions. The first concerns the social welfare
effects of licensable standards. The most important issue here is whether it is possible
for the licensable standard to be exploited to extend control of the market and achieve
a higher degree of market power. The possibilities and limits to this strategy are
considered in the next sub-section (3.2). The second question is what interventions
might offer the prospect of a better social welfare outcome than market solutions that
create a high level of market power. This question is addressed in sub-section 3.3.
3.2 Exploiting Standards to Build Market Power
The risks that licensable platform standards may increase market power and diminish
social welfare are twofold. One hazard is that licensable standards may allow the
platform producer to capture the definition of the technological trajectory, the
direction and, to some degree, the rate at which future technological progress is made.
This possibility is examined in the first half of this sub-section. The second risk is that
the platform producer may be able to extend the market power offered by licensable
standards vertically into the market for platform components. This risk is considered
in the second half of this sub-section.
In platform markets with rapid technological progress, the advantage available to the
first mover – the company that pioneers the development of the platform and its
associated innovations – or to other companies that quickly follow the first mover
reduces the market for additional or later innovations. The key issue is whether there
are under-exploited technological opportunities that would support the entry and
growth of new firms or the readjustment of market position among current players.
With high levels of technological opportunity, these under-exploited opportunities are
likely to persist and to foster competition between platforms, producing a higher level
of social welfare. The relative difficulty of achieving a persistent technological
advantage in information and communication technology markets provides an
incentive for firms to adopt strategies that aim at shaping the rate and direction of
technological progress. Many of these strategies are discussed by Shapiro and Varian
(1998) and need not be replicated here.
The basic point of these strategies is to provide platform producers with an advantage
over rivals in defining and implementing the next incremental step in technological
advance. The strategy must be chosen so that the incremental benefits are sufficient to
encourage an economically significant number of users to upgrade to the new
‘generation’ of technological solution, after which network externalities may reinforce
the position of the incumbent platform producer. To implement such strategies,
platform producers need to take account of user costs in making the adoption decision
or take actions that will reduce the costs of learning and adaptation relative to rivals.
In some circumstances, the incumbent platform producer may be able to sequence the
introduction of new products with sufficient regularity to reduce the ‘space’ available
for competitive entry, in effect defining the ‘dominant design’ in a dynamic fashion,
based upon the firm’s next product offering. Under these conditions, it does not make
a great deal of difference whether the associated technical compatibility standards are
‘proprietary’ or ‘open.’ In either case, the dominant firm is likely to retain its position
because of its ability to define how the next generation of the product will be
extended and developed.
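The upgrade decision described above can be summarised in a simple illustrative
condition (again, the notation is assumed for exposition rather than drawn from this
paper): user \( i \) adopts the new generation when
\[
\Delta b_{i} + \theta n^{e} \;\ge\; p + c_{i}^{\mathrm{learn}} + c_{i}^{\mathrm{adapt}},
\]
where \( \Delta b_{i} \) is the incremental benefit of the new generation,
\( \theta n^{e} \) the network reinforcement from the expected number of other
adopters, \( p \) the price of upgrading, and the final two terms the user’s learning
and adaptation costs. The incumbent’s strategy amounts to timing and sizing product
releases so that this condition holds for an economically significant number of users,
while rivals – who cannot draw upon the installed-base term \( \theta n^{e} \) – find
the analogous condition much harder to satisfy, whichever way the associated
compatibility standards are licensed.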
In this situation of ‘dynamic market control,’ it is tempting to suggest that earlier
disclosure of the intentions of the dominant firm might extend competition.
Specifying an effective ‘rule’ for achieving this objective, however, has costs and
other consequences. Early disclosure would, in effect, commit the firm to particular
design choices, whereas if these choices remained under its sole control it could make
last-minute changes – flexibility that can serve as a strategic tool for raising rivals’
costs. Rendering last-minute changes impossible, however, may reduce the rate of
technical progress. Methods for dealing with this possible source of market power are
considered in the next sub-section (3.3).
Can a licensable standard owner extend its control of the standard to vertical markets?
Vertical markets are either upstream, involving the supply of inputs, or downstream,
including products or services that must be added to the ‘standardised’ product to
create the platform good or service that is sold to the end user. If such vertical control
is possible, is it in the interest of the standard promoter to exercise this control? In this
context, control connotes the ability to choose among suppliers, excluding those who
would compete with ‘favoured’ suppliers who comply with the dominant firm’s terms
and conditions. Although the anti-competitive intent of such control is highlighted
here, it is important to emphasise that the incentives for monopolistic behaviour are
not necessarily increased by vertical control. For example, this situation may be better
from a social welfare viewpoint than the ‘bilateral monopoly’ that arises in the case of
negotiated standards.
The feasibility of such vertical control depends upon how the ‘standardised’ product
or service is combined with other inputs to produce a final product. A platform
producer that selects open standards benefits from its network of suppliers of related
products and services. The exercise of vertical control over these firms may increase
the possibility that they will defect to an alternative standards sponsor, deposing the
dominant platform and the dominance of its promoter. Managing a dominant position
therefore involves the selective exercise of control, excluding only those firms that are
likely to create a viable coalition in favour of a rival standard and entrant. This can be
a difficult task to achieve: both upstream and downstream suppliers will want
assurances of market growth, and can become dissatisfied about sharing it with other
suppliers.
Nonetheless, it is possible for such vertical control strategies to succeed for two
reasons. First, the process of creating a viable alternative network of suppliers is
costly and time-consuming. Efforts to do so will meet with competitive responses
from the dominant firm or firms, so it is necessary to sustain the commitment from
participants who will be individually approached by the dominant firm. Second, in
many markets, the suppliers may have relatively specialised capabilities and/or be
relatively young firms in no position to defect from the dominant coalition.
These two reasons have somewhat different implications for policy. They are both
varieties of co-ordination failure where the costs of negotiating and providing
assurances may be too high to achieve the objective. In the first case, dissatisfaction
with the standards promoter may simply reflect greed, and it would be unwarranted to
presume that the firm might do better under some other arrangement. In this case, a
policy intervention that would encourage competition may result in an artificial
fragmentation of the market and the loss of incentives for some types of innovation or
other investment in market development. A similar logic applies in the second case
but, here, it is more likely that market changes will result from policy intervention. If,
for example, a credible rule prohibiting undue termination of supply relationships
could be implemented, the supplier firms would have far less to lose from entertaining
a different standards sponsor, and potential suppliers would have greater opportunities
to enter the market. (The incumbent would have less incentive to reward its existing
suppliers by excluding competitors.) The same conclusion applies to the first case if it
is not only greed, but also coalition formation problems that are responsible for this
situation.
Under these conditions, it will be not only feasible but also desirable, from the
standards sponsor’s standpoint, to exercise vertical control. As noted earlier, doing so
does not have a short-term social welfare effect greater than that stemming from the
market power of controlling the standard. For the dominant competitor or
competitors, however, exercising vertical control can consolidate or strengthen
barriers to entry and thus, over time, increase the economic value of these firms’
positions and the potential for loss of social welfare.
3.3 Intervening
In considering intervention, it is important to note at the outset that market-led
solutions, in which users exert concerted action, may play a vital role in defining open
standards and in overcoming or mitigating the problems that arise when licensable
standards are used to create positions of market power, whether by controlling
technological progress or by extending vertical control. Users may either produce or
endorse common ‘open standards.’ The relative infrequency with which this occurs is
an indication of the difficulty of mobilising users, which is often further complicated
by their heterogeneous needs. Developing better methods for mobilising users would
provide an important alternative to sponsored standards and, while relatively limited
in its scope of application so far, this approach has much greater potential for
development.
While network effects increase the number of information and communication
technology markets in which ‘dynamic market control’ is a real possibility, many
other markets continue to be characterised by large incremental technological steps
that are accompanied by substantial uncertainty for both incumbents and new entrants.
Under these conditions, efforts to achieve this control can be countered by the
commitment of rival firms to open standards and to a relatively free flow of
knowledge exchange during the refinement of the technology. This strategic
possibility has been a major force favouring the adoption of ‘open standards’
strategies in several important markets such as workstations (Sun Microsystems) and
Internet Protocol (IP) routers (Cisco). In the two examples cited, the firms hold
dominant positions, but have committed to a technological race in continuing to offer
products at the frontier.
Markets where technological change occurs in large incremental steps with substantial
uncertainty provide an opportunity for proactive government policy to accelerate the
rate of market development. There are three possible policy instruments for doing
this.
First, it is possible to support research and development efforts aimed at creating a
larger body of technical knowledge relevant to supply-side innovation. The
availability of such knowledge allows both producers and users to better gauge or
anticipate broad features in future market developments and to begin exploiting their
potential earlier. The formation of expectations about the future development of
technologies is an important influence in achieving market co-ordination, and
research that results in credible ‘visions’ or ‘scenarios’ concerning future
developments can help align the investment behaviours of the private sector. This
knowledge also provides an opportunity for entrepreneurial firms to identify enabling
or complementary technological developments that will be needed as the market
develops. The historical (1950-1970) role in the US of military and space research
was to uncover information about technological opportunities and thus to encourage
earlier market development than would have occurred solely through commercial
exploration of (more immediate) frontier opportunities.
Second, policies supporting an increase in the publicly available knowledge about
likely trajectories or paths of future development can hasten the development of
provisional standards aimed at earlier deployment of technologies. When
technological advances are large, both producers and users have an incentive to wait
for further developments to emerge from the research laboratory. If laboratory
developments are further accelerated, however, it becomes possible for both
producers and users to anticipate their being ‘reduced to practice’ (developed and
deployed). The availability of ‘advance information’ reduces suppliers’ apprehensions
that they will commit themselves to a technology that will become obsolete before it
can be deployed. The Winchester hard disk market is an example of such a process,
albeit one where a large firm (IBM) rather than public policy produced the ‘advance
knowledge’. IBM’s Almaden laboratory in San Jose, California, accelerated the
development of magnetic drives with the aim of strengthening IBM’s competitive
advantage. Some of the basic principles of the Winchester disk became general
industry knowledge. By identifying the key bottlenecks and constraints to
technological advance, the Almaden laboratory encouraged entrepreneurs (in
particular Alan Shugart, one of the researchers) to reduce the technology to practice
sooner rather than later. In particular, the market for personal computer hard disks
proved to be far larger than for the most advanced and highest performance drives.
Although both markets were profitable, the pace of technological development
stemming from ‘non-frontier’ market developments became faster than that based on
the ‘high performance’ research trajectory. As a result, the disk drive market became
much more competitive and open to entry.
Third, in some circumstances public policy can accelerate market development
through procurement policy. Government purchase of actual products encourages the
full reduction to practice of research developments and, thus, can ‘launch’ a market
earlier than it would develop solely through market forces. Procurement policy is an
expensive and uncertain method for accelerating market development. A principal
example of successful procurement policy is integrated circuits, which were produced
for US military purchasers and whose initial prices were entirely uncompetitive with
those for discrete transistors. The early development of the technology provided the
US with a decade of commercial advantage in the market. However, it would have
been very difficult to commit public funds for this procurement for any purpose other than
national defence. Moreover, government procurement of other advanced technologies
such as breeder reactors and coal gasification plants has led to the squandering of
huge amounts of public resources, an illustration of the risks of procurement policies.
The acceleration of market development does not, however, ensure that a competitive
market structure will emerge. In markets with ‘open standards,’ it is possible for a
dominant competitor to endorse ‘open standards’ without endangering its competitive
position. A major reason for this is that such markets are characterised by the supply
of a substantial array of complementary products and services that form a supporting
‘network’ with associated network effects. The implicit and explicit endorsement by
these complementary product and service suppliers serves to support the claims of the
dominant firm that it offers the best implementation of the open standard in terms of
price, technical features, and other criteria. Thus, even if open standards operate to
reduce entry barriers, the network effects of complementary product and service
suppliers may buttress the position of the dominant firm. In effect, under ordinary
circumstances the fact that the standard is open has little influence on the market
outcome. It is important, however, to note that open standards provide a basis
for rapidly reconstructing a market if the dominant firm should falter in its efforts to
maintain the pace of technological advance. This is the principal reason that, other
things being equal, it is a desirable outcome from a social welfare viewpoint to favour
the creation of open standards. That is, while one cannot be sure that open standards
will increase competition in the short run, they do provide long run assurance that
market development will not falter as a consequence of mistakes made by the
dominant firm.
4 Conclusion
The vision of an economic world governed by ‘open standards’ supporting the
modularity of product architectures and an inclusive division of labour embracing
entrants with a better idea or a higher level of efficiency is extremely attractive. It
offers a new and inclusive image of the international division of labour, a formula for
preserving competition in industries where increasing returns might otherwise limit
market competition, and a means to reduce the wasted time and deferred benefits of
markets that do not develop because they are fragmented by standards wars.
This paper has explored the limits of this vision by considering the problems with
modularity that have persisted in the information and communication technology
industries where technological and other factors favour its adoption. It finds that many
of the same limits that have constrained modularity in the information and
communication technology sector are amplified when we consider other industries. In
addition, many of these other industries suffer from severe knowledge co-ordination
problems due to the asymmetries in knowledge between component suppliers and
system producers. A principal reason that modularity has been so effective in the
electronics industry and the growing part of the industry that involves data
communication is the fortuitous combination of rapid market expansion and enormous
technological opportunity. Duplicating these conditions in other industries would be
very difficult.
There is, nonetheless, some prospect for the spread of modularity to other industries
due to the market opportunities offered by the international division of labour. The
lesson that has been learned from the electronics industry is that modularity is
possible so long as well-defined compatibility standards for interfaces can be created.
It is a bigger step to achieve modularity through open standards.
In defining the standards that govern product or service platforms, there continues to
be a variety of incentives for establishing and maintaining sponsored or negotiated
standards – in some cases there is little choice but to employ these methods of
standards setting despite their potential for reducing competition and, ultimately,
social welfare. For open standards to prevail through market processes, they must be
in the interest of market players, a situation most likely to be found in markets with
the potential for rapid growth. Such markets, however, are also likely to be ones
where technological opportunities remain high and are relatively unexplored. It is
difficult to make a convincing case for the superiority of open or emergent standards
as a mechanism for knowledge generation and exchange in such circumstances.
The dangers of intervening to achieve what the market may fail to deliver, whether
due to co-ordination failure or to the market power inherited by successful first
movers, are
significant. Mandating open standards or outlawing the proprietary restriction of
standards by sponsors or coalitions of platform producers and component suppliers
creates unacceptable risks of producing technologically laggard standards or platform
markets that fail to engage the commitment of the market players.
In the fortuitous case where the benefits of market expansion through open standards
align the incentives of all of the actors, open standards may be expected to prevail
along with a modular approach to platform architecture and a highly innovative
competitive platform market. Even if first mover advantage creates a dominant
competitor, the discipline provided by open standards ensures that the rate of market
growth through innovation will reduce or eliminate the costs to social welfare of the
existence of a dominant competitor.
Some of these same advantages may be found in markets where sponsored or
negotiated standards prevail. The market power available in these markets may be
offset by the competition between platforms and the common need of rivals to
innovate to maintain competitive position.
This possibility does not, however, eliminate the risk that sponsored or negotiated
standards making processes may eventually lead to social welfare losses through
control of the evolution of technology or the extension of market power into other
markets. These possibilities should be taken seriously and actions taken to reduce
their likelihood. Such actions involve working with, rather than intervening in, the
market by providing better information about emerging technological opportunities
and accelerating the pace at which emergent technologies are deployed in the market.
Public policy supporting improved knowledge can also play a fundamental role in
overcoming some of the persistent problems with modular
approaches to platform design. In particular, better means for modelling and
simulating the operation of product and service platforms are likely to emerge if the
development of the research base for these techniques is accelerated.
Government research policy might also commit to an improvement in the research
foundations for understanding the role of information and communication
technologies in creating the conditions necessary for modularity and open standards.
Re-allocating a large fraction of the immense resources that have been devoted to
exploring the technological frontier to the examination of the social and economic
frameworks into which these new technologies must be integrated would create the
basis for new innovations and the more rapid dissemination of new and better ideas
arising from practice.
References
Baldwin, C. Y. and K. B. Clark (1997). "Managing in an Age of Modularity."
Harvard Business Review 75(5): 84-93.
Baldwin, C. Y. and K. B. Clark (2000). Design Rules: Volume 1 The Power of
Modularity. Cambridge, Massachusetts, MIT Press.
Beniger, J. R. (1986). The Control Revolution: Technological and Economic Origins
of the Information Society. Cambridge MA, Harvard University Press.
Blakeslee, T. R. (1975). Digital Design with Standard MSI and LSI. New York, John
Wiley and Sons.
D’Adderio, L. (2001). "Crafting the Virtual Prototype: How Firms Integrate
Knowledge and Capabilities Across Organisational Boundaries." Research
Policy 30(9): 1409-24.
D’Adderio, L. (2004). Inside the Virtual Product: How Organizations Create
Knowledge through Software. Cheltenham, Edward Elgar.
David, P. A. (1986). Narrow windows, blind giants and angry orphans: The dynamics
of systems rivalries and dilemmas of technology policy. Stanford Institute for
Economic Policy Research, Technological Innovation Project, Working Paper no. 10.
Hounshell, D. A. (1984). From the American System to Mass Production, 1800-1932.
Baltimore, MD, Johns Hopkins University Press.
Hughes, T. P. (1989). American Genesis. New York, Viking.
Hughes, T. P. (1993). Networks of Power: Electrification in Western Society, 1880-1930.
Baltimore, MD, Johns Hopkins University Press.
Hughes, T. P. (2000). Rescuing Prometheus. New York, Vintage Books.
Institute for Prospective Technological Studies (2001). Future Bottlenecks in the
Information Society: Report to the European Parliament, Committee on
Industry, External Trade, Research and Energy (ITRE). Seville, IPTS.
Lancaster, K. J. (1979). Variety, Equity and Efficiency: Product Variety in an
Industrial Society. New York, Columbia University Press.
Lee, E. A. (1999). Embedded Software — An Agenda for Research. Berkeley, CA,
Electronics Research Laboratory, University of California, Berkeley.
Lee, E. A. (2000). "What’s Ahead for Embedded Software?" IEEE Computer
(September): 18-26.
Millman, S., Ed. (1983). A History of Engineering and Science in the Bell System:
Physical Sciences, 1925 - 1980, AT&T Bell Laboratories.
Millman, S., Ed. (1984). A History of Engineering and Science in the Bell System:
Communication Sciences, 1925 - 1980, AT&T Bell Laboratories.
Mumford, L. (1934). Technics and Civilization. New York, Harcourt, Brace.
North, D. C. (1990). Institutions, Institutional Change, and Economic
Performance, Cambridge University Press.
Rosenberg, N. (1976). Factors Affecting the Diffusion of Technology. Perspectives on
Technology, Cambridge University Press: 189-210.
Shapiro, C. and H. Varian (1998). Information Rules: A Strategic Guide to the
Network Economy. Cambridge, Harvard Business School Press.
Simon, H. A. (1969). The Sciences of the Artificial. Cambridge, MA, MIT Press.
Simon, H. A. (1996). The Sciences of the Artificial - 3rd Edition. Cambridge, MA,
MIT Press.
Steinmueller, W. E. (2000). Paths to Convergence: The Roles of Popularisation,
Virtualisation and Intermediation. Convergence in Communications and Beyond.
E. Bohlin, K. Brodin, A. Lundgren and B. Thorngren. Amsterdam, Elsevier
Science: 383-396.
Steinmueller, W. E. (2003). Assessing European Developments in Electronic
Customer Relations Management in the Wake of the dot.com Bust. Industrial
Dynamics of the New Digital Economy. J. F. Christensen and P. Maskell.
Cheltenham, Edward Elgar: 233-262.
Steinmueller, W. E. (2003). The Role of Technical Standards in Coordinating the
Division of Labour in Complex System Industries. The Business of Systems
Integration. A. Prencipe, A. Davies and M. Hobday. Oxford, Oxford University
Press: 133-151.
Teece, D. J. (1986). "Profiting from Technological Innovation: Implications for
Integration, Collaboration, Licensing and Public Policy." Research Policy 15:
285-305.
Williamson, O. E. (1975). Markets and Hierarchies: Analysis and Antitrust
Implications. New York, The Free Press.