Scientific Structures and Research Programs in Digital Communication
Paal Engelstad
University of Oslo, Norway
October 2000
1. Introduction
Philosophy of science has many times proved its ability to provide profound and valuable understanding of
scientific fields. This article explores digital communication research from a philosophical perspective.
The field of digital communication includes two major traditional research areas: computer communication and
telecommunication. Despite many overlapping goals and characteristics, there are still elements that form a
fundamental distinction between the two traditions. We will show that each tradition constitutes a different scientific
structure.
The analyses in this paper are mainly based on Kuhn’s theory of scientific paradigms (Kuhn, 1970) and on
Lakatos’ theory of research programs (Lakatos, 1970). Chalmers (1999) shows that although Lakatos’ perspective
differs significantly from that of Kuhn in a number of ways, Lakatos’ research programs also share many similarities
with Kuhn's paradigms. They are both complex scientific structures of internally consistent theories.
Egyedi (1999) uses the term ‘paradigm’ to characterize the two frameworks widely deployed within
telecommunication and computer communication. In her article “the extent to which technological paradigms
explain the outcome of committee processes for telematic services is explored. Three cases of Open System
Interconnection (OSI) standardization are studied in the period 1979-1988.” She presents some paradigmatic
elements that “offer clues to which factors may be relevant in the cases described”. However, she restricts the scope
of her analyses to the paradigmatic elements and ignores the importance of the scientific structures that these
elements form: “In order to avoid an unproductive theoretical discussion on paradigm identification, I focus on
paradigmatic elements.”
In contrast to Egyedi, we believe that such a theoretical discussion could be valuable, and we will explore how
these basic elements are interrelated to form scientific structures. Our article investigates to what extent the
frameworks of digital communication represent scientific structures, in the sense of either Kuhnian scientific
paradigms or Lakatosian research programs, and tries to depict the inner workings of these structures. Unlike Egyedi,
we will focus on the scientific community of digital communication in general and not only on the standardization
processes, albeit such processes are important activities within the community. The basic elements (e.g. fundamental
principles, assumptions, values, shared beliefs, and hypotheses) that cause incommensurability between the two
scientific structures are emphasized.
2. Terminology
It is assumed that the readers are familiar with the fundamentals of computer communication (or ‘computer
networking’) and telecommunication, and know the basic terminology that is established within these areas. Readers
are referred to Tanenbaum (1996), Comer (2000), Stevens (1994), or Peterson and Davie (2000) for a thorough
introduction to the field.
3. Scientific Research Related to Digital Communication
Within the field of digital communication it is difficult to draw a distinct line between science and technology,
since the scientific research fields are closely related to the technology at hand. Moreover, one of the main objectives
of the research undertaken is technological deployment.
Much of the research within the field of digital communication is directed towards the development of
frameworks, general principles, abstractions, architectural models, terminology, mechanisms and
algorithms. Some of this work is clearly similar to the “puzzle-solving” that Kuhn claims typifies the nature of
science (Kuhn, 1970, pp.35-42).
Obviously, the “puzzle-solving” characteristic is not sufficient to determine whether the activities are of a
scientific nature. Kuhn claims that a scientific community can be identified by the existence of a value system that
“the scientific group deploys in periods of crisis and decision” (Kuhn, 1970, p.209). The value system of the digital
communication community is explored in this article.
According to Kuhn, a scientific community can also be identified by the fact that its members “provide the only
audience and the only judges of that community’s work” (Kuhn, 1970, p.209). It would be difficult to claim that no
part of the research within digital communication is scientific in nature, even if there is so much commercial interest
around the field that the industry and society act as secondary audiences.
It is worthwhile to note that Kuhn in his work is very preoccupied with physics as the model of natural sciences,
and that most of his examples are taken from physics. Within physics the result of “puzzle solving” is evaluated with
respect to how well it corresponds to scientific observations of nature, while in digital communication science a
proposed solution is judged rather by how effectively it works according to some predetermined standard or value
system.
Although we have argued that much of the research within computer communication and telecommunication is
scientific by nature, the truth of this proposition is not a prerequisite for the applicability of Kuhn’s or Lakatos’
theories. As Kuhn (1970, p.208) points out, the main theses are applicable to many other fields as well. Naturally, the
same goes for the theses of Lakatos.
G. Dosi (1982) has extended Kuhn’s theses to the field of pure technology, be it science or not, and has
introduced the notion of “technological paradigms”. By the same token, the following analyses of the paradigmatic
elements of computer communication and telecommunication should be valuable even to those who do not agree that
these fields are of a scientific nature.
4. The History of the Digital Communication Community
4.1 How to locate responsible groups
Telecommunication and computer communication represent two distinct traditions within the field of digital
communications. The groups of practitioners within either of the two traditions form what we will refer to as the
computer communication community and the telecommunication community. Hence, the digital communication
community refers to the group of practitioners that spans both the telecommunication and computer communication
communities.
According to Kuhn (1970, pp. 179-180), any “study of paradigm-directed or of paradigm-shattering research
must begin by location of the responsible group or groups.” One “must first unravel the changing community
structure of the sciences over time.” Based on this advice, we will summarize the historical development of the two
traditions of the digital communication community. The following overview is based mainly on Egyedi (1999),
Yates (1989) and Abbate (1994).
4.2 A historical overview of telecommunication
The field of telecommunication dates back to 1844 when the telegraph was introduced in America. For the first
time, information could travel close to the speed of light. The telephone was invented in 1876 and the telephone
exchange in 1877-1878. Soon the Public Telephone Operators (PTOs) dominated the field by means of national
monopolies, and national telephone networks grew in size and number of subscriptions. National monopolies
facilitated international cooperation, and made it easier for national PTOs to inter-network to offer international
telephony services to the public.
The fact that telephony services were provided by national monopolies influenced the operational principles of
telecommunications. Operators naturally preferred centralized network control. This enabled them to offer reliable
transmission and to maintain the integrity of the network. The monopoly situation facilitated this. A hierarchical
architecture was soon imposed on the network since centralized control otherwise would have become an
unmanageable task as telephony networks grew.
The technological implementation of the telephony services was based on chosen design principles: The
terminals (i.e. phone sets) and the telecom service provision (i.e. voice transmission) are dedicated to one purpose
only (i.e. speech interaction). Since the (uncompressed) traffic flow of such dedicated services is often quite
predictable, a service session (i.e. a telephone call) consumes a fixed overhead and is allocated a fixed bandwidth.
These requirements could easily be met by circuit-switched and connection-oriented transmission based on physical
analogue connections according to the technology at hand in those days. Packet switching was not possible before
the advent of digital transmission technology.
The most cost-efficient way of providing a dedicated service, such as telephony, was to locate the necessary
resources in the centrally controlled network. The service was developed and implemented by skilled operators and
the complexity was hidden from the cheap and “dumb” user terminals.
Other telecommunication services, such as telex, telefax and early computer communication, as well as 1st and 2nd
generation mobile telephony, were introduced later. It is important to note that operational and technical principles
for telephony, as outlined above, were passed on to these newer areas of telecommunications. Thus, telephony is at
the heart of the telecommunication paradigm.
The success of the computer industry and computer networking was a driving force in the process of digitizing
the telephone networks. Computing and digital technologies were deployed in switches and other network
equipment. Later, digital encoding of speech was standardized. By the end of the 1990s, many national telephone
networks had been fully digitized. Although networks were no longer based on analogue technology and the provisioning of
dedicated services, many of the operational and technological principles remain, and still they form an important part
of the telecommunication framework.
The advances in computing led to new opportunities and new demands. Wide Area Networks for computer
communication first used leased telephone lines to provide some of the earliest telematic services. PTOs wanted to
expand further into the market for data transmission. The X.25 standard for packet-switched networks was developed
in the mid-1970s in CCITT, which is a standardization body of the PTOs. The effort was based on virtual circuits
that kept track of each individual connection. The resulting behavior was similar to that of the traditional telephone
networks, despite the introduction of packet switching. The reliability support and the processing power were added
to the network. Simplicity and efficiency arguments led to the requirement that the operations of the subnets were
based on identical technology, which was feasible due to the monopolies in telecommunication. The complexity was
hidden in the network, and generic services were offered to the “dumb” terminals through predefined interfaces.
These operating principles are in line with the operating principles of the centrally controlled telephone networks
outlined above. Hence, the X.25 technology, and later the Asynchronous Transfer Mode (ATM) technology, represents
a prolongation of the telecommunication community’s perception, which has been heavily influenced by telephony.
4.3 A historical overview of computer communication
The computer communication community dates back to when the focus of computer research shifted from analogue
to digital computers. An early example of computer communication was when mainframes were extended with
terminals and remote batch stations. Some mainframes could also be accessed from outside the organization via
telephone lines. Single-vendor mainframes centrally provided the processing power, and front-end processors dealt
with the communication aspects.
Advances in microelectronics increased processing and memory capacity and reduced production costs, and the
market demand shifted from mainframes to mini- and microcomputers. With a hardware lifecycle of a few years,
software design focused on reusability of modules as well as on upward and downward compatibility, rather than on
the durability of whole applications.
Workstations and personal computers were introduced, and were soon interconnected via high speed Local Area
Networks (LANs). Computing and communication facilities were now distributed in these multi-vendor subnets. The
origin of computer communication was the exchange of data within these organizational LAN domains (while the
origin of telecommunication, in contrast, was telephony in the national network domains). The scope of computer
communication was later extended across organizational boundaries, by the interconnection of different LANs.
The LAN works as a shared media link between interconnected hosts. To better utilize the shared link, hosts are
permitted to occupy all the link bandwidth as long as no other host wants to transmit simultaneously. This is in
contrast to circuit-switched technology where bandwidth is allocated to hosts in fixed partitions, independent of the
actual bandwidth requirements of the hosts at the time. The maximum packet length defined by the subnet limits the
transmission time of any one host. After that period, the sending host must compete with other waiting hosts (or
follow a similar link sharing scheme) for the opportunity to send the next packet. This way of sharing the
transmission resources is convenient because the traffic characteristics depend on the communicating applications
run at the host terminals, and they are difficult to predict.
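To make the contrast concrete, the following minimal Python sketch (purely illustrative; the capacity and demand figures are hypothetical and not taken from the text) compares how much traffic a link carries when bandwidth is partitioned into fixed, circuit-style shares versus when active hosts may statistically share the whole link.

```python
import random

LINK_CAPACITY = 10_000_000   # a 10 Mbit/s shared LAN link (hypothetical figure)
NUM_HOSTS = 10

def demand_per_host() -> int:
    """Bursty demand: most hosts are idle, a few want a lot (illustrative only)."""
    return random.choice([0, 0, 0, 0, 5_000_000])

random.seed(1)
demands = [demand_per_host() for _ in range(NUM_HOSTS)]

# Circuit-style allocation: every host gets a fixed 1/NUM_HOSTS share,
# whether or not it has anything to send.
fixed_share = LINK_CAPACITY / NUM_HOSTS
carried_fixed = sum(min(d, fixed_share) for d in demands)

# Shared-medium (packet) allocation: active hosts may use the whole link,
# one maximum-length packet at a time, up to its total capacity.
carried_shared = min(sum(demands), LINK_CAPACITY)

print(f"carried with fixed circuit shares: {carried_fixed / 1e6:.1f} Mbit/s")
print(f"carried with a shared medium     : {carried_shared / 1e6:.1f} Mbit/s")
```

With bursty, unpredictable demand, the shared medium carries more traffic than fixed partitions, which is the utilization argument sketched above.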
During the 1970s the LANs worked as testbeds and scientific playgrounds for the computer communication
proponents. There they gained experience with, and confidence in, packet-switching technology.
Between 1967 and 1972 the U.S. Defense Advanced Research Projects Agency (ARPA, later DARPA) developed the
ARPANET, a nation-wide packet switched computer network linking ARPA research contractors. In order to
connect its various networking projects, like the packet radio network (PRNET) and the satellite network (SATNET),
ARPA began its Internet program in 1973.
The program wanted to interconnect existing networks rather than to build a new one. To accommodate diverse
subnet technologies of the different interconnected networks, spanning both packet switched and circuit switched
networks, ARPA decided that their Internet system should assume little about the underlying technology. Few
requirements were imposed on the network and most of the intelligence was put in the host terminals instead.
Nothing more than a minimum best-effort forwarding service of datagrams (i.e. packets) was required. The subnets
should also support the simple Internet Protocol (IP), which mainly supplies the datagram forwarding service with a
uniform global method of addressing among the connected networks.
While X.25 guaranteed error-free delivery of packets, the Internet was designed to offer high-quality service
independent of the quality of the underlying networks. The network should be fault tolerant and continue to work,
even under hostile conditions of war. End-to-end reliability is supported by the Transmission Control Protocol (TCP),
which runs on top of IP in the hosts. This is in contrast to X.25, where these functions are implemented in the
network. The processing requirement of end-to-end connections was placed on the terminals (i.e. computers) – and
expert users were required to take on the additional complexity. Only minimum requirements were put on the
interconnected subnets.
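The following sketch illustrates, under simplified assumptions of our own rather than the sources’, where the intelligence ends up in this design: the network function only forwards (and may silently drop) datagrams, while a TCP-like stop-and-wait loop in the hosts retransmits until delivery is acknowledged.

```python
import random

random.seed(42)

def best_effort_send(packet):
    """The network's only duty: forward a datagram if it can; it may silently drop it."""
    return packet if random.random() > 0.3 else None   # 30% loss, purely illustrative

def reliable_transfer(chunks):
    """Host-based, TCP-like stop-and-wait: retransmit each chunk until it is acknowledged."""
    delivered = []
    for seq, chunk in enumerate(chunks):
        while True:
            arrived = best_effort_send((seq, chunk))        # data crosses the network ...
            if arrived is not None:
                ack = best_effort_send(("ACK", seq))        # ... and so must the ACK
                if ack is not None:
                    delivered.append(arrived[1])
                    break
            # otherwise the sending host times out and retransmits;
            # the network itself is never asked to guarantee anything

    return delivered

print(reliable_transfer(["hello", "world"]))                # ['hello', 'world']
```

Reliability thus lives entirely in the end hosts, in contrast to X.25, where the corresponding functions are implemented inside the network.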
4.4 Two conflicting approaches to networking
As we see, the technical and operating principles of independent autonomous subnets of computer
communication, outlined in Chapter 4.3, are very different from the principles of the telephony networks, outlined in
Chapter 4.2.
Until the late 1980s there were ongoing discussions between the telecommunication and computer
communication communities about the merits of X.25 and ATM on the one hand and TCP/IP on the other. Abbate
(1994) made the following remark:
Reading these arguments [between the X.25 and TCP/IP proponents in articles of professional journals] can
be a disorienting experience, as the same terms (such as internetworking) are often used to indicate radically
different concepts. Arguments tend to be framed as the evaluation of trade-offs between different technical costs
and benefits, yet the assessment of these trade-offs depends on tacit assumptions about the goals and operating
environment of the networks. (…) The arguments for X.25 and TCP/IP both make technical sense given a
particular world-view, but neither side spent much time discussing the assumptions upon which they based their
claims.
There is obviously an incommensurability of frameworks between the perspectives of the two parties. It seems
like proponents of a telecommunication paradigm or research program are arguing with proponents of a computer
communication paradigm or research program.
The International Organization for Standardization (ISO) attempted to unify the conflicting perspectives on digital
communication by developing the Open Systems Interconnection (OSI) model and by announcing it as an International
Standard in 1983. Although the OSI model provides a common framework for discussing protocols, it has failed to
resolve conflicting approaches to networking (Abbate, 1994).
5. Paradigms in Digital Communication
5.1 Paradigmatic elements
Some of the important elements of the scientific structures of computer communication and telecommunication
areas are treated in the historical overview above, as well as in the work of Egyedi (1999). These paradigmatic (or
science-structural) elements are summarized in Table 1 below.
Table 1. Paradigmatic elements within two scientific structures of digital communication
Paradigmatic elements | Telecommunication | Computer communication
Management domain | National monopolies of PTOs | Private organizations
Location of control | Central / Centralized | Decentralized / Distributed
Network design | Hierarchical | Symmetrical
Product focus | Services | Applications
Location of transport support (reliability and flow control) | Additional resources located in the network | Additional resources located in the hosts
Product location | Services offered through network interfaces (SAP/NAP) | Services offered by host-based applications
Product life-cycle | Long durability of services | Short durability of services
Stack optimization | Vertical optimization (dedicated lines and terminals) | Horizontal optimization (multi-purpose terminals and transmission)
Flow characteristics | Fixed resource requirements | Versatile resource requirements
Switching principles | Circuits / Virtual circuits | Packet switching / datagrams
Connection principle in the network | Connection-oriented | Connection-less
Operational priority | Integrity and reliability of network | Flexibility for users
Required network reliability | High | Low
Development horizon | Long-term | Short-term
Tariff principles | Distance and duration | Connectivity and volume of flow
Standards fora | CCITT, ITU-T, ETSI, 3GPP, etc. | ISO, IETF, W3C, etc.
5.2 Depicting the paradigms of digital communication
In a Kuhnian perspective the paradigmatic elements should be mapped to scientific paradigms. According to the
1969 postscript of Kuhn (1970, pp. 114 - 210), a paradigm is best described as a disciplinary matrix.
The first component of the matrix is the symbolic generalizations. They usually have a formal form (like
Newton’s second law, f = ma), and they often serve as laws or definitions. The framework of the OSI model could be
regarded as a symbolic generalization of digital communications.
The second component of the matrix is the exemplars. These are examples of how a generalization (e.g. the OSI
model) could be applied to a specific situation (e.g. an implemented network architecture with all corresponding
protocols). A student of computer networking learns how to apply the generalization by solving the problems at the
end of the chapter in her textbook or by carrying out the programming implementations (e.g. of a simple
communication protocol) that are included in a course she is attending.
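As an illustration of such an exemplar, a student exercise might look like the following hypothetical Python sketch (our own example, not from the sources), where each lower layer wraps the data it receives in its own header on the sending side and strips it again on the receiving side.

```python
LAYERS = ["transport", "network", "link"]       # hypothetical lower layers, top to bottom

def encapsulate(app_data: str) -> str:
    """Each lower layer wraps whatever it received from the layer above in its own header."""
    frame = app_data
    for layer in LAYERS:
        frame = f"[{layer}-hdr|{frame}]"
    return frame

def decapsulate(frame: str) -> str:
    """Headers are stripped in the reverse order on the receiving side."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}-hdr|"
        assert frame.startswith(prefix) and frame.endswith("]")
        frame = frame[len(prefix):-1]
    return frame

frame = encapsulate("GET index.html")
print(frame)               # [link-hdr|[network-hdr|[transport-hdr|GET index.html]]]
print(decapsulate(frame))  # GET index.html
```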
The third element of the matrix consists of the shared beliefs. For example, the message-passing model where
messages are passed from node to node (i.e. from router to router) along the arcs (i.e. links) of the network is an
abstraction that may serve as a shared belief. The Shannon and Weaver model of communication may serve as
another example of how the practitioners perceive ‘communication’.
The fourth element includes the scientific values that the paradigm is founded on. Carpenter (1996) elucidates the
values of the Internet community: “the community believes that the goal is connectivity, the tool is the Internet
Protocol, and the intelligence is end to end rather than hidden in the network.” Survivable connectivity is rated as
more important than transmission reliability. While the practitioners in the telecommunication community may have
a preference for centralized network control structures, Carpenter says: “Fortunately, nobody owns the Internet, there
is no centralized control…”
Carpenter’s work gives an overview of other values that should serve as measures of how scientific proposals
should be evaluated by the community members. Scalability is of utmost importance: “All design must scale readily
to very many nodes per site and to many millions of sites.” Simplicity and modularity are also highly regarded:
“When in doubt during design, choose the simplest solution. (…) If you can keep things separate, do so.” Finally, an
implementation-based bottom-up approach is preferred to a top-down one: “In many cases it is better to adopt an
almost complete solution now, rather than to wait until a perfect solution can be found. (…) Nothing gets
standardized until there are multiple instances of running code.” (Carpenter, 1996)
5.3 Shortcomings of the paradigmatic approach
We will not perform a complete mapping of the paradigmatic elements into two disciplinary matrices, although
we have depicted above how this could be done. One problem of such mappings is that most of the paradigmatic
elements in Table 1 above can be mapped to values, while it is more difficult to map the elements to symbolic
generalizations, exemplars, and shared beliefs. This shortcoming of using the approach of Kuhn may stem from the
fact that Kuhn’s theses are based upon his own experiences as a physicist. He uses physics as an ideal model of a
science, and this model may not fit the area of digital communication well.
The approach of Lakatos might serve our purpose better: Lakatos simply underscores that some laws, principles,
assumptions and values (that form the hard core) are more fundamental than others (that form the protective belt).
We will explore the scientific structure of digital communication in terms of Lakatosian research programs in the
following.
6. Research Programs in Digital Communication
6.1 Characteristics of research programs
A key difference between the scientific structures introduced by Lakatos and by Kuhn is that Lakatos saw the
history of science as more evolutionary than revolutionary. Kuhn, on the other hand, described an overthrow of the
older paradigm in a brief and violent revolution. (Geelan, 2000)
In the historical overview in Chapter 4, we saw that there was a great degree of continuous evolution within the
two traditions, and that in the 1970s and 1980s the two communities started competing with one another for
markets and resources. Thus, instead of describing them in terms of two paradigms, the historical development is
better described as the evolution of two different research programs that are competing for resources.
According to Lakatos (1970), a research program consists of a hard core of fundamental principles that defines
the characteristics of the program. The hard core is augmented with a protective belt of assumptions, hypotheses and
principles that are regarded as less fundamental than those of the hard core. The supplements that form the
protective belt may be modified to protect the hard core from falsification.
The negative heuristic of a research program defines what the researchers are not advised to do, such as
questioning the hard core of the program. The positive heuristic specifies what the researchers are advised to do. It
“…consists of a partially articulated set of suggestions or hints on how to change, develop, the ‘refutable variants’ of
the research program, how to modify, sophisticate, the ‘refutable’ protective belt” (Lakatos, 1970, p. 135).
6.2 The research program of telecommunication
It is important to note that the development of the telecommunication research program has been driven by, and
been under the tight control of, the national PTOs. Technology that has served the business interests and business
objectives of the PTOs has been implemented, while it has been difficult for researchers with conflicting technology
to get their inventions implemented in the networks of the PTOs. Moreover, many researchers in the field have been
employed by PTOs.
Dosi (1982) stressed the importance of economic considerations in his identification of technological paradigms,
and the same argument applies to Lakatosian research programs. Thus, the business objectives of the PTOs have
strongly influenced the hard core of the telecommunication research program.
We suggest that the main hard core principle is that the network should provide centralized provisioning of
network services at the granularity of each user session. It is the network services (including telephony services and
X.25 network services) that have generated the income of the PTOs. It has been strategically important for the PTOs
to keep centralized control of the service provisioning, in order to retain their positions as telecommunication
monopolists.
The main principle in the protective belt of the research program is that the network should be able to provide
each user with a minimum quality of the communication service the user is utilizing. This Quality of Service (QoS) may
be related to a reliability guarantee or a guarantee of minimum bandwidth or delay.
This principle may be hard to refute for certain services; e.g. telephone users today are accustomed to a certain
speech quality (approximately 64 kbps bandwidth of PCM-coded speech, less than 100 ms unidirectional delay,
and less than 1% probability of being blocked when initiating a call). However, to a certain degree these
requirements depend on user expectations: Traditional telephone users normally accept noticeable and often
annoying delay on international calls, while IP-telephony users, around the year 2000, seem to have lower
expectations that match the QoS this technology offers (Black and Franz, 2000).
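For readers who want to see where such figures come from, the following illustrative Python calculation (our own example, not from the cited sources) derives the 64 kbps PCM rate and uses the classical Erlang B formula, not discussed in the text, to dimension a trunk group for a blocking probability below 1%.

```python
# 64 kbit/s PCM speech: 8000 samples per second, 8 bits per sample.
pcm_rate = 8000 * 8
print(f"PCM rate: {pcm_rate} bit/s")             # 64000 bit/s

def erlang_b(offered_erlangs: float, circuits: int) -> float:
    """Blocking probability from the Erlang B formula, via the numerically stable recurrence."""
    b = 1.0
    for n in range(1, circuits + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

# Illustrative dimensioning question: how many trunks keep call blocking below 1%
# for 100 erlangs of offered telephone traffic?
trunks = 1
while erlang_b(100.0, trunks) > 0.01:
    trunks += 1
print(f"Trunks needed for < 1% blocking at 100 erlangs: {trunks}")
```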
The protective belt also contains an assumption that additional resources must be placed in the network to
accommodate the necessary QoS. The resources support the services and add value to each service that a user
pays for. In this way, both the assumption and the principle mentioned above support the hard core’s principle of
PTO-controlled networks.
It is also assumed that resources must be allocated to each session’s communication flow along the flow’s path
through the network, in order to guarantee the QoS at the granularity of each user session. This may require some
coordination between the various subnets that the path will cross. The assumption thus supports the requirement of a
centrally controlled network.
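A minimal sketch of this assumption, with hypothetical subnet names and capacities of our own choosing, could look as follows: a session is admitted only if every subnet along its path can reserve the requested bandwidth, and any partial reservation is rolled back otherwise.

```python
# Free capacity (bit/s) in each subnet along a hypothetical path.
path_capacity = {"subnet-A": 2_000_000, "subnet-B": 500_000, "subnet-C": 2_000_000}

def admit_session(path, bandwidth):
    """Reserve `bandwidth` in every subnet on the path, or roll back and refuse the session."""
    reserved = []
    for subnet in path:
        if path_capacity[subnet] >= bandwidth:
            path_capacity[subnet] -= bandwidth
            reserved.append(subnet)
        else:
            for s in reserved:                    # QoS is all-or-nothing per session
                path_capacity[s] += bandwidth
            return False
    return True

print(admit_session(["subnet-A", "subnet-B", "subnet-C"], 64_000))     # True (admitted)
print(admit_session(["subnet-A", "subnet-B", "subnet-C"], 1_000_000))  # False (blocked at subnet-B)
```

The need for this kind of coordinated reservation across every subnet on the path is what ties per-session QoS to centrally controlled networks.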
The positive heuristic includes tacit rules on how the research and development should proceed. One of these
rules may direct the research efforts towards solving the shortcomings of connection-oriented and circuit-based
technology (e.g. virtual circuit switching), which is implemented in both X.25 and ATM. This could include the
problems of scalability and resource under-utilization.
During the history of telecommunication we sense that necessary adjustments may have occurred in the
protective belt. The conversion from circuit switching to packet based virtual circuit switching is an example of such
an adjustment.
Another modification to the protective belt may be the ongoing opening of network service interfaces. This is a
result of the fact that it is not only long-cycled and durable services that the PTOs want to offer. Because the PTOs
have limited human resources, they are not able to develop and implement all the different short-cycled,
application-based services that started appearing on the computer communication market at a tremendous rate during the 1990s.
To keep pace with this market, many PTOs have decided to open up the service interfaces to their networks, for
instance by developing Open Services Architecture (OSA) and similar constructs. In this way the PTOs can take
advantage of the human resources and the inventions of third party developers. While the PTOs formerly claimed
that the service development should be undertaken by the PTOs to shield the technology-unaware users from the
complexity of the network, this principle may no longer be a part of the protective belt.
6.3 The research program of computer communication
Different parties have promoted the computer communication research program: The U.S. Department of Defense
wanted a robust and decentralized network structure; the universities wanted to interconnect some of their applications; and vendors of
software and hardware wanted to extend the scope of communication beyond the LAN. Nevertheless, they all seem
to be in line with the hard core of the research program.
We suggest that the main principle of the hard core is that the network should accommodate connectivity
between host-based applications. The network should provide simple transmission services to the terminal-based
applications that offer the added-value services to the users. The main resources are therefore located in the
terminals. The principle is thus in line with the business model, where the sales of host-based hardware and software
generate the main income of the vendors.
Another fundamental principle of the hard core is that the network should span autonomous subnets and
heterogeneous subnet technologies. This principle was stated as “the top level goal for the DARPA Internet
Architecture” (Clark, 1988), and initially it may only have been a part of the protective belt. The Internet today spans a
large number of different autonomous and heterogeneous subnets, and the principle is so fundamental in the Internet
architecture that it qualifies as a hard core principle.
According to Carpenter (1996): “Heterogeneity is inevitable and must be supported by design.” Hence, in order
to include all possible network technologies, the only requirement on the resources of the connected subnet is that
they should support the simple IP protocol, and the corresponding IP routing and control protocols. The subnet is
required to use its best efforts to forward any received packet (i.e. “best-effort” connectivity).
Since each subnet is autonomous, its owners will naturally be reluctant to invest in more resources than required
to support flows that traverse the subnet, because these investments will not pay off. Hence, a user cannot expect
more from the network than (“best-effort”) connectivity, which is in line with the hard core. Strict Quality of Service
guarantees will not easily be supported.
An important principle of the protective belt is that the user sessions (i.e. conversations) should survive partial
failures of the network. This principle was stated as the most important second level goal of the ARPA architecture,
and led to the decision to store state information of a conversation in the terminal software (Clark, 1988). Hence, the
resources to support the state information must be located in the user terminal. This is in line with the main principle
of the hard core.
The protective belt includes another guideline: “The principle, called the end-to-end argument, suggests that
functions placed at low levels of a system may be redundant or of little value when compared with the cost of
providing them at that low level” (Saltzer, Reed and Clark, 1984). This argument emphasizes the importance of
flexibility and extensibility to accommodate support for various current and future applications with different
communication requirements, all at the lowest cost. System designers are advised to place the required functionality
at the transport level or higher. All these layers are located in the host-based terminals, because they rely on the
transport layer, which is host-based according to the arguments in the previous paragraph. The end-to-end argument
thus acts as a reinforcement of the main principle of the hard core. Its importance is emphasized by Carpenter (1996).
The de-centralized, network-symmetric and distributed approach that follows from the principles of the hard core
has some drawbacks, and the positive heuristic includes guidance on how to address these shortcomings. First of all, the
minimum “best-effort” guarantee of computer communication turns out to be one of the major obstacles for
supporting traditional interactive real-time services (such as telephony). In telecommunication, on the other hand,
such services are supported by static Quality of Service guarantees (e.g. minimum bandwidth and minimum delay
guarantees) that are implicitly provided by the service networks. One realizes that the computer networks’ inability
to offer strict Quality of Service guarantees represents one of the major shortcomings of the computer
communication research program.
Secondly, the de-centralized approach makes it difficult to implement functionality that requires widespread
coordination throughout the whole network. This includes policy, security and accounting management. Such
functionality is successfully implemented and deployed in telecommunication networks.
It is an outspoken goal of the Internet community to solve these shortcomings. Huston (2000) has expressed
these concerns clearly on behalf of the Internet Architecture Board (IAB) of the Internet Engineering Task Force (IETF), which is one of the
the leading standardization bodies in the Internet community.
7. Anticipated future development
7.1 Development within telecommunication
Despite the tremendous success of the Internet, the telecommunication approach still plays a big role in digital
communication. The amount of information transferred (in terms of bits) by computer communication has
recently grown larger than that of telephony, but the value generated by digital transfer services is still
dominated by telecommunication (Odlyzko, 2000).
Mobile communication has played an important role in telecommunication over the last decade. During the 1980s
the PTOs developed 2nd generation mobile cellular telephone systems (including GSM in Europe) that were based on
traditional principles of telecommunication, such as circuit switching. The successful growth in terms of deployment
during the 1990s is comparable to the success of the Internet during the same period.
The telecommunication community is now developing UMTS, which is a 3rd generation mobile system, through
the 3rd Generation Partnership Project (3GPP) joint effort. UMTS will accommodate packet-switched
communication, and the interest group 3GPP-IP (3GIP) has promoted the use of IP for this purpose. However,
UMTS will also run circuit-switched communication in parallel: It seems to be difficult to let go of the well-known
telecommunication principles and base the solution on a pure Internet architecture. To what extent UMTS will
apply the principles of the computer communication research program is still an open question.
7.2 Development within computer communication
IETF has formed many working groups to address the shortcomings of computer communication, when
compared to telecommunication. QoS issues were handled by the Integrated Services Working Group (IETF IntServ
WG, 2000), which focused on how to handle resources on a per-flow basis, and the Resource Reservation Setup
Protocol (IETF RSVP WG, 2000) was developed to handle resource negotiation, reservation and allocation. Later,
Differentiated Services (IETF DiffServ WG, 2000) was developed to complement the Integrated Services with an
approach that scales better in the backbone network.
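The scaling difference can be sketched as follows (an illustrative example of our own, not taken from the working-group documents): per-flow reservations grow with the number of flows, while aggregate code-point marking keeps backbone state limited to a handful of classes.

```python
# Four hypothetical flows crossing the same backbone: (src, dst, port, traffic class).
flows = [("10.0.0.1", "10.0.9.9", 5004, "voice"),
         ("10.0.0.2", "10.0.9.9", 80,   "web"),
         ("10.0.0.3", "10.0.9.9", 5006, "voice"),
         ("10.0.0.4", "10.0.9.9", 5008, "voice")]

# IntServ-style: one reservation entry per individual flow.
intserv_state = {flow[:3]: {"reserved_kbps": 64} for flow in flows if flow[3] == "voice"}

# DiffServ-style: packets are only marked with one of a few code points at the edge,
# so the backbone keeps state per class rather than per flow.
DSCP = {"voice": "EF", "web": "BE"}               # hypothetical class-to-code-point mapping
diffserv_state = {DSCP[flow[3]] for flow in flows}

print(f"Per-flow entries in the backbone (IntServ-style) : {len(intserv_state)}")   # grows with flows
print(f"Traffic classes in the backbone (DiffServ-style) : {len(diffserv_state)}")  # stays small
```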
Policy control and accounting must be implemented in networks that offer services with differentiated service
quality, because the access to high quality services must be controlled. The COPS (Boyle et al., 2000) policy effort,
as well as the contributions from the Authorization, Authentication and Accounting Working Group (IETF AAA
WG, 2000) address this shortcoming of computer communication, as compared to telecommunication. Moreover, the
MPLS (Davie and Rekhter, 2000) label switching effort enables IP networks to incorporate virtual circuit
mechanisms into their structure. This may form a basis for QoS support.
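As a rough illustration of how label switching introduces virtual-circuit-like state into an IP network (the routers, labels and interfaces below are hypothetical), each router simply swaps an incoming label for an outgoing label along a pre-established path:

```python
# Per-router label-swapping tables: incoming label -> (outgoing interface, outgoing label).
# The label acts like a virtual-circuit identifier set up along a fixed path.
label_tables = {
    "R1": {17: ("to-R2", 42)},
    "R2": {42: ("to-R3", 7)},
    "R3": {7: ("egress", None)},   # the last hop pops the label and delivers the IP packet
}

def forward(packet, router):
    out_interface, out_label = label_tables[router][packet["label"]]
    print(f"{router}: label {packet['label']} -> {out_label} via {out_interface}")
    packet["label"] = out_label
    return packet

packet = {"label": 17, "payload": "an ordinary IP datagram"}
for router in ["R1", "R2", "R3"]:
    packet = forward(packet, router)
```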
Finally, the Mobile IP work group (IETF MobileIP WG, 2000) has developed a protocol to support mobility. The
protocol may provide mobility between 3rd generation UMTS deployments. One should note that neither the hard
core of telecommunication nor the hard core of computer communication prohibits mobility, and mobility has been
incorporated in both research programs. The same does not go for QoS.
Further development is needed within the computer communication research program to meet the major QoS
concerns that the proponents of the telecommunication research program raise. Until then, QoS issues, including
policy, security and accounting management, will represent one of the biggest challenges of this research program.
7.3 Towards a merge of two scientific structures?
As in the example of the early development of Newton’s gravitational theory (Lakatos, 1970), the Lakatosian
research programs are often thought of as branching out. “Here, the positive heuristic involved the idea that one
should start with simple, idealised cases, and then, having mastered them, proceed to more complicated realistic
ones” (Chalmers, 1999, p.133). The hard core of the research program is initially simple, and the theory is branched
out into unknown, complex areas that correspond well with the hard core of the program.
However, as we have seen above, this is not the situation in digital communication. What we see is two
competing research programs that probably will merge during the coming years.
The evolution involves not only developments of the protective belt; the hard cores might also be modified in this
merging process. The various QoS efforts of the computer communication community often assume that a
network-centric architecture (e.g. related to policy and accounting) will finally be imposed on the network as a whole and that
more logic and functionality must be pushed from the ends and back into the network. Moreover, the MPLS effort
within the computer communication community indicates a willingness to incorporate virtual-circuits into the
framework. The telecommunication community, on the other hand, tries to incorporate the Internet architecture into
the UMTS networks.
In a Kuhnian perspective, the scientific evolution is driven by revolutions; a scientific crisis leads to a
revolutionary paradigm shift. However, in digital communication it is more descriptive to use the term ‘paradigm
merge’, rather than a paradigm shift, when referring to the anticipated development within the field.
8. Conclusion
We have suggested how the evolution of telecommunication and computer communication can be described in
terms of scientific structures. When describing the scientific structures of digital communication, we claim that the
research program of Lakatos is a better abstraction than the paradigm of Kuhn.
We have shown that decisions concerning network control localization, resource placement and QoS support
represent some of the major differences between the two research programs. Although the computer communication
research program has experienced an enormous success and wide acceptance during the last 15 years, it is not
expected to supplant the telecommunication research program until the Internet community has solved the problems
associated with QoS.
Convergence between the telecommunication and computer communication research programs is
expected in the coming years. In a Lakatosian perspective, the research programs are expected to merge, rather than
to branch out in different directions. In a Kuhnian perspective, one could call the anticipated development a
‘paradigm merge’ rather than a paradigmatic shift.
To the author’s knowledge, the merging of research programs or paradigms has not been thoroughly studied in
the literature. This is left for future work.
Acknowledgements
Thanks to Erik Oxaal and Kirsten Engelstad for general comments, to Deborah Oughton for comments
concerning philosophy of science and to my scientific supervisor, Prof. Paal Spilling, for comments related to digital
communication.
References
[1] Abbate, J., Changing large technical systems, Westview Press, 1994, pp. 193-210.
[2] Black, U.D., Franz, M. (ed.), Voice over IP, Prentice Hall PTR, Englewood Cliffs, NJ, 2000.
[3] Boyle, J., Cohen, R., Durham, D., Herzog, S., Rajan, R., Sastry, A., The COPS (Common Open Policy Service) Protocol, Request For Comments 2748, http://www.ietf.org/rfc/rfc2748.txt, Jan. 2000.
[4] Carpenter, B. (ed.), Architectural Principles of the Internet, Request For Comments 1958, http://www.ietf.org/rfc/rfc1958.txt, June 1996.
[5] Chalmers, A.F., What is this thing called Science?, Open University Press, Buckingham, 3rd ed., 1999.
[6] Clark, D.D., The Design Philosophy of the DARPA Internet Protocols, Proceedings of SIGCOMM ’88, Computer Communication Review 18, 4 (Aug. 1988) 106-114.
[7] Comer, D.E., Internetworking with TCP/IP. Volume 1: Principles, Protocols, and Architectures, Prentice Hall, Englewood Cliffs, NJ, 4th ed., 2000.
[8] Davie, B., Rekhter, Y., MPLS: Technology and Applications, Morgan Kaufmann Publishers, San Francisco, CA, 2000.
[9] Dosi, G., Technological paradigms and technological trajectories, Research Policy 11 (1982) 147-162.
[10] Egyedi, T.M., Examining the relevance of paradigms to base OSI standardisation, Computer Standards and Interfaces 20 (1999) 355-374.
[11] Geelan, D.R., Sketching Some Postmodern Alternatives: Beyond Paradigms and Research Programs as Referents for Science Education, http://student.curting.edu.au/~rgeeland/sced516.htm, 2000.
[12] Huston, G., Next Steps for the IP QoS Architecture, Internet Draft, IETF, http://search.ietf.org/internet-drafts/draft-iab-qos-02.txt, Aug. 2000.
[13] IETF AAA WG (Internet Engineering Task Force: Authentication, Authorization and Accounting Working Group), http://www.ietf.org/html.charters/aaa-charter.html, 2000.
[14] IETF DiffServ WG (Internet Engineering Task Force: Differentiated Services Working Group), http://www.ietf.org/html.charters/diffserv-charter.html, 2000.
[15] IETF IntServ WG (Internet Engineering Task Force: Integrated Services Working Group), http://www.ietf.org/html.charters/intserv-charter.html, 2000.
[16] IETF Mobile IP WG (Internet Engineering Task Force: Mobile IP Working Group), http://www.ietf.org/html.charters/mobileip-charter.html, 2000.
[17] IETF RSVP WG (Internet Engineering Task Force: Resource Reservation Setup Protocol Working Group), http://www.ietf.org/html.charters/rsvp-charter.html, 2000.
[18] Kuhn, T.S., The Structure of Scientific Revolutions, The University of Chicago Press, 2nd ed., 1970 (orig. 1962).
[19] Lakatos, I., Falsification and the Methodology of Scientific Research Programs, in Lakatos and Musgrave (1970), pp. 91-196.
[20] Lakatos, I., Musgrave, A. (ed.), Criticism and the Growth of Knowledge, Cambridge University Press, Cambridge, 1970.
[21] Odlyzko, A., Content is not King, http://www.research.att.com/~amo, 2000.
[22] Peterson, L.L., Davie, B.S., Computer Networks: A Systems Approach, Morgan Kaufmann Publishers, San Francisco, CA, 2nd ed., 2000.
[23] Saltzer, J.H., Reed, D.P., Clark, D.D., End-to-End Arguments in System Design, ACM Transactions on Computer Systems 2, 4 (Nov. 1984) 277-288.
[24] Solomon, R.J., New paradigms for future standards, Communication and Strategies 2 (2) (1991) 51-90.
[25] Stevens, W.R., TCP/IP Illustrated. Volume 1: The Protocols, Addison Wesley, Reading, MA, 1994.
[26] Tanenbaum, A.S., Computer Networks, Prentice Hall, Englewood Cliffs, NJ, 3rd ed., 1996.
[27] Yates, J., Control through communication: the rise of system in American management, Johns Hopkins University Press, 1989, pp. 21-64.