Beyond Third Generation Telecommunications Architectures:
The Convergence of Internet Technology and Cellular Telephony
Randy H. Katz
EECS Department
University of California, Berkeley
Berkeley, CA 94720-1776
“Just as the market share of centralized computers dropped from nearly 100% in 1977 to less than 1% in 1987, our research
… suggests that the share of analog voice in the global telecom market will drop from more than 80% in 1994 to less than 1% in 2004.
Thousands of ISPs, with WorldCom at the helm, will prevail, with ever-spreading networks of fiber, optimized for data.”
“The most common PC of the next decade will be a digital cellular phone, tied to the Net and based on a single chip with integrated
analog and digital circuitry. As powerful as a supercomputer of old, it will be as mobile as your watch, and as personal as your
wallet.” George Gilder, Futurist Economist, in Wired Magazine, January 1998, page 42.
Abstract: The two most significant trends in today’s telecommunications industry are the explosive growth
of the cellular network and the rapid rise of the Internet. The rise of mobility and of data-oriented applications
is fundamentally changing the very nature of the telecommunications network. These new developments
present significant challenges for the industry and its incumbent infrastructure equipment providers. Voice
will continue to be an important application, but its dominant position will diminish in the future network.
These future networks will be optimized for data, not for voice. In this paper, we make the case that future
telecommunications infrastructures will be heavily based on the Internet Protocol (IP) and that an
Integrated Services Packet Network (ISPN) will form the basis for multimedia services in the future
telecommunications network.
1. Introduction
The telephony network has evolved over the last 120 years into one of the most ubiquitous and complex
technological systems known to humankind. Over one billion of the five billion inhabitants of the planet
have made a telephone call, an astounding number that continues to grow as the developing world increases
its investment in telecommunications infrastructure.
Yet at the end of the 20th Century, the engineering design principles that underlie this great technical
achievement are being challenged. The fastest growing sector of the telecommunications industry is
cellular services, introducing new demands for user and handset mobility in the future network. In contrast
to wireline telephones, cellular handsets are sophisticated electronic devices, able to perform advanced
digital signal processing, to provide extensive user interface support, and ultimately to host many personal
information management functions (address book, electronic mail, personal messaging, etc.).
Following a similar explosive growth trend is the number of Internet subscribers. Today, their dominant
application is data, with emerging capabilities for telephony services within the context of the Internet.
Internet usage runs counter to many of the engineering assumptions behind the telephone network, which has been
carefully engineered to deliver outstanding performance for real-time voice. The rise of handheld devices such as
Personal Digital Assistants, which integrate ever more extensive computing and communications capabilities in
a small form factor, will further increase the importance of data applications within the network.
The rest of this paper is organized as follows. The next section compares and contrasts the
telecommunications and data communications industries. Section 3 examines the rise of data applications
and speculates on whether data will come to dominate voice in the future network. Section 4 compares
Internet technology with the technology of the telephone network and the emerging ATM technology. We
also describe the rapid rise of Internet telephony and support for real-time streams in the Internet’s packet
switching architecture. We argue that the evolution of Internet technology, in the form of Integrated Services
Packet Networks, will come to dominate the telecommunications infrastructure in the 21st Century. In
Section 5, we describe one possible vision for how this might come about, particularly in the context of
cellular networks supporting wide-area and local-area data. Section 6 gives our summary and conclusions.

(This paper is based on the keynote address given by the author at the ACM Mobicom’97 Conference,
Budapest, Hungary, September 1997.)
2. Comparison of Telecommunications and Data Communications Industries
2.1. The Telecommunications Industry: Challenges and Opportunities
Telecommunications is perhaps the largest service industry in the world. The total worldwide
telecommunications revenue was approximately $900 billion at the end of 1996. This is expected to grow by
50% to $1300 billion by the year 2000.
The vast majority of the revenue in this industry comes from local phone services provided by the
telecommunications operators. The figure below captures the global trade in telecommunications
equipment and services. The market for telecommunications equipment providers was $69 billion in 1996.
The market for international long distance services was $30 billion in international telecommunications
settlements. The remaining $17 billion came from other services rendered, such as consulting (International
Telecommunications Union, quoted in The Economist, 8 March 1997). While revenue from international
calls has been growing slowly, equipment sales worldwide are expanding, especially in the developing world.
[Figure: Global telecommunications trade ($ billions), 1990-1996, broken down into equipment, net settlements, and other.]
Cellular telephony remains the hottest and fastest growing sector of the telecommunications market. In the
Nordic countries, ten mobile telephones are being added for every wireline phone. This trend is being
experienced not only in the developed world but also in the developing world. Wireless communications
can be very cost-attractive, because it avoids the expensive deployment of local “wired” access
technologies.
In the United States, the largest worldwide market for cellular services, subscriber growth has been and
remains on an exponential curve. At the end of 1996, the number of U.S. subscribers had grown to 44
million (CTIA Web Page).
[Figure: Cellular subscriber growth in the United States (thousands of subscribers), 1985-1996.]
As an indication of what this means for infrastructure providers, the number of cellular sites in the U.S.
grew by 25% in one year, to 30,000 cell sites. The deployment of cell sites (and associated infrastructure) is
growing at the same exponential rate as the number of subscribers (CTIA Web Page).
[Figure: Number of cell sites in the United States, 1985-1995.]
It is predicted that by the year 2001, the number of worldwide mobile telephone users will have grown to
600 million. By the same time, the number of Internet users will have grown to 400 million. Both trends
remain on exponential growth curves well into the next century (Ericsson Radio Systems).
[Figure: Extrapolated growth in mobile telephone users and Internet users (millions), 1993-2001.]
The rapid rise of digital cellular systems is an important related trend. Digital technology not only enables
high-quality voice and more efficient spectrum utilization; it also provides a critical wide-area foundation
for ubiquitous data-oriented applications (Ericsson Radio Systems).
[Figure: Analog vs. digital cellular telephones (millions), 1993-2001.]
This dramatic growth in cellular demand is an international phenomenon. By the year 2000, an astounding
one in three telephones is expected to be a mobile telephone (The Economist, 4 May 1996). This widespread
access to mobile communications will very likely have a profound impact on our lifestyle in the next century.
[Figure: Wireless phone lines as a percentage of fixed-line phones in Europe, the United States, and Japan, 1990-2000.]
The International Telecommunications Union reports that 20% of telecommunications revenue comes from
mobile subscriptions, 10% from international phone calls, 60% from local calls, and 10% from all other
services. More interesting is the relative revenue growth associated with each of these. In the developed
world, local services will undergo only limited additional expansion and are unlikely to serve as a major
source of revenue growth. Since 1990, revenues from international calls have increased only about 50%.
On the other hand, revenues from mobile telephony have increased 8-fold (800%) in the same time frame
(International Telecommunications Union, quoted in The Economist, 12 September 1997)!
[Figure: Relative growth in telecommunications revenue, mobile vs. international, 1990-1996.]
It is not surprising that equipment for the mobile network infrastructure represents the greatest revenue
growth opportunity. Even the developing world sees the advantages of a more rapidly deployed cellular
infrastructure. In countries like India, it is much faster, albeit more expensive, to obtain a cellular phone
than to wait the many months for a wireline installation!
In a recent market survey of cellular subscribers, performed by Peter D. Hart Research Associates in March
1997 and quoted on the CTIA Web Page, data-oriented applications represent a considerable share of the
new services subscribers desire. When asked which services they would like beyond basic wireless telephony,
respondents answered as follows:
Call Forwarding       37%
Paging                33%
Internet/E-Mail       24%
Traffic/Weather       15%
Conference Calling    13%
News                   3%
Several of these, such as Internet/e-mail access, traffic/weather information, and news, are data-oriented
services. It is interesting to note that one in four cellular subscribers has requested access to the Internet
from a cellular phone.
To summarize, the rapid rise in cellular telephone networks, in terms of subscribers, revenue, and
opportunity to install new networks, represents a major opportunity for the traditional telecommunications
industry. The deployment of digital systems, and the increasing demand for data-oriented services, is the
challenge. The key issue is what will be the appropriate architecture of the future network.
2.2. Data Communications Industry: Opportunities and Challenges
The portable computer market is the most rapidly growing sector of the personal computer market. By the
year 2000, it is expected that this will represent a $50 billion market (Arthur D. Little industry estimates,
quoted in The Economist, 14 May 1994), interestingly enough of the same order of magnitude as global trade
in telecommunications equipment! Telecommunications equipment providers are increasingly aware of the
rising importance of data services. Nokia has already stated that it expects 20-30% of its revenue to come
from data-oriented equipment and services by the end of the century.
[Figure: Global markets for portable computers ($ billions), 1990-2000.]
An emerging market is for computing devices that have been called Personal Digital Assistants (PDAs).
These are small “palmtop” devices that integrate many personal information management functions, like a
personal calendar, notes, and an address book. The U.S. Robotics PalmPilot is the first such device to
become a commercial success, selling over one million units since its initial product launch in 1996. The
PalmPilot Professional executes the TCP/IP network stack, thus making such devices full-fledged nodes on
the Internet once a dial-up connection is established. The popularity of this device indicates that a market
exists for integrated computing and communications capabilities that can fit in a pocket. This could be the
shape of things to come for future mobile telephones or computer-telephone combinations.
In terms of yearly revenue, the telecommunications industry is considerably larger than the data
communications industry. Part of the imbalance is due to the relative size of service revenue compared with
equipment revenue (the data communications industry consists of equipment vendors; the
telecommunications industry is a mixture of both operators and equipment providers). Nevertheless, as
indicated by predicted future growth, the latter is growing much more rapidly.
The telecommunications industry is expected to grow from about $900 billion in 1996 to $1300 billion by
the year 2000, a growth of nearly 50%. In the same time period, the data communications industry is expected
to grow from $29 billion in revenue to $72 billion, roughly two and a half times.
The stock market’s capitalization of corporations represents the investors’ view of future corporate growth.
Under this admittedly imprecise metric, investors believe that in the near future, Cisco Systems, a maker of
Internet switching equipment, will have approximately the same revenues as Lucent Technologies, a
broad-based vendor of telecommunications infrastructure equipment. The following table captures the
perceived market value and annual sales, as of August 1997.
Data Communications        Market Cap ($ Billions)    Annual Sales ($ Billions)
Cisco Systems              $46                        $6
3Com/US Robotics           $16.1                      $5.5
Ascend/Cascade             $7.7                       $1.3
Newbridge                  $7.4                       $0.9

Telecommunications         Market Cap ($ Billions)    Annual Sales ($ Billions)
AT&T                       $57                        $52
Lucent                     $48                        $24
GTE                        $42.3                      $21.7
Bell Atlantic              $33.3                      $13.3
To summarize, the data communications industry is small compared with the size of the telecommunications
industry, but is growing quickly. By the end of the century, in terms of equipment sales revenue, the two
industries will be of comparable size. The introduction of small handheld computing devices able to
execute the Internet’s protocols presents new opportunities for mobile telephone-computer integration with
the Internet. The challenge is whether the existing telephone networks, wireline and cellular, will be able to
provide the required services for this emerging class of users and applications.
3. Is the Future of the Network Voice or Data?
Beyond cellular telephony, the second trend affecting future telecommunications infrastructures is the
dramatic rise of the Internet and the World Wide Web. The precise number of Internet users is not known,
but it is generally believed to be somewhat fewer than the number of cellular subscribers (approximately
200 million). For example, Reuters reports that there are 82 million PCs connected to the Internet
today, and they expect this number to grow to 286 million by the Year 2001. CommerceNet’s Internet
Demographic Survey reports 55 million Internet users in the United States and Canada alone. It is
reasonable to double this number to represent a worldwide Internet population of about 100 million users.
Thus, the number of Internet users is perhaps 10% of the 1 billion strong worldwide telephone population.
The accelerating use of the Internet is changing the nature of interpersonal interaction as well as people’s
lifestyles. An Internet FIND/SVP Survey performed during November-December 1995 found that 25% of
Internet users make fewer long distance phone calls. In the U.S., many parents obtain electronic mail
accounts as soon as their first child goes to college. 32% of Internet users report watching less television.
From the viewpoint of sizing the network, Internet access changes the fundamental assumptions about the
underlying traffic patterns. It has been reported that more than 50% of the telecommunications traffic in the
San Francisco Bay Area is already data traffic, not voice traffic. This is perhaps not too surprising, given
the high density of computer and Internet-usage in the region. The much longer connection times
associated with data calls have overloaded the local telephone switching infrastructure, causing excessive
call blocking during peak usage periods. The network was provisioned for short duration voice
conversations. It does not represent the best design for computer sessions. For data-oriented applications
like Internet access, the appropriate network design metric is data throughput, not voice quality.
This development is supported by data collected by the U.S. Federal Communications Commission, as
quoted in The Economist Magazine of 13 September 1997. The duration of a call to an Internet Service
Provider (ISP) is typically an order of magnitude longer than the average of all calls. (Bell Atlantic, U.S.
West, Pacific Bell, and SBC are regional telephone operating companies covering the Northeast, Rocky
Mountain, West, and Southwest parts of the United States respectively).
                 Average Peak-Hour Local Circuit Use    Average Residential Call Length
                 (Network Usage, Minutes)               (Network Usage, Minutes)
                 All Circuits   Circuits to ISPs        All Circuits   Circuits to ISPs
Bell Atlantic    5.0            43-47                   4-5            17.7
U.S. West        5.0            45.0                    2-4            14
Pacific Bell     6.7            31.7                    3.8            20.8
SBC              6.7            52.3                    na             na
Clearly, data support within the network is becoming increasingly important. The question is whether this
trend is likely to accelerate. Technology in general, and communications technology in particular, has
always been characterized by a process of “creative destruction.” That is, successful technologies are
replaced by newer, even better technologies. For example, the telegraph was dealt a death blow by the
invention of the telephone in 1876, though it took many decades for the telegraph system to completely
disappear. On hearing about the invention of the telephone, Sir William Preece, chief engineer of the British
Post Office, was quoted as saying “The Americans may have need of the telephone, but we do not. We have
plenty of messenger boys.” Nobody now working in the telecommunications industry would dismiss the
rise of Internet-based technology so readily.
The way in which post offices and postal services are changing in the information age presents an
interesting case study in creative destruction. Telephones have largely replaced the personal letter, except
for special occasions like Mother’s Day (though this is traditionally the busiest day of the year for the
telephone network!). Postal mail is mainly used today for business-oriented correspondence, though this is
rapidly being replaced by alternatives to plain old mail service. Business documents are now delivered,
overnight, by Federal Express. Bills are increasingly paid by direct debit and electronic commerce systems.
Advertising is now being replaced by user pull for information from the World Wide Web. UPS delivers
the physical merchandise we purchase over the network. The role of the post office is becoming more
marginalized.
Will the same thing happen to the voice-based telephone network? Electronic mail certainly reduces the
number of telephone conversations. The Internet and the World Wide Web represent new paradigms for
information access. In the past, if you wanted to find out if an airplane flight was on time for arrival at your
local airport, you would have to first call the airline. Then you would wade through a voice menu, select
the appropriate option for flight information, wait in another queue for an available operator, and finally
speak to a human. All this operator did was to type the flight number into a computer to access the
appropriate flight information from the airline’s operational database. The World Wide Web allows the end
user to circumvent this mediated access. He or she can access the database directly from a web browser,
dramatically reducing the time to acquire the needed information (even with a heavily congested Internet!).
UPS and Federal Express web-based package tracking offer yet another example: the phone call to a human
intermediary, who simply relays information from a database, is eliminated entirely. The
successful electronic commerce sites Amazon.com and CDNow.com offer further indications of a rise in
the feasibility of web-based commerce. It has been claimed that Internet commerce reached the $1 billion
mark this Christmas season.
The consulting firm McKinsey performed a study of the efficiencies of interaction using the telephone,
paper mail, newspaper, electronic mail, electronic data interchange (EDI), the World Wide Web (WWW),
and the WWW enhanced with agent technology. The results of their investigation indicated that the time to
perform common information-intensive tasks, such as searching, coordinating, or monitoring, was greatly
reduced when using the new technologies (quoted in The Economist, 13 September 1997).
Search: Finding a High-Rate Certificate of Deposit
  Via Telephone        25.0 minutes
  WWW                  10.0 minutes   (-60% of previous)
  WWW with Agent        1.0 minute    (-90% of previous)

Coordination: Reordering an Inventory Item
  Mail                  3.7 minutes
  E-Mail                1.6 minutes   (-57% of previous)
  EDI                   0.3 minutes   (-81% of previous)

Monitoring: Updating an Equity Portfolio
  Newspaper             5.1 minutes
  WWW                   1.8 minutes   (-65% of previous)
  WWW with Agent        0.5 minutes   (-72% of previous)
In summary, data applications are becoming increasingly important, as the convergence of computing and
communications technology makes it possible for the “person in the street” to access the information they
need directly. This has been enabled by the World Wide Web, which provides the common protocols and
the easy-to-use interfaces.
4. A Comparison of Internet and Telephone Technology
4.1. Definition of the Internet
On 24 October 1995, the Federal Networking Council, a group of representatives from the U.S. government
agencies that support the science and technology use of the Internet by U.S. researchers (e.g., the National
Science Foundation, the Defense Advanced Research Projects Agency, the Department of Energy, the
National Aeronautics and Space Administration, the National Institutes of Health, etc.), defined the Internet as
follows:
“Internet” refers to the global information system that: (i) is logically linked together by a globally unique
address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support
communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) Suite or its subsequent
extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either
publicly or privately, high level services layered on the communications and related infrastructure described
herein.
They define the Internet not so much as a physical network, but rather as any subnetwork that executes the
Internet protocol suite and related services. One of the great strengths of the Internet is that it spans highly
diverse physical link technologies: twisted pair, coax, optical fiber, microwave, radio frequency, satellite,
and so on.
In the rest of this section, we will review the three primary competing technologies for future
telecommunications networks: the Internet, the existing telephony network, and the proposed grand
unification of packet and circuit switching in the form of ATM (Asynchronous Transfer Mode) technology.
This analysis is heavily based on the presentation in [Keshav97].
4.2. Strengths and Weaknesses of Internet Technology
Strengths
A key underlying assumption of the Internet is that end nodes are intelligent and have the ability to execute
the TCP/IP protocol stack. Until recently, this implied a fairly complex (and expensive) end device, such as
a laptop computer. Today’s personal digital assistants, costing a few hundred dollars, are now sufficiently
powerful to execute these protocols. While existing telephone handsets are quite a bit dumber (and less
expensive) than this, it should be noted that cellular telephones do possess embedded microprocessors and
have the capability to run more sophisticated software than is currently the case for the telephone network.
Several of the key design principles that underlie Internet technology come from its origin as a proposed
architecture for a survivable command and control system in the event of a nuclear war. The Internet
achieves its robust communications through packet switching and store-and-forward routing. It is not
necessary to create a circuit between end points before communications can commence. Information is sent
in small units, packets, which may be routed differently from each other, and which may arrive at their
destination out of order. This is sometimes called a datagram service (TCP, a transport layer protocol,
ensures that data arrives without errors and in the proper sequence). The routing infrastructure gracefully
adapts to the addition or loss of nodes. But the network does assume that all routers will cooperate with
each other to ensure that the packets eventually reach their destination.
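A toy sketch of this distinction follows (illustrative Python, not real protocol code): the “network” delivers
independently routed packets in arbitrary order, and a TCP-like transport restores the original sequence
before handing the data to the application.

    import random

    # A "datagram" network: each packet is forwarded independently, so packets
    # may arrive in arbitrary order (and, in a real network, may be lost).
    packets = [(seq, f"segment {seq}".encode()) for seq in range(5)]
    random.shuffle(packets)                  # models independent routing and arrival

    # What a raw datagram receiver sees:
    print([seq for seq, _ in packets])       # e.g. [3, 0, 4, 1, 2]

    # What a TCP-like transport does before handing data to the application:
    # buffer out-of-order arrivals and release them in sequence (retransmission
    # of genuinely lost segments is omitted from this sketch).
    reassembled = b"".join(data for _, data in sorted(packets))
    print(reassembled)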
The first design principle is that there is no state in the network; in particular, there is no connection state in
the switches. This has the positive effect of making the network highly robust. A switch can fail, and since
it contained no critical state, the network can adapt to the loss by rerouting the packet stream around the
lost switch.
A variation on this is called the end-to-end principle. It is better to control the connection between two
points in the network from the ends rather than build in support on a hop-by-hop basis. The standard
example of the end-to-end principle is error control, which can be done on a link-by-link basis, but still
must be done on an end-to-end basis to ensure proper functioning of the connection.
A second principle is the Internet’s highly decentralized control. Routing information is distributed among
the nodes, with no centralized node controlling critical functions such as routing. This helps enhance the
network’s resilience to failure.
One of the great successes of the Internet is its ability to operate over an extremely heterogeneous collection
of access technologies, despite large variations in bandwidth, latency, and error behavior. The key
advantage this provides is that the network needs to make few assumptions about the underlying link
technologies.
Weaknesses
The Internet also has some serious weaknesses. First, it provides no differential service. All packets are
treated the same. Should the network become congested, arbitrary packets will be lost. There is no easy
way to distinguish between important traffic and less important traffic, or real-time traffic that must get
through versus best effort traffic that can try again later. Note that the existing protocols do have the ability
to support priority packets, but these require cooperation to ensure that the priority bit is set only for the
appropriate traffic flows.
Second, there are no control mechanisms for managing bottleneck links. This is related to the first
weakness, in that the Internet has no obvious strategy for scheduling packets across the bottleneck. Recent
work by Floyd and Jacobson has developed a scheme called class-based queuing, which does provide a
mechanism for dealing with this problem across bottlenecks like the “long, thin pipe” of the Internet
between North America and Europe.
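The sketch below conveys the flavor of class-based link sharing rather than the Floyd-Jacobson algorithm
itself; the class names and shares are invented for illustration. Each traffic class is assigned a share of the
bottleneck link, and the scheduler drains the per-class queues roughly in proportion to those shares.

    from collections import deque

    # Per-class queues at the bottleneck link and their configured shares
    # (class names and shares are illustrative only).
    queues = {"real-time": deque(), "interactive": deque(), "bulk": deque()}
    shares = {"real-time": 0.5, "interactive": 0.3, "bulk": 0.2}
    credit = {name: 0.0 for name in queues}

    def enqueue(cls, packet_bytes):
        queues[cls].append(packet_bytes)

    def send_round(link_bytes_per_round):
        """Serve each class roughly in proportion to its share of the link."""
        sent = []
        for cls, q in queues.items():
            credit[cls] += shares[cls] * link_bytes_per_round
            while q and q[0] <= credit[cls]:
                size = q.popleft()
                credit[cls] -= size
                sent.append((cls, size))
            if not q:
                credit[cls] = 0.0            # idle classes do not hoard credit
        return sent

    enqueue("bulk", 1500); enqueue("real-time", 200); enqueue("interactive", 600)
    print(send_round(2000))                  # bulk waits; the other classes are served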
The third weakness lies in one of the Internet’s strengths: store-and-forward routing. The queuing nature of
store-and-forward networks introduces variable delay in end-to-end performance, making it difficult to
guarantee or even predict performance. While this is not a problem for best effort traffic, it does represent
challenges for supporting real-time connections.
A fourth weakness arises from another one of the Internet’s strengths: decentralized control. In this case, it
is very difficult to introduce new protocols or functions into the network, since it is difficult to upgrade all
end nodes and switches. It will be interesting to see how rapidly IPv6, the next generation of the Internet
Protocol, is disseminated through the Internet. A partial solution is to retain backward compatibility with
existing protocols, though this weighs down the new approaches with the old. A way to circumvent this
limitation is to provide new services at the applications level, not inside the network. We will discuss the
client-proxy-server architecture in Section 4.5.
The last in our list of weaknesses comes from the Internet’s assumption of a cooperative routing
infrastructure. Without a truly trusted infrastructure, the Internet as it now exists suffers from well known
security problems. Several solutions are on the horizon. One is end-to-end encryption, which protects the
information content of packets from interception. A second is policy-based routing, which limits packet
flows to particularly trusted subnetworks. For example, using policy-based routing, it is possible to keep a
packetized audio stream within the subnetworks of well-known ISPs and network providers, as opposed to
arbitrary subnetworks.
4.3. Strengths and Weaknesses of Telephone Technology
Strengths
A fundamental assumption of the telephone network is that the end nodes require little or no intelligence,
and need adhere to nothing more than a well-specified electrical signaling protocol. This makes it possible
for the network to support a diverse collection of low-cost end devices, including telephone handsets,
cordless telephones, headsets, and fax machines. With the deregulation of the phone network in the United
States, it is possible to purchase inexpensive telephones for under $20.
The telephone network has been highly optimized to provide excellent performance for voice. It performs
this job extremely well, even in the face of large latencies associated with international telephony. It
provides end-to-end performance guarantees through a well-defined signaling layer (Signaling System 7:
SS7) to the underlying switching function.
Finally, the network is a true utility. It has demonstrated its ability to operate successfully through power
outages (the standard end nodes, wireline handsets, are powered from the network; devices like TTYs and
fax machines do require their own power) and in the face of natural disasters. This is achieved, in part,
through the sophisticated, robust, and hierarchically arranged switches controlled and managed by the
service providers.
Weaknesses
The phone network achieves much of its advantageous voice performance through a significant
over-allocation of bandwidth resources. For example, the modest 3.4 kHz audio voice band signal is
converted to a 64 kbps pulse code modulation (PCM) digital voice encoding. Internet audio applications
and cellular telephony have demonstrated more sophisticated encoding formats that operate at rates as low
as 8 kbps, with even lower rates becoming available. Unfortunately, this representation for digitized voice
permeates the entire telephone network. Even data bandwidth is provided in 64 kbps increments!
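The 64 kbps figure follows directly from sampling the voice band at 8 kHz and quantizing each sample to 8
bits; the short calculation below (in Python, for concreteness) makes the over-allocation explicit relative to
an 8 kbps coder.

    # Standard telephone PCM: sample the ~3.4 kHz voice band at 8 kHz and
    # quantize each sample to 8 bits.
    samples_per_second = 8_000
    bits_per_sample = 8
    pcm_rate = samples_per_second * bits_per_sample      # 64,000 bits/s
    modern_codec_rate = 8_000                             # e.g. an 8 kbps cellular coder
    print(pcm_rate, pcm_rate / modern_codec_rate)         # 64000, an 8x over-allocation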
The vagaries of the voice data type have influenced the entire network design. For example, some bits in
the audio encoding are more important than others, and thus are more carefully protected. All bits in data
are equally important, yielding a completely different strategy for error protection. Another example comes
from interleaving schemes within the voice signal. Acceptable latencies in human speech (approximately
250 ms) have led to channel interleave schemes that create serious delay problems for data transmission,
especially at the transport layer. If the in-order delivery of data is delayed too long, it can force gratuitous
retransmissions from the sender. Or if losses do occur, it can take a significant amount of time for the
sender to recover and to retransmit the missing data.
A second weakness is that the telephone network’s switching infrastructure has been determined by the
statistics of voice call traffic. As pointed out above, circuit switched data “calls” typically last an order of
magnitude longer than voice calls. As the frequency of long-duration data calls increases, the network
operators have been discovering the inadequacy of their existing network design.
A third weakness is the difficulty of introducing new services into the so-called “Intelligent Network,”
which is the advanced service architecture of the existing phone network. The problem arises from complex
service interactions.
Some strengths also lead to weaknesses when viewed from a different perspective. The phone network’s
approach to robustness, through highly engineered, fault tolerant switches, is rather expensive. This is in
contrast to the Internet’s approach of using inexpensive switches combined with adaptive routing to ensure
robust yet low-cost operation.
4.4. ATM: The Grand Convergence?
Asynchronous transfer mode (ATM) technology has been trumpeted as a Grand Unification, combining the
best attributes of packet-switching technology with the best attributes of circuit-switched technology. ATM
divides communications streams into small-sized “cells” that are then time multiplexed over
communications channels. It is possible to achieve very high-speed transmission using small cells
combined with circuit establishment before the transmission commences. Therein lies ATM’s ability to
support integrated services: the network can carry real-time traffic like voice, through its fine-grained
multiplexing capability, as well as more time-insensitive data transmissions.
The controversy surrounding ATM is whether it will become a dominant end-to-end technology, or one
high speed point-to-point technology among others. In the LAN environment, ATM has not as yet been
successful, and appears unlikely to be in the foreseeable future.
Strengths
One of ATM’s key strengths is its virtual circuit concept, with call set-up in advance of data transmission.
This is critical for ATM’s ability to manage scarce resources and achieve its evolving model of quality of
service (QoS) guarantees. Call set-up makes it possible to capture expected traffic characteristics of the
connection, and to use this information as the basis of admission control. If insufficient resources are
available to support the connection, it is blocked from access to the network.
A second strength is its use of fixed, small-sized “cells” to enable fast switching. Such an organization
makes it possible to build switches that can quickly examine incoming streams and dispatch them to the
appropriate output port. Given the increasing processing power of today’s personal computers, it is not
obvious that this is a sustainable advantage in the long term.
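As a concrete illustration of the fixed-cell idea, the following sketch segments a variable-length packet into
standard 53-byte ATM cells (5 bytes of header, 48 bytes of payload). The padding and header handling are
simplified relative to the real ATM adaptation layers.

    CELL_PAYLOAD = 48    # bytes of payload per ATM cell
    CELL_HEADER = 5      # bytes of header (VPI/VCI, etc.) per cell

    def segment(packet: bytes, vci: int):
        """Chop a packet into fixed-size cells, padding the last one.
        Real adaptation layers (e.g. AAL5) add a trailer and CRC, omitted here."""
        cells = []
        for i in range(0, len(packet), CELL_PAYLOAD):
            payload = packet[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
            header = vci.to_bytes(CELL_HEADER, "big")     # stand-in for the real header
            cells.append(header + payload)
        return cells

    cells = segment(b"x" * 1500, vci=42)
    print(len(cells), len(cells[0]))          # 32 cells, 53 bytes each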
A third strength is the incorporation of sophisticated statistical multiplexing mechanisms to support a
variety of traffic models. This is critical if high utilization is to be achieved in the network.
Weaknesses
The fundamental weakness of ATM is its strong connection-orientation, inherited from its telephony
ancestors. Every cell in a transmission stream must follow the same path, in strict sequence, and this path
must be established in advance of the first cell transmission. Thus, even short communications sequences
require an end-to-end set-up, adding to communications latency. If a switch fails, the connection must be
torn down and re-established, implying that either switches must be made very robust or the network will
not perform well in the event of switch failures. Furthermore, providing adequate support for end-node
mobility in the connection-oriented model is difficult, and remains an area of intensive research
investigation.
ATM has achieved success in the voice network, but so far its success in data networks has been limited to
its use as a high speed, point-to-point technology in the network backbone. It has not been successful for
data traffic in the local area, and is likely to be eclipsed by switched gigabit Ethernet in those applications.
Since ATM is unlikely to be a universal end-to-end technology, its advantages in terms of end-to-end
performance guarantees are lost.
4.5. Next Generation Internet
An alternative effort, based on Internet technology, to build an “integrated services” network (that is, a
network equally good at carrying delay-sensitive transmissions as delay-insensitive ones) has been called the
“Integrated Services Packet Network” (ISPN).
The next generation Internet has several attractive features. It will have ubiquitous support for
multipoint-to-multipoint communications based on multicast protocols. Recall that multiparty (conference)
calling was among the additional services requested by cellular telephone subscribers in the survey cited
earlier. The next generation of the Internet protocols, IPv6, has built-in support for mobility and mobile
route optimization.
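A toy sketch of the binding idea behind such mobility support is given below; the addresses are hypothetical
and the code is not an implementation of Mobile IPv6. A mobile node keeps a stable home address, and a
correspondent that learns the node's current care-of address can send to it directly, which is the essence of
route optimization.

    # Binding cache: stable home address -> current care-of address.
    # All addresses below are made up for illustration.
    binding_cache = {}

    def register_binding(home_address, care_of_address):
        """The mobile node reports its current point of attachment."""
        binding_cache[home_address] = care_of_address

    def next_hop(destination):
        """A correspondent with a cached binding sends directly to the care-of
        address (route optimization); otherwise traffic would be relayed via
        the mobile node's home network."""
        return binding_cache.get(destination, destination)

    register_binding("2001:db8:home::10", "2001:db8:visited::77")
    print(next_hop("2001:db8:home::10"))      # 2001:db8:visited::77
    print(next_hop("2001:db8:other::1"))      # unchanged: no binding known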
To achieve good real-time performance, the Internet engineering community has developed a signaling
protocol called RSVP (the Resource ReSerVation Protocol). This enables the creation of a weaker notion of connections
and performance guarantees than the more rigid approach found in ATM networks. Performance is a
promise rather than a guarantee, so it is still necessary to construct applications to adapt to changes in
network performance. RSVP is grafted onto the Internet multicast protocols, which already require
participants to explicitly join sessions. Since receivers initiate the signaling, this has nice scaling properties.
Another attractive aspect of the ISPN approach is its reliance on soft state in the network. Performance
promises and applications expectations are continuously exchanged and refreshed. Soft state makes the
network very resilient to failure. The protocols have the ability to work around link and switch failures.
ISPN requires that a sophisticated network stack execute in the end node. Add to this the requirement to
handle real-time data streams, primarily audio but also video. Fortunately, Moore’s Law tells us that this
year’s microprocessor will be twice as fast in 18 months (or will offer the same performance at half the price).
Existing microprocessors have become powerful enough to perform real-time encode and decode of audio,
and are achieving ever higher throughput rates for video.
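The arithmetic behind this claim is simple exponential doubling; the short calculation below assumes a
doubling period of 18 months.

    # Performance multiple after t years, assuming a doubling every 18 months.
    def growth_factor(years, doubling_period_years=1.5):
        return 2 ** (years / doubling_period_years)

    for years in (1.5, 3, 6):
        print(years, "years:", growth_factor(years), "x")   # 2x, 4x, 16x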
Recall that in the traditional telephony infrastructure, hardware operates at 64 kbps for PCM encoding. The
Internet’s Multicast Backbone (MBone) audio software supports encoding and decoding at many rates: 36
kbps Adaptive Differential Pulse Code Modulation (ADPCM), 17 kbps GSM (the full-rate coder used in the GSM
Cellular system), and 9 kbps Linear Predictive Coding (LPC). It is possible to achieve adequate video at
rates from 28.8 kbps to 128 kbps using scalable codecs and layered video techniques.
A key element of the ISPN’s approach to providing reasonable performance is the Real-time Transport
Protocol (RTP), built on top of the underlying Internet protocols. RTP supports application-level framing. This places
the responsibility onto applications to adapt to the performance capabilities of the network. The protocol
includes a control component (RTCP) that reports back to the sender the bandwidth actually received at the receiver. Thus, if
the network cannot support the sender’s transmission rate, the sender reacts by sending at a lower rate. The
protocol has the capability to gently probe the network at regular intervals in order to increase the sending
rate when the network can support it.
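A hedged sketch of this feedback loop follows; it is not the actual RTP/RTCP state machine, and the
thresholds and step sizes are assumptions. The sender backs off when the receiver reports loss or a shortfall
in received bandwidth, and otherwise probes gently upward.

    def adapt_rate(current_kbps, received_kbps, loss_fraction,
                   min_kbps=9, max_kbps=128, probe_step=4):
        """Adjust the sending rate from a receiver report. The thresholds and
        step sizes here are illustrative, not taken from the RTP specification."""
        if loss_fraction > 0.05 or received_kbps < 0.9 * current_kbps:
            # The network cannot sustain the current rate: back off toward what got through.
            return max(min_kbps, min(received_kbps, current_kbps / 2))
        # Otherwise gently probe for spare capacity at each report interval.
        return min(max_kbps, current_kbps + probe_step)

    rate = 64
    for received, loss in [(60, 0.10), (30, 0.02), (31, 0.01), (35, 0.0)]:
        rate = adapt_rate(rate, received, loss)
        print(rate)                            # 32, 36, 18, 22: backing off, then re-probing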
This approach stands in stark contrast to the ATM model. The latter has a more static view of performance
in terms of guarantees. Guarantees simplify the applications since they need not be written to be adaptive.
But it places the onus on the network to police the behavior of the traffic so that the guarantees can be
achieved. This puts more state into the network, which now requires set-up before use and which makes the
network sensitive to failures.
A second critical advantage of the ISPN architecture is the ease with which new services can be introduced
into the network, using the so-called “proxy architecture.” Proxies are software intermediaries that provide
useful services on behalf of clients while shielding servers from client heterogeneity. For example, consider
a multipoint videoconference in which all but one of the participants are connected by high-speed links.
Rather than defaulting to the lowest common denominator resolution and frame rate dictated by the poorly
connected node, a proxy can customize the stream for the available bandwidth of the bottleneck link.
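A simplified sketch of the proxy's core decision follows (the participant names, capacities, and the
transcode stand-in are invented for illustration): streams destined for well-connected participants pass
through untouched, while the stream for the bottleneck participant is re-encoded down to what its link can carry.

    # Downstream capacity per participant, in kbps (illustrative numbers).
    participants = {"alice": 10_000, "bob": 10_000, "carol": 10_000, "dan": 128}

    def transcode(stream, target_kbps):
        # Stand-in for a real codec: tag the stream with its reduced rate.
        return f"{stream}@{target_kbps}kbps"

    def forward(stream, stream_kbps):
        """Customize one video stream per receiver instead of forcing every
        sender down to the lowest common denominator."""
        out = {}
        for name, capacity_kbps in participants.items():
            if capacity_kbps >= stream_kbps:
                out[name] = stream                               # pass through unchanged
            else:
                out[name] = transcode(stream, capacity_kbps)     # shrink for the bottleneck
        return out

    print(forward("camera-1", 1_500))   # dan alone receives a 128 kbps version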
It should be noted that there has recently been some controversy within the Internet Engineering
community about whether there is a need for RSVP-based reservations and control signaling at all. One
way to look at the Internet’s solution for improving performance is simply to add more bandwidth to the
network, either by using faster link technology or by increasing the number of switches in the network. (I
have already argued that the telephone network wastes bandwidth in its 64 kbps voice encoding, though it carefully
manages the total available bandwidth through call admission procedures). This is not such a far-fetched
idea. With the arrival of widespread fiber optic infrastructure, the amount of worldwide bandwidth has
increased enormously. This is one of the reasons that the long distance phone service has rapidly become a
commodity business (and this is why AT&T has such a low price-to-earnings ratio on the stock exchange!).
For example, the amount of transpacific and transatlantic bandwidth is expected to increase by a factor of
five between 1996 and 2000 as new fiber optic cables come on line. Nevertheless, local access bandwidth is
quite limited, and we expect this to continue for some time to come despite the rollout of new technologies
like ADSL.
4.6. Internet Telephony
A recent development has been the rise in the use of the Internet as an alternative infrastructure for carrying
voice and other telephone traffic, such as fax. Bypassing the traditional telephone infrastructure makes it
possible to circumvent existing telephone tariff structures, thus achieving much lower calling costs.
The so-called “gateway architecture” even makes it possible to support existing handsets and local
telephone access. A user calls the local number of a gateway server, which translates the analog voice into digital
packets that are sent out over the Internet to another gateway in the country being called. That gateway dials the
local number of the called party, converting the packetized voice back into analog signals. The process is reversed
in the opposite direction.
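The sketch below captures this gateway pipeline in schematic Python; the 20 ms framing, 8 kbps coder, and
helper functions are assumptions rather than a description of any particular product. Analog voice is
digitized and compressed at the originating gateway, carried as sequence-numbered packets across the
Internet, and reconstructed at the terminating gateway.

    FRAME_MS = 20     # speech carried in 20 ms frames (a common, but assumed, choice)

    def digitize_and_compress(samples, rate_kbps, frame_ms):
        """Placeholder for a real voice coder: split the byte stream into frames
        sized for the target bit rate."""
        bytes_per_frame = int(rate_kbps * 1000 / 8 * frame_ms / 1000)
        return [samples[i:i + bytes_per_frame] for i in range(0, len(samples), bytes_per_frame)]

    def originating_gateway(analog_samples, codec_rate_kbps=8):
        """Digitize and compress the caller's voice, then packetize it."""
        frames = digitize_and_compress(analog_samples, codec_rate_kbps, FRAME_MS)
        return [{"seq": i, "payload": frame} for i, frame in enumerate(frames)]

    def terminating_gateway(packets):
        """Reorder by sequence number, decode, and play the result out as analog
        signals toward the called party's local line."""
        ordered = sorted(packets, key=lambda p: p["seq"])
        return b"".join(p["payload"] for p in ordered)

    voice = bytes(1000)                           # stand-in for a stretch of speech
    packets = originating_gateway(voice)
    print(len(packets), terminating_gateway(packets) == voice)   # 50 packets, True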
Note that by using a data communications approach, as opposed to a voice communications approach, it is possible to
introduce a variety of new services in this context. For example, if the called party is not available, the
digitized voice message can be archived for later replay (voice mail). Or the called party may have
specified a user profile that requests such messages be converted to text and delivered via electronic mail.
Real-time streams can be converted to best effort streams. For example, a fax transmission rarely has a
person waiting on the receiving end. So it is possible to send fax data not as a circuit switched telephone
call, but as a leisurely delivered packet stream. The gateway at the receiving end converts the fax data into
an analog stream for delivery to the receiving fax machine.
The pricing for such services can be very attractive. A one-minute call from San Francisco to Germany
using AT&T and Deutsche Telekom costs approximately $1.20 per minute. The same call made through a
gateway provider has been advertised as costing as little as $0.28 per minute.
Why is this service so inexpensive? First, the Internet’s infrastructure is based on more commodity
technologies than the telephone network, and is thus much less expensive. A second reason is that existing
long distance tariffs actually far exceed the costs of providing the service. This is in part due to the
government-backed monopolies that provide telecommunications services in most places of the world. This
is likely to change in the near future, as the World Trade Organization oversees worldwide
telecommunications deregulation in 1998. As the world moves into a new age of competition and
deregulation, it will be interesting to see how the comparative pricing of Internet and telecommunications
networks works out.
An often-stated reservation about Internet telephony is that the audio quality is not good enough. This
comes, in part, from a tradition in telephony circles that always sought to achieve the highest possible audio
quality, almost independent of cost. However, there are trends at work that are likely to improve the quality
of Internet audio. This will be achieved by circumventing the problems associated with high latency and
dropped packets.
The first trend will be the deployment of independent long-haul IP-networks by the telecommunications
operators. Operators will interconnect with each other at well-determined access points. Such networks will
be carefully sized to meet the needed delay and bandwidth demands.
The second trend is Moore’s Law, which states that computer power at a given cost doubles every 18
months to two years. Faster hardware will help to reduce latencies in the gateway servers. Clustered
processing architectures like networks of workstations (NOWs) will also help these servers scale.
The final trend is the emergence of new standards that will help improve the quality of Internet audio.
These include RSVP, the H.323 protocol suite, new encoding strategies that allow lost packets to be
reconstructed (for example, MPEG audio), and software implementations of voice coding that will operate
at 8 kbps. The Voice over Internet Protocol (VoIP) Forum has been tackling the issues of making these
emerging formats and protocols interoperable.
Finally, many leading telecommunications companies are already using the Internet to
provide low-cost international services. Internet FAX services have been around for several years. And
many local operators in the United States view the Internet as an inexpensive way for them to enter the long
distance service market without a major capital investment.
[Figure: Projected U.S. Internet telephony market ($ billions), 1998-2004, divided into new revenue spent and operator savings.]
A leading market research firm, Forrester Research, believes that the Internet telephony market in the U.S.
will grow to $2 billion in new revenues by the year 2004 (The Economist, 13 September 1997). In addition,
Internet-based technologies will yield savings to service operators of an additional $1 billion per year. This
$3 billion represents about 4% of total U.S. telephony revenue. These are conservative numbers; the actual
revenue may be much higher.
5. Implications Beyond the Third Generation
5.1. Introduction
The third generation of cellular systems goes by many names, such as Future Public Land Mobile
Telecommunications System (FPLMTS), Universal Mobile Telecommunications System (UMTS) and
International Mobile Telecommunications-2000 (IMT-2000). In general, the third generation is
understood to provide universal multimedia information access with mobility spanning residential,
business, public/pedestrian, mobile/vehicular, national, and global environments. Such systems are expected to
support data rates ranging from 512 kbps in the wide area up to 155 Mbps.
This follows on the heels of the preceding two generations. The first cellular systems were based on analog
transmission techniques, such as AMPS in North America. The second generation is digital cellular, such
as IS-54 (TDMA-based), IS-95 (CDMA-based), and GSM (an alternative TDMA system).
The details of the third generation are still being defined, but it is known that it will embrace multiple radio
access technologies suitable for local, wide-area, satellite, etc. and able to achieve higher bandwidth than
existing airlinks. Furthermore, it is highly likely that the wireline infrastructure associated with the third
generation system will be based on the same one used by GSM, though this is still under debate.
In the rest of this section, we will present one possible vision for how the GSM cellular and wireline
telephony infrastructure might evolve into the “fourth generation,” based on Internet technology.
[Figure: The GSM cellular and wireline telephone infrastructure: mobile station (MS), base transceiver station (BTS), base station controllers (BSC), mobile switching centers (MSC), the VLR/HLR/AuC/EIR databases, transit switching centers (TSC), the Interexchange Network (IXC: AT&T, MCI, Sprint, WorldCom, ...), and the local exchange network and local loop (RBOCs).]
5.2. The Cellular Telephone Network
The GSM cellular telephone infrastructure consists of Base Station Controllers (BSCs), Mobile Switching
Centers (MSCs), the mobility and authentication databases (Visitor Location Register, VLR; Home
Location Register, HLR; Authentication Center, AuC; and Equipment Identity Register, EIR), and
the Transit Switching Center (TSC). The TSC provides the interconnection to the Interexchange Network
(IXC) provided by the long distance carriers. A local exchange carrier (LEC) network provides the connectivity
between the IXCs and a local user. See the figure above.
If a mobile user makes a call on her cellular phone, the call is routed to the BSC that controls her local cell.
It is switched by the MSC network through to the TSC, and then on to an IXC. From there, the call goes
through to the called party, via the LEC.
[Figure: Data over analog cellular: the connection path from the mobile station (MS) through BSC, MSC, and TSC to the IXC and local exchange network (LEC).]
5.3. Data over Analog Cellular
Now consider a data connection established across the cellular network (see the figure above). The
originator has a laptop computer, a modem, and an analog cellular telephone. A connection is made from
the laptop, through the telephone network, to a modem at the dialed-up computer.
In following this connection path, it is interesting to note the progression from digital to analog encodings,
and back again. The stream starts in digital form on the laptop. The modem converts it to analog form for
transmission through the cellular system via the voice band. At the IXC, the analog signals are converted to
64 kbps PCM coded digital signals. When this reaches the local loop, it is once again converted to analog
form. From here, the modem at the dialed-up computer converts it back to digital form!
These conversions and translations have much to do with the high latencies and limited bandwidths seen on
dial-up lines.
[Figure: Data over digital cellular: the air interface carries 9.6 kbps to 100+ kbps, and an Interworking Function (IWF) at the MSC bridges the digital stream to the IXC and local exchange network (LEC).]
5.4. Data over Digital Cellular
Digital cellular systems are supposed to help, but they accomplish this only up to a point. See the figure
above. Since the cellular system is digital, there is no need for a modem on the end device. Data can be
encoded in the TDMA frame structure and transmitted just like digitized voice.
Unfortunately, the rest of the phone network cannot readily accept the data inside GSM frames. Associated
with the MSC is a special function called the IWF, or Interworking Function. This is really nothing more
than a modem, converting the digital stream back into analog form for carriage across the IXC. The
multiple conversions from digital to analog and back again therefore still occur; the only difference is the
relocation of the modem from the end node into the mobile switching infrastructure.
[Figures: Internet-based alternatives: an IP gateway (GW) at the MSC forwards data traffic to an ISP or directly onto the Internet (with ISDN providing digital connectivity to the home), or across the Internet to a corporate intranetwork.]
5.5. Internet-Based Architecture
An alternative, Internet-based architecture might be organized as follows (see the figures above).
Associated with the MSC is an IP-gateway, with direct connectivity to the Internet. Voice streams are
routed through to the TSC and the IXC. Data streams are forwarded to this gateway, and sent out over the
Internet. This has the significant advantage of converting a circuit-switched data call into a packet-switched
transmission that bypasses the existing telephone network.
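A minimal sketch of the forwarding decision at such an MSC-attached gateway follows; the stream types
and hop names are of course schematic.

    def route_stream(stream_type):
        """Voice keeps its circuit-switched path; data is handed to the IP
        gateway and never touches the interexchange network."""
        if stream_type == "voice":
            return ["BSC", "MSC", "TSC", "IXC", "LEC", "called party"]
        return ["BSC", "MSC", "IP gateway", "Internet", "destination host"]

    print(route_stream("voice"))
    print(route_stream("data"))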
The rest of the path depends on the way in which the computer on the other end of the data call is
connected to the Internet. If it is connected to an Internet Service Provider (ISP) via ISDN, then a local
ISDN circuit between the ISP and the end computer is used to complete the path.
If the end computer is on a corporate intranetwork with Internet connectivity, then all of the routing is done
via the Internet, with no need for modem dial-ups of any kind.
In its ultimate form, we expect to see proxy servers associated with the gateways to the Internet. These
could be used, for example, to packetize the GSM-encoded voice for ultimate delivery to a corporate user
on her desktop computer attached to a corporate network. Such an architecture offers the possibility of
complete integration between cellular handsets and laptops computers in the wide-area, and wireless PBXs
and wireless LANs in the local area.
[Figure: A possible fourth-generation architecture: a smart communicator reaches the next generation Internet over a digital cellular link layer (BSC/MSC with home and foreign agents, HA/FA, and gateways, GW) in the wide area, or over a WLAN/wireless PBX link layer (base stations, BS) in the local area, with gateways and proxy servers fronting a corporate intranetwork.]
6. Summary and Conclusions
We believe that data access in telecommunications networks in general, and cellular telecommunications
networks in particular, will accelerate in importance. Data applications will come to dominate voice
applications.
The Internet’s emerging capabilities for real-time traffic, multipoint communications, and broadcast-based
information dissemination make it a compelling technology to use for the next generation of multimedia
systems.
The secret of the Internet’s success is threefold. First, the network can be dumb because the intelligence is
embedded in the end points. Moore’s Law tells us that for a given cost, end points will get ever more
powerful at an exponential rate. Second, the relatively simple and inexpensive infrastructure components
that underlie Internet technology give a tremendous economic edge to IP-networks in comparison with
existing telephone network technologies. And third, the easiest way to gain performance in IP-networks is
by throwing bandwidth at the problem. This is a reasonable approach, given the trends towards ever
increasing bandwidth in the wide area.
Finally, the economies of scale favor the Internet. Hundreds of thousands to millions of units of IP
infrastructure components are sold per year, compared to a few thousand for telephony network
infrastructures.
These trends present a powerful case to rethink the architecture of the telecommunications infrastructure
beyond the third generation.
7. References
[Goodman97] D. J. Goodman, Wireless Personal Communications Systems, Addison-Wesley Longman,
Reading, MA, 1997.
[Keshav97] S. Keshav, An Engineering Approach to Computer Networking: ATM Networks, the Internet,
and the Telephone Network, Addison-Wesley Longman, Reading, MA, 1997.