Emerging Internet Technologies
Harish Sethu
Department of Electrical and Computer Engineering
Drexel University
I. Introduction and History
The last decade has seen a remarkable growth in the development of new networking
technologies and applications, contributing to a historic transformation in the way we
communicate amongst ourselves. This growth, most recently dominated by the
emergence of the World Wide Web application, is fueling development of a variety of
novel applications in education, engineering, business, entertainment and medicine.
These applications and emerging developments in the Internet infrastructure are likely to
have a profound impact on our day-to-day lives, underscoring the importance of gaining
an appreciation for the technologies underlying the Internet.
The Internet began as a modest network called the ARPANET, first deployed in 1969
with just four routers (then known as Interface Message Processors), interconnecting a
small number of host computers and terminals. Funded by the Advanced Research
Projects Agency (ARPA) within the U. S. Department of Defense, the ARPANET project
was intended to facilitate the sharing of computing resources among researchers at
various institutions across the country.
Packet switching, as opposed to circuit switching used in telephone networks, was the
most important new technology used in the ARPANET (and in today’s Internet as well).
A switch is a device that transfers information from its inputs to the appropriate outputs.
Our voices spoken over a phone or the e-mails we send go through several such devices
before being received at the destination. In circuit switching, when information is to be
transferred from a certain input to a certain output, a physical circuit is set up between the
input and the output before any data transfer begins. Once a dedicated circuit is set up, all
data that arrives on the input is directly transferred to the output. In packet switching,
however, data arrives in small blocks of bits called packets carrying a header containing
information about the intended destination. A packet that arrives at an input is forwarded
to the appropriate output based on the information contained in the packet or some preprogrammed information in the switch. Sending messages in a packet-switched network
does not require a dedicated connection from the input link to the output link.
Figure 1 illustrates the difference between circuit switching and packet switching.
Each of the rectangles in the figure represents a switch (as in a telephone switching
office). Assume that you wish to call a long-distance friend. Your voice is likely to go
through a cascade of switches before it reaches your friend’s telephone. As shown in
Figure 1(a), the switching network establishes a dedicated physical connection between
the two telephones with the required bandwidth set aside for this connection through the
entire duration of the call. If you were to send an e-mail to your friend through a packet-switched network, however, no such physical connection with dedicated bandwidth is
established between your computer and that of your friend. Your e-mail is converted into
a sequence of packets, and each packet is routed through the network independently.
Figure 1(b) shows how two packets of your e-mail may take altogether different paths to
reach your friend’s computer. The packets may arrive out-of-order at your friend’s
computer, but will be reassembled in correct order before your friend reads your e-mail.
Figure 1(a). Circuit Switching
Figure 1(b). Packet Switching
The advantage of packet switching lies in its efficient use of the shared network
resources. In a circuit-switched telephone network, the bandwidth required to support a call is reserved by maintaining the established physical connection for the entire duration of the call. This bandwidth is tied up and unavailable for others to use whether you and your friend talk quickly, talk slowly, or say nothing at all. This kind of service
from the network is ideal for a telephone conversation, since you do want to have the
bandwidth reserved for the entire duration of your conversation. Computers, on the other
hand, typically communicate in short bursts after long periods of silence. It is wasteful to
reserve bandwidth between your computer and a printer on the local area network for the
few milliseconds per hour that it is likely to be used. It is similarly wasteful to occupy
resources for several seconds setting up a circuit just to send an e-mail across it for a
couple of microseconds, and then spend several additional seconds to terminate the call.
Packet switching is more efficient in its use of network resources since each packet uses only the resources it needs, when it needs them, without making reservations.
How we use road space offers an excellent analogy to illustrate the differences
between circuit switching and packet switching. Circuit switching is analogous to
reserving the entire road space between the source and the destination before getting into
your car and embarking on your journey. With such inefficient use of road space, the
disadvantage is clearly that you may almost never be able to get yourself on the road
since you will almost always find it reserved by someone else! The benefit, of course, is
that once you are on the road, you never experience congestion and the delays are entirely
predictable. Packet switching, on the other hand, is analogous to the way we actually
share road space amongst ourselves. Each car uses up space as necessary, without the
benefit of any prior guarantees. Large-scale networks such as the Internet are shared
resources similar to the network of roads we use, and packet switching offers a more
efficient use of this shared resource.
Packet switching was invented in a theoretical form by Leonard Kleinrock as part of
his Ph.D. thesis at MIT. In 1959, he published the first paper on digitizing and
transmitting information, which later came to be known as packet switching. Paul Baran,
a 1949 graduate of Drexel University (then, Drexel Institute of Technology), furthered the
study of packet switching significantly with a goal toward designing communication
networks that could survive a nuclear war. Donald Davies, a British researcher, is also
widely recognized as having independently contributed to the early development of the
technology.
In the mid-1960's, packet switching was a novel concept, and the telephone companies
did not wish to have anything to do with it. The telephone companies, with over a
hundred years of experience in circuit switching, had allowed familiarity to cloud their
analysis of what is technically feasible and what is not. The original creators of the
ARPANET had requested the telephone industry to participate in the venture but were
instead only given assurances that packet switching wouldn't work! It is ironic that
Alexander Graham Bell, responsible for the birth of the telephone industry during the
latter part of the 19th century, encountered similar discouragement from industrialists too
used to telegraphy as the medium of communication. Western Union, for example,
rejected the technology invented by Bell, claiming that telephones had no future since
phone calls did not leave a written record the way telegrams do!
Because of the motivations behind the work of Paul Baran mentioned above, and
because of the Pentagon parentage of ARPA, there has been a longstanding myth that the
Internet was developed to create a survivable communication network for the nation in
the event of a devastating nuclear attack. While the ARPANET project had nothing to do
with surviving war, the myth has been repeated endlessly by the mainstream media as
well as some textbooks in networking. The original pioneers only intended it as a project
to save computing resources by allowing sharing of these resources between various
scientific laboratories supported by ARPA. Bob Taylor was the young director of ARPA
who started the ARPANET project. He later learned that myths have a life of their own
and that there are occasions when facts cannot be allowed to spoil a good story. When
Time magazine recently repeated the same myth, Taylor wrote them a letter. Time did not
publish his letter, and he received a reply informing him that their sources were
correct!
After the original deployment of four routers in 1969, ARPANET grew steadily, and
many more researchers in universities gained access to it. At the time, several other
networks had begun to sprout and grow, but they all used different interfaces, packet
formats and transmission rates. In 1973, Bob Kahn and Vint Cerf came up with the idea
of creating a network of networks, so that anyone on any of these networks could
communicate with anyone else. They invented TCP/IP, the set of protocols that run the
Internet today, and which marked the true beginnings of internetworking.
Another significant milestone in the growth of the Internet was the invention of
Ethernet, the network protocol that allowed computers within a building to link to each
other and share common resources such as printers. Invented by Bob Metcalfe in 1973,
Ethernet transformed the nature of office and personal computing. Through their office
computers, a much larger segment of the population now gained access to the Internet.
The Ethernet provided the “local roads” to access the “superhighways” built from the Internet technologies that descended from the ARPANET. The inventions of TCP/IP and Ethernet together fueled most of the growth of the Internet during the 70s and the early 80s.
While the ARPANET was actually an official federal research facility and was not
originally invented for personal communications, as early as the early 1970s, it became
apparent that researchers were increasingly using it to exchange personal messages. Ray
Tomlinson, in 1972, saw the need and created the first e-mail delivery software. He is
today credited with the brilliant decision of choosing the '@' sign to separate a username
from the domain name in e-mail addresses. E-mail, the first network application to be
appreciated by more than just the researchers, was almost singularly responsible for
increased popularity of the Internet during its early years.
More new networks emerged during the 1980's, the most prominent among which was
NSFNET, created by the National Science Foundation as a high-speed successor to the
ARPANET that would be open to all researchers in universities. Sometime during the
mid-80's, the collection of all the different networks began being referred to as the
Internet. Since around 1995, the Internet has become increasingly commercial with a
steady decline of the U. S. federal government's support of the Internet infrastructure.
Large companies such as AT&T, Sprint and WorldCom have created large, very high-speed backbone networks that have essentially replaced earlier non-commercial
deployments.
The application that really gave the Internet its strongest boost was the World Wide
Web, allowing everyone to easily navigate through the rich and ever-increasing resources
of the Internet. Tim Berners-Lee, while at CERN, the European particle physics
laboratory, wrote the original prototype software in 1990. Though surprising in hindsight, the World Wide Web got off to a very slow start. The ability to jump around
from one resource to another through hyperlinks was not of much use when there were
almost no institutions, let alone individuals, that supported such browsing with public
web pages. When Marc Andreessen wrote the browser called Mosaic in 1992, it was
released for free over the Internet. Mosaic gained popularity very soon, especially for the
seamless way in which it integrated text and images. In 1994, the Mosaic browser,
renamed Netscape, was commercialized.
There is much controversy and debate about what should be regarded as the true
beginning of the Internet. Did the Internet begin with ARPANET, or was it TCP/IP that
really created it through internetworking? Was it Ethernet that really gave birth to the
Internet through greatly increasing accessibility, or should this credit go to the World
Wide Web that really catapulted the world into the information revolution we are
witnessing today? Regardless of these questions and regardless of what the coming
decades may bring, the Internet has certainly begun a historic transformation in our lives
with social, economic and political consequences far beyond the rational imagination of
its pioneers.
The Internet, still very young, has continued to evolve in its architecture and
infrastructure to this date, with many exciting new developments yet to come. The
Internet Society is an organization that oversees a number of boards and task forces
involved in Internet development and standardization. There are three organizations
under the auspices of the Internet Society which are responsible for Internet design,
engineering and management, including the development and publication of protocol
standards. The Internet Architecture Board is responsible for the overall architecture of
the Internet, and provides broad direction to the Internet Engineering Task Force (IETF).
IETF is the engineering and development body, and the most relevant one as regards
engineering details that shape the Internet. The Internet Engineering Steering Group is
responsible for the technical management of the IETF activities and the Internet standards
process.
The following section discusses an important selection of fundamental aspects of
networking relevant to the Internet. Section III presents a selection of emerging network
architectures and services that may become important components of a future Internet.
Section IV concludes this article with brief speculations on the future evolution of the
Internet.
II. A Few Fundamentals
Networking has steadily evolved into a vast field of study, borrowing from many areas
of science and engineering. This section provides a high-level overview of some of the
basic concepts in networking, with an emphasis on those fundamental aspects that are
relevant to gaining an understanding of the future technological evolution of the Internet.
Protocol Layering
A protocol is a set of rigid rules by which meaningful communication can be
accomplished between a set of entities such as application processes running on different
computers. For example, an e-mail application uses a certain protocol that defines address
and data formats to communicate with another e-mail application on a different
computer. Similarly, a hardware interface device uses a pre-defined protocol with
specifications such as clock speed and signal levels to transmit data over a physical wire
to the device on the other end. Protocol layering is a design technique that decomposes
the entire task of computer communications into manageable components or layers.
Figure 2. Layers in a typical implementation of a protocol architecture: the Application (e.g., HTTP), Transport (e.g., TCP), Network (e.g., IP), Network Access (e.g., Ethernet) and Physical layers on each of two communicating systems, connected by a physical medium such as copper.
A simple analogy of mailing a greeting card through the U. S. Postal Service
illustrates the concept of layering. When you have to mail a greeting card to a distant
friend, you place the greeting card in an envelope, write your friend's address on the
envelope, stamp it and drop it in a mailbox. The Postal Service picks up your card, reads
the address on the envelope, routes it to your friend's city and, for the most part, manages
to deliver it to your friend's door. In sending your friend the card, you follow a protocol
that includes writing the address on the envelope in an acceptable format, stamping it and
dropping it in the mailbox. However, you never have to concern yourself with details
such as the mode of transportation by which your card is carried to your friend's city, or
the exact directions to your friend's house. These details are hidden from you because the
Postal Service provides you a service by presenting you an abstraction that simplifies
your task. The abstraction presented to you is that the mailbox at your friend's house is
the same as the mailbox at the corner of your street into which you drop your card.
A network architecture is designed in a somewhat similar fashion, as multiple layers of protocols in which each layer simplifies the task of the layer above it by providing a service that presents a simple abstraction to that higher layer. Just as your task is
simplified by not having to know the directions to reach your friend’s house, an e-mail
application is made less complex by not having to concern itself with the details of
exactly how data should be routed from the sender to the recipient.
Another important goal in the design of networks is to be able to accommodate
changes in the technologies as well as in the way networks are used by people. An
improvement in the technology to send more data per second on a wire, for example,
should not require one to rewrite the e-mail application software. In a network
architecture implemented as multiple layers of protocols, if you wish to add a certain new
functionality to the network, you have to change only the protocol layer concerning that
function. Protocol layering, thus, serves the additional purpose of allowing independent
growth of various technologies in networking by providing a modular design of the
network architecture.
The protocol architecture that is most commonly implemented can be roughly divided
into five layers, as illustrated in Figure 2.
The Application layer is the top layer, which allows applications such as web browsers
and e-mail software packages to communicate with a peer application. Examples include
ftp (File Transfer Protocol) and http (HyperText Transfer Protocol).
The Transport layer provides a communication service to the application layer. The
most common transport layer protocol is TCP (Transmission Control Protocol), which
provides an abstraction of a point-to-point error-free channel that delivers data in exactly
the order they were sent.
The Network layer hides the topology of the network from the transport layer, and is
primarily responsible for routing the data passed to it to its ultimate destination. IP
(Internet Protocol) is the dominant network layer protocol. Unless you are using a
special-purpose package meant for proprietary networks, chances are that your browser
or your e-mail package uses the service provided by the TCP/IP suite of protocols.
The Network Access layer provides access to the network, whether through point-to-point links or through a broadcast medium such as in wireless networks. This layer hides
details from the network layer such as whether the communication is over copper wires,
optical cables or over a wireless medium, by providing an abstraction of a raw
transmission facility free of undetected transmission errors. This layer is also often
referred to as the data link layer.
The Physical layer is the lower-most layer concerned with the transmission of raw bits
of data over a communication channel. This layer determines details such as how many
volts should be used to represent a 1 as opposed to a 0, whether transmission may
proceed simultaneously in both directions, and how many bits should be transmitted per
second.
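As an illustrative sketch of how these layers cooperate, the following Python fragment mimics the way each layer wraps the data handed down from the layer above with its own header. The protocol names, addresses and header formats here are simplified assumptions for illustration only, not the actual formats used by HTTP, TCP, IP or Ethernet.

```python
# A minimal sketch of layered encapsulation: each layer prepends its own
# header to whatever the layer above handed it (all formats are illustrative).
def application_layer(message):
    return b'HTTP ' + message                        # application protocol, e.g. HTTP

def transport_layer(segment, src_port, dst_port):
    header = f'TCP {src_port}>{dst_port} '.encode()  # transport header, e.g. TCP ports
    return header + segment

def network_layer(packet, src_ip, dst_ip):
    header = f'IP {src_ip}>{dst_ip} '.encode()       # network header, e.g. IP addresses
    return header + packet

def access_layer(payload, src_mac, dst_mac):
    header = f'ETH {src_mac}>{dst_mac} '.encode()    # link header, e.g. Ethernet addresses
    return header + payload

# The application's message is wrapped once per layer on its way to the wire.
frame = access_layer(
    network_layer(
        transport_layer(
            application_layer(b'GET /index.html'),
            src_port=49152, dst_port=80),
        src_ip='10.0.0.1', dst_ip='192.0.2.7'),
    src_mac='aa:bb:cc:dd:ee:01', dst_mac='aa:bb:cc:dd:ee:02')
print(frame)
```

Reading the nested calls from the inside out mirrors the stack in Figure 2: each layer needs to know only the service of the layer directly below it.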
Switches and Routers
The term switch is generally used when the data being forwarded is at the data link layer.
The term router is generally used when the data being forwarded is at the network layer.
In the following, we will use the terms switch and switching, although the concepts
discussed in this section are also relevant to routers.
Figure 3. (a) When arriving packets are headed to different output links, (b) When more than one arriving packet is headed to the same output link.
Achieving good performance in the forwarding of packets from inputs to outputs is not
quite straightforward. An important performance objective is to be able to successfully
forward as many packets as possible and achieve an output bandwidth close to the input
bandwidth. Consider a simple 2-input 2-output switch. Consider a scenario, shown in
Figure 3(a), in which a packet arrives at each of the two inputs at the same instant of
time, but headed to different outputs. In this case, the switch can immediately begin the
transmission of both the packets over the output links. Now consider another scenario,
shown in Figure 3(b), in which both the packets need to be forwarded to the same output
link. This presents a difficulty since only one packet can be transmitted out of an output
link at any given instant of time. The second packet would either have to be discarded or
stored in the switch until the output link is available for transmission. In general, in a 2-input 2-output switch, if both output ports are equally likely to be the destination of the
incoming packets, and if p is the probability that a new packet arrives at an input port
during any given cycle, then the probability that a packet appears at a given output port is
only p(1-p/4). As an example, if a new packet arrives at each input port each cycle (p=1),
only 75% of the packets will successfully exit the switch through an output port while the
rest will be discarded! This gets even worse as the number of input and output ports
increases; in an n-input n-output switch, if a new packet arrives at each input port each
cycle, the probability that a packet is discarded is as high as (1-1/n)^n.
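The following Python sketch is a simple Monte Carlo check of these figures. The arrival model is an assumption made purely for illustration: independent arrivals at each input with probability p, uniformly random destinations, no buffering, and one packet per output per cycle.

```python
import random

def switch_throughput(n_ports, p, cycles=100_000):
    """Estimate the fraction of arriving packets that exit an n-port switch
    with no buffering, when contending packets for the same output are dropped."""
    arrived = delivered = 0
    for _ in range(cycles):
        # Packets arriving this cycle, each tagged with a random output port.
        destinations = [random.randrange(n_ports)
                        for _ in range(n_ports) if random.random() < p]
        arrived += len(destinations)
        delivered += len(set(destinations))   # one winner per contended output
    return delivered / arrived if arrived else 0.0

# With p = 1 on a 2x2 switch, roughly 75% of packets get through,
# matching the p(1 - p/4) expression in the text.
print(switch_throughput(2, 1.0))

# For larger n with p = 1, the discarded fraction approaches (1 - 1/n)^n.
print(1 - switch_throughput(16, 1.0), (1 - 1/16) ** 16)
```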
Since discarding a packet is clearly a sub-optimal option, most routers and switches
implement buffers to queue packets headed to output links that are temporarily busy
transmitting other packets. Buffers in switches and routers reduce the probability that a
packet is discarded, and thus help achieve a higher throughput.
Figure 4. (a) An input-queued switch, (b) An output-queued switch
Buffer storage in switches and routers for temporary queueing of packets can be
organized in a variety of ways. The primary goals in choosing a buffer organization are to
minimize the delay and maximize the data rate of packets successfully going through the
switch. There are two important queueing strategies that are commonly seen in
commercial products. In input queueing, packets are queued in buffers at the inputs of the
switch through which they arrive, while in output queueing, packets are queued in buffers
at the outputs through which they need to be transmitted. Figure 4(a) illustrates a 2-input
2-output input-queued switch and Figure 4(b) illustrates a 2-input 2-output output-queued
switch.
Consider a 2-input 2-output input-queued switch with buffers that only allow First-In-First-Out (FIFO) access. Assume that no more than one packet can be transmitted over a
link during each cycle. At the end of cycle 1, as shown in Figure 5(a), assume that each
of the buffers has one packet queued in it. Let both of these packets be headed to output
1. Assume also, at this point, that there is a new packet arriving at input 0, headed to output 0. During cycle 2, if the packet from input 1 is the first one to begin transmission,
the new packet has to be stored in the input buffer even though output 0 is free and
available for transmission. This is a well-known phenomenon known as head-of-line
blocking, and causes significant loss of bandwidth in input-queued switches with FIFO
buffers. Figure 5(a) shows this scenario, in which the packet headed to output 0 is waiting
behind another packet even though output link 0 is free and available for transmission
during cycle 2.
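The following toy Python fragment reproduces the cycle-2 scenario just described, under simplified assumptions (strictly FIFO input queues, at most one packet per output per cycle). It is only an illustration of head-of-line blocking, not a model of a real switch.

```python
from collections import deque

# Input 0 holds an old packet for output 1 and, behind it, the new packet for output 0.
# Input 1 holds a single packet for output 1.
input0 = deque([{'dst': 1}, {'dst': 0}])
input1 = deque([{'dst': 1}])

def one_cycle(queues):
    """Each output can accept at most one packet per cycle; each FIFO input
    queue may only offer its head-of-line packet."""
    busy, sent = set(), []
    for q in queues:
        if q and q[0]['dst'] not in busy:
            pkt = q.popleft()
            busy.add(pkt['dst'])
            sent.append(pkt)
    return sent

# Cycle 2: input 1 wins output 1, so input 0's head-of-line packet must wait,
# and the packet behind it (headed to the idle output 0) is blocked as well.
print(one_cycle([input1, input0]))   # only the packet from input 1 is transmitted
print(input0)                        # both of input 0's packets are still queued
```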
Head-of-line blocking can be avoided with implementations of buffers which do not
require a strict FIFO policy, so that packets do not have to wait unnecessarily behind
other packets awaiting transmission. Alternatively, one can use output queueing. As shown
in Figure 5(b), output queueing does not suffer from the problem of head-of-line
blocking. By the end of cycle 2, the new packet headed to output 0 is already transmitted,
while in the input-queued switch, it is still waiting in its input buffer. In output queueing,
however, when packets headed to the same output arrive at each of the two input queues,
the output buffer should be capable of receiving both packets so that no packet is
discarded. This implies that the output buffers shown in Figure 5(b) have to be capable of
receiving data at twice the rate of each input link. The improvement in bandwidth with
output-queueing comes at the price of this increase in design complexity.
Figure 5. An illustration of Head-of-Line Blocking, showing the state at the end of cycles 1 and 2. (a) Input-queued switch, (b) Output-queued switch.
A variety of different buffer organizations can be seen in commercially available
switches today. A buffer organization that includes both input and output queueing is
fairly common. Many high-performance switches implement a single shared buffer for
the output queues in order to achieve better efficiency in the use of buffers. An increasing
number of switches now employ, at each input port, separate input buffers for packets
headed to different output ports. With sophisticated scheduling mechanisms, this makes it
possible to avoid head-of-line blocking and actually achieve a very high throughput. With
the rising demands for high bandwidths and low delays, novel high-performance switch
architectures are still being proposed and implemented.
Virtual Circuit Switching
Virtual circuit switching seeks to combine the efficiency and flexibility of packet
switching with the desirable properties of circuit switching such as guaranteed bandwidth
and bounded delays. In traditional packet switching, each packet is routed independently
based on the destination information contained in its header. In virtual circuit switching,
instead of destination information, each packet carries only a virtual circuit identifier,
which is used to route the packet. Before beginning transmission, the sender has to
establish a virtual circuit through the network by a process (known as signaling) similar
to that in circuit switching. The only difference is that the circuit established is virtual,
not real. Switches in the network maintain information about each virtual circuit
established through them, and route all packets with the same virtual circuit identifier to
the same output. Since the virtual circuit identifier in a packet is only known to those
switches through which the virtual circuit was set up, all packets sent in the same session
by a source follow identical paths to the destination. This is in contrast to packet
switching, where each packet is routed independently and therefore, packets belonging to
the same session may traverse altogether different paths to reach the destination.
Virtual circuits (VCs) facilitate easy management of traffic flows since each flow is
identified by its virtual circuit identifier, and switches can maintain per-VC information
such as reserved bandwidth or delay requirements. In addition, since a switch only has to
look up an identifier to determine the forwarding action, and since identifiers are much
smaller than destination addresses, the packet header overhead with virtual circuit
switching is smaller. Further, since the forwarding decision for each packet reduces to a simple table lookup, forwarding is faster and easier to implement in hardware.
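As a rough sketch of the mechanism, each switch might hold a table, installed during signaling, that maps an incoming virtual circuit identifier (and the port on which it arrives) to an output port and an outgoing identifier. The table layout and identifier values below are illustrative assumptions, not drawn from any particular switch.

```python
# A toy per-switch virtual-circuit table (illustrative values).
vc_table = {
    # (input port, incoming VCI) -> (output port, outgoing VCI)
    (0, 17): (2, 42),
    (1, 17): (2, 43),   # the same VCI on a different input is a different circuit
}

def forward_packet(in_port, packet):
    """Forward a packet along its virtual circuit by a simple table lookup."""
    out_port, out_vci = vc_table[(in_port, packet['vci'])]
    packet['vci'] = out_vci          # the identifier is swapped hop by hop
    return out_port, packet

print(forward_packet(0, {'vci': 17, 'payload': b'...'}))   # -> (2, {...'vci': 42...})
```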
Asynchronous Transfer Mode (ATM) is a technology that uses virtual circuits. The
development of ATM standards in the late 80s and early 90s attempted to create a
complete end-to-end wide area network architecture. The explosive growth of the Internet
based on the TCP/IP architecture in the early 90s, however, caused the idea of a universal
end-to-end ATM network to perish. ATM technology continues to have several technical
advantages over IP networks, and it has found wide application in the high-speed Internet
backbone networks today.
Routing
Routing is the process by which a network device such as a router collects, maintains and
distributes information about paths to various destinations in the network. A router uses
this process to determine the output link to which an arriving packet should be forwarded.
Routing is typically accomplished through use of distinct routing tables maintained
within each router. A simple destination-based routing table, such as in Internet routers,
consists of two columns: the first is the address of the destination, and the second
specifies the link to use to reach the destination through the best path. Determining the
best path to each destination from each router in the network is the primary challenge in
routing.
There are two popular algorithms by which routing protocols determine the best path
to any destination. One is known as link-state routing, and the other is known as
distance-vector routing. Both of these algorithms are based on measuring the cost of
traversing the links in the network, such as by measuring the queueing delay on each link,
and then determining the lowest-cost route to the destination.
In link-state routing, each router periodically measures the cost to reach its neighbors,
and distributes this information to all the routers in the network. Each router has the latest
information about all the links in the network, and thus has complete knowledge of the
network topology. Each router, therefore, can independently compute the best path to
every destination.
In distance-vector routing, each router periodically sends its neighbors information
about the cost to reach each destination from itself. Thus, each router measures the cost
of the links to its neighbors, and in addition, is periodically informed of the cost of
reaching each of the destinations from each of its neighbors. Based on this information,
each router can determine the lowest-cost link through which packets should be
forwarded to reach a destination.
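A minimal sketch of one distance-vector update step is shown below, under the assumption that each router stores, for every destination, the best known cost and the neighbor used to reach it. The data structures and names are illustrative, not taken from any particular routing protocol.

```python
INF = float('inf')

def distance_vector_update(my_costs, link_cost, neighbor_vectors):
    """my_costs        : dict dest -> (cost, next_hop) currently believed
       link_cost       : dict neighbor -> cost of the direct link to it
       neighbor_vectors: dict neighbor -> {dest: cost advertised by that neighbor}
       Returns the updated dict dest -> (cost, next_hop)."""
    updated = dict(my_costs)
    for nbr, vector in neighbor_vectors.items():
        for dest, advertised in vector.items():
            candidate = link_cost[nbr] + advertised
            if candidate < updated.get(dest, (INF, None))[0]:
                updated[dest] = (candidate, nbr)   # cheaper route via this neighbor
    return updated

# Router A has neighbors B (link cost 1) and C (link cost 4), each advertising a vector.
table = distance_vector_update(
    my_costs={'A': (0, None)},
    link_cost={'B': 1, 'C': 4},
    neighbor_vectors={'B': {'B': 0, 'C': 2, 'D': 5}, 'C': {'C': 0, 'D': 1}})
print(table)   # B is reached at cost 1, C at cost 3 via B, and D at cost 5 via C
```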
Both distance-vector and link-state routing algorithms are used in the Internet. Open
Shortest Path First (OSPF) is a link-state routing protocol, commonly used for routing
within domains. Border Gateway Protocol (BGP) is a distance-vector protocol, used in
the core of the Internet, i.e., the Internet backbone. Within each domain, the border
routers, which directly connect to the Internet backbone, are capable of using both
routing protocols.
Flow Control and Congestion Avoidance
Flow Control is the problem of ensuring that a sender does not send data faster than the
rate at which the receiver can receive data. One may think of the receiver as either the
network or the destination end-system. Solutions to this simple-sounding problem can be
quite complex. A good flow control protocol should respond quickly to changes in the
state of the network, and also reach a stable equilibrium quickly after the change. In
addition, scalability, simplicity and fairness are important desirable properties of flow
control protocols. All of these requirements present a variety of interesting trade-offs in
the design of flow control algorithms.
Flow control algorithms can be divided into two broad categories: open loop flow
control and closed loop flow control. In open loop flow control, the sender receives no
feedback on whether or not it needs to slow down its rate. Instead, before the start of
transmission, the sender describes the behavior of its traffic to all the devices in the
network that may be affected by it. Each network element examines this traffic
descriptor, and determines whether or not it can support this traffic from the sender. After
an automated negotiation phase, the sender and the network elements agree on a traffic
descriptor. The sender now makes sure to shape its traffic to the parameters in the
negotiated traffic descriptor. A traffic descriptor used frequently in real networks
specifies the long-term average rate, ρ, at which data can be sent, and the maximum size of a burst, σ. Mathematically, over any interval of time of length t, the number of bits transmitted can be no more than (σ + ρt).
A token bucket regulator is frequently used to shape traffic to the kind of traffic
descriptor detailed above. Consider a bucket with tokens in it, with each token implying
permission to send one packet. Assume that tokens are generated in the bucket at a rate equal to ρ, the long-term average rate at which packets can be sent. Assume that the size of the bucket, i.e., the maximum number of tokens that the bucket can hold, is σ. A token-bucket regulator ensures that the amount of data that can be sent at any instant of time is no more than that corresponding to the number of tokens in the bucket. Clearly, the token-bucket regulator limits the largest burst to the size of the token bucket and the average rate to the rate at which tokens are generated. Figure 6 illustrates an instance
of transmission through a token bucket regulator.
Figure 6. Token-bucket flow control.
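A minimal token-bucket sketch in Python follows, assuming tokens accumulate continuously at rate ρ up to a bucket of size σ; the parameter names and the one-token-per-packet granularity are illustrative choices, not part of any standard.

```python
import time

class TokenBucket:
    """A minimal token-bucket regulator sketch (illustrative only).

    rho   -- long-term average rate, in tokens (packets) per second
    sigma -- bucket depth, i.e., the largest burst allowed
    """
    def __init__(self, rho, sigma):
        self.rho = rho
        self.sigma = sigma
        self.tokens = sigma            # start with a full bucket
        self.last = time.monotonic()

    def allow(self, n_packets=1):
        """Return True if n_packets may be sent now, consuming that many tokens."""
        now = time.monotonic()
        # Tokens accumulate at rate rho but never exceed the bucket size sigma.
        self.tokens = min(self.sigma, self.tokens + (now - self.last) * self.rho)
        self.last = now
        if self.tokens >= n_packets:
            self.tokens -= n_packets
            return True
        return False

# Example: at most 100 packets per second on average, bursts of up to 20 packets.
bucket = TokenBucket(rho=100, sigma=20)
if bucket.allow():
    pass   # transmit the packet; otherwise queue it or drop it
```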
Open-loop flow control strategies with token bucket traffic regulators at the source
have steadily gained in acceptance and popularity. The token bucket regulator is a key
component of new emerging service models proposed for the future Internet, and
discussed in the following section.
In closed-loop flow control, the source receives feedback from the network or the end-system receiver regarding whether it should cut down its rate or increase it. Closed-loop
flow control algorithms used in the Internet typically use the acknowledgments received
at the sender as the only feedback from the receiver or the network. Such algorithms
typically use a dynamic window, maintained at the sender. The size of the window at any
time represents the amount of data the sender can send without waiting for an
acknowledgment from the receiver.
Figure 7. An illustrative plot of TCP sending rate against time: the rate increases exponentially up to a threshold, then linearly, and is cut back sharply (with a new, lower threshold) when a time-out occurs due to congestion.
TCP, the dominant transport protocol in the Internet, uses closed-loop flow control
with a dynamic window. The sender typically begins sending packets slowly,
exponentially increasing its window size and therefore the rate, during the slow-start
phase. When the window size increases beyond a certain threshold, the rate of increase in
the window size is reduced to a slower, linear rate. Finally, when the sender's rate reaches
a point that causes congestion in the network, acknowledgments begin to take longer to
get back to the sender. When the time taken to receive an acknowledgment increases
beyond a certain value called the time-out period, the TCP source assumes that there is
too much congestion in the network and therefore the packet has either been delayed or
has been dropped. At this point, the sender retransmits the packet, cuts down its rate
drastically and resets the threshold to half the rate at which time-out occurred.
Immediately after cutting down its rate, the sender begins to increase its window size
again, first exponentially until the threshold rate, and then linearly until a new time out
occurs. Figure 7 shows the typical “saw-tooth” pattern traced by the TCP sending rate as it changes with time.
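The following sketch generates such a saw-tooth trace under simplified assumptions: the window starts at one packet, doubles each round below the threshold, grows by one packet per round above it, and a time-out is forced whenever the window reaches a fixed level. It illustrates the behavior described above and is not an implementation of any particular TCP variant.

```python
def tcp_window_trace(threshold, timeout_at, rounds):
    """Return a list of window sizes, one per round, tracing the saw-tooth."""
    window, trace = 1, []
    for _ in range(rounds):
        trace.append(window)
        if window >= timeout_at:            # congestion: a time-out occurs
            threshold = timeout_at // 2     # new threshold = half the loss point
            window = 1                      # restart with slow start
        elif window < threshold:
            window *= 2                     # slow start: exponential growth
        else:
            window += 1                     # congestion avoidance: linear growth
    return trace

print(tcp_window_trace(threshold=8, timeout_at=20, rounds=30))
```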
There exist several versions of TCP, many of them designed to improve the sender's
performance or the network's performance by improving the reaction time of the sender
to congestion in the network, or to smooth out the saw-tooth behavior described
above. TCP, however, suffers from the fact that it interprets any abnormal delay in
receiving an acknowledgment as being due to congestion in the network. In wireless
networks, for example, where packets may be lost due to the unreliability of the channel
itself, TCP sources end up reducing their rates unnecessarily. With the increasing
popularity of wireless communications, novel flow control algorithms are being
developed for the future Internet where packets may have to routinely go through both
wireless and wired channels.
Note that the TCP flow control algorithm increases its rate until it causes congestion
and then backs off. The algorithm, therefore, does not actually avoid congestion as much
as it controls the congestion. It is always desirable to avoid congestion rather than to
constantly push the network into a state of congestion. Random Early Detection (RED) is
a clever algorithm that avoids congestion by deliberately dropping packets early and thus
causing time-outs even before congestion actually occurs.
Each RED router monitors the queue lengths of each of the flows in its buffers. It
drops packets from the flows with a probability that is a function of the queue length. The
larger the queue length, the closer the flow is to causing a state of congestion and so the
higher the drop probability. When a packet is dropped, the TCP source observes a time-out and backs off to a slower sending rate. Note that the packet is dropped when
congestion is impending, and not necessarily when congestion actually occurs.
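A sketch of a RED-style drop decision follows, assuming the common scheme in which the drop probability rises linearly between a minimum and a maximum queue-length threshold. The parameter names and values are illustrative assumptions rather than a faithful rendering of any deployed implementation.

```python
import random

def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """Drop probability: zero below min_th, rising linearly to max_p at max_th,
    and 1 (forced drop) beyond max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def should_drop(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """Decide probabilistically whether to drop the arriving packet."""
    return random.random() < red_drop_probability(avg_queue, min_th, max_th, max_p)

print(should_drop(avg_queue=12))   # occasionally True as the queue builds up
```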
III. Emerging Architectures and Services
The Internet was originally designed to support data communications, at a time when
data did not mean digital audio and video streams. Multimedia, however, is now
becoming an increasingly dominant component of Internet traffic. On-demand audio and
video applications, such as Internet radio stations and broadcast news services, are
becoming quite pervasive. Many new advances in the Internet architecture have been
significantly influenced by the need to accommodate the growing importance of
multimedia communications in the Internet.
The Internet backbone is over-designed for bandwidth, and is almost always capable
of accommodating the requirements of multimedia applications today. However, the
access networks closer to the end-user are typically constrained in bandwidth and are
unable to support multimedia applications satisfactorily at all times. For the most part,
today, the Internet offers only a best-effort service to its users. Best-effort service implies
that the network does the best it can to forward and deliver packets, but provides no
guarantees whatsoever. Overloaded routers may drop packets from the queues or a router
may provide too little bandwidth to a video stream requiring a steady high-speed
connection.
Different kinds of traffic from different applications can have widely varying
requirements in terms of guarantees sought from the network. A throughput guarantee for
a flow implies that the network delivers at least a certain minimum bandwidth to the
application requesting this service. Internet telephony, for example, requires a steady
guaranteed bandwidth to achieve quality equivalent to that in traditional telephony. A
delay guarantee provides a bound, either deterministic or statistical, on the delays
experienced by packets belonging to the application. For example, a deterministic bound
may specify the maximum permitted delay, while a statistical bound may specify the
maximum permissible average delay. Almost all kinds of interactive applications require
certain reasonable delay bounds for them to have a utility value. A delay-jitter bound
specifies that the network should bound the difference between the largest and the
smallest delays experienced by the packets belonging to the application. This is important
in playback applications, such as in music or video broadcasting, where the receiver plays
back the stream of frames that arrives from the sender. Ideally, the interval between the display of one frame and the next should be constant. In the
absence of this constant time interval between the display of consecutive frames, the
video can appear jerky, or just unreal. Bounding the delay jitter minimizes the variation
in the delays between packets, and thus allows a smoother playback.
All of the above parameters, and several others such as availability and error and loss
characteristics define the Quality of Service (QoS) received by an application. A network
that can provide these different varieties of service is said to support QoS.
In this section, we briefly discuss several important network architectures and services
that will likely be important components of the future Internet. Some of these
technologies are now in early stages of deployment.
Fairness in Traffic Management
The most basic guarantee that an application desires is that its packets be treated fairly by
the network. Scheduling is the process by which a decision is made on which packet to
forward next for transmission over an output link. Over the last decade, a consensus has
emerged on what is fair in the allocation of shared resources such as bandwidth on a link
among multiple flows. This consensus is based on the following two principles:
• No flow should be allocated more resources than it demands.
• All flows that end up with unsatisfied demands should also end up with equal shares of the resource allocated to them.
The above principles are sometimes also referred to as the max-min principle, since the goal is to maximize the amount allocated to the flow with the smallest (minimum)
allocation. If the traffic of each flow could be divided into packets of infinitesimal size
and then transmitted in a round robin fashion, one would end up with an ideally fair
scheduler. However, in practical systems, all bits of a packet have to be transmitted
together; in addition, different packets are of different lengths. These constraints in real
systems have made it difficult to achieve simple fair scheduling algorithms. Further, it is
necessary for such a scheduling algorithm to be scalable without having to keep track of
thousands of flows that may all be traversing through a router at any given instant.
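The progressive-filling procedure below is one common way to compute a max-min fair allocation. It is a sketch for illustration, with capacity and demands in arbitrary units; it is an offline calculation, not a per-packet scheduling algorithm.

```python
def max_min_fair(capacity, demands):
    """Return a max-min fair allocation: no flow gets more than it demands,
    and all flows with unsatisfied demands end up with equal shares."""
    allocation = [0.0] * len(demands)
    unsatisfied = set(range(len(demands)))
    remaining = capacity
    while unsatisfied and remaining > 1e-12:
        share = remaining / len(unsatisfied)          # split what is left equally
        for i in sorted(unsatisfied):
            grant = min(share, demands[i] - allocation[i])
            allocation[i] += grant
            remaining -= grant
            if allocation[i] >= demands[i]:
                unsatisfied.discard(i)                # this flow is fully satisfied
    return allocation

print(max_min_fair(10, [2, 8, 8]))   # -> roughly [2.0, 4.0, 4.0]
```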
It is interesting that the above notion of fairness among flows of traffic is quite
different from what people consider fair when they queue up for service. For example,
first-come-first-serve (FCFS) is a scheduling algorithm that is very unfair if used in the
forwarding of packets for transmission. This is because it is possible that a rogue flow
suddenly sends far too many packets, significantly increasing the delays experienced by
packets from all other flows that arrive just a little while later. The FCFS discipline was
used in almost all Internet routers and ATM switches until as recently as a few years ago.
Today, however, most router manufacturers support some version of fair queueing.
Over the last decade, a number of proposals have been made to achieve fairness as
described above in the allocation of bandwidth on the links in networks. One of the
earliest such schedulers is Weighted Fair Queueing, which tries to simulate the actions of
the ideally fair scheduler on the side, and then tries to achieve the same order of packet
transmission times as in the ideally fair system. Certain other simpler scheduling
algorithms use a round robin approach while keeping track of how much each flow is
behind other flows in getting its fair share of service and appropriately compensating
each flow during its service opportunity.
Fairness in traffic scheduling has the additional advantage that it enhances the
predictability of the network’s behavior. This helps bound the delays as well, and
therefore, fair queueing algorithms are also used in routers to provide delay guarantees.
It has often been speculated that as the Internet becomes more commercialized, an
economic model based on usage-based pricing will emerge. Fairness in traffic
management will then be an important issue, not just in the scheduling of packets on a
link but also in the allocation of the multitude of other shared resources in the network.
The Integrated Services Framework
In the mid-90's, the IETF proposed an overall QoS architecture for the Internet referred
to as Integrated Services or IntServ. The goal was to provide services to Internet users
beyond just best-effort services, and facilitate end-to-end QoS guarantees for applications
that might need them. The IntServ model requires applications to request resources from
the network through signaling. Each router in the path of the flow is informed of the
traffic characteristics of the flow and its desired service, based upon which the router may
admit the flow through it. Once a flow is accepted, the routers explicitly reserve
bandwidths and buffers for the flow to satisfy the requested QoS. At this point, the
application begins transmission.
The IntServ model defines two new service classes, i.e., two new kinds of service that
an application's traffic may request and receive from the network. These service classes
are the guaranteed service and the controlled load service.
Guaranteed service provides hard guarantees on bandwidth and delay, and therefore,
may be chosen by applications involving interactive multimedia. An application provides
a traffic specification (TSpec) and a service request specification (RSpec), to the network.
The TSpec describes the traffic behavior of the flow in terms of the token bucket
parameters discussed earlier. The specifications may additionally include a peak rate and
the maximum packet size. The RSpec describes the QoS requested by the application in
terms of the service rate, delay and packet loss. In practice, the service rate tends to be the
most important parameter in the guaranteed service RSpec, since it can be used to also
control the delay bound. For example, an application can increase its requested service
rate to achieve a lower delay bound. Guaranteed service requires a separate queue in each
router for each flow using the service. This leads to problems with scalability since per-flow management in the routers with a large number of flows can potentially lead to poor
efficiency in implementation and poor network utilization.
The controlled load service is for applications that can tolerate some delay, but which
work best when the network is not overloaded. These are applications that perform well
under best-effort service in a lightly loaded network, but which degrade significantly in a
heavily loaded network. One-way audio or video communication is a good example of
such an application, where a few missing/delayed packets do not matter as long as most
of the packets arrive in time as in a lightly loaded network. The controlled load service,
just as in guaranteed service, requires an application to specify its traffic characteristics
through TSpec. The routers use admission control and policing to ensure that low delay
and low loss, as in a lightly loaded network, can be provided to the application with the
specified characteristics. The controlled load service class deliberately avoids specifying
precise service requirements in order to avoid per-flow management and thus achieve
lower implementation complexity.
The resource ReSerVation Protocol (RSVP) is the primary signaling protocol
proposed for Integrated Services. This protocol allows applications to signal QoS
requirements to the network, and the network responds with whether or not these
requirements can be satisfied. The RSVP protocol needs to convey to the network a
unique set of identifiers for each flow (including, for example, the sender and receiver
addresses). In addition, it needs to convey to the network the TSpec, the RSpec and the
desired service (guaranteed or controlled load).
RSVP uses two basic message types: Path and Resv messages. The Path messages are
sent by the sender to the receiver or receivers, and include information about the traffic
characteristics. The Resv messages are sent by the receivers and contain information
about the QoS requirements of the receivers. Note that the receiver describes the QoS
requirements rather than the sender, since there may be more than one receiver, each with
its own QoS requirements. The Resv messages travel in the direction opposite to that of
the Path messages, with each router reserving the requested resources. Figure 8 shows
the flow of Path and Resv messages.
Figure 8. Flow of RSVP Path and Resv messages between a sender and three receivers.
RSVP was explicitly designed to support multicast, with one sender and possibly
thousands of receivers which may all have different QoS requirements with different
kinds of connectivity into the Internet. There are at least two ways of accomplishing
multicast operations such as those required by Internet radio stations or broadcast
services. A multicast message can be sent from the sender as a separate message for each
receiver, or messages may be replicated only when they have to, such as when the paths to
two different receivers diverge. The latter method is used in the Internet for obvious
performance and efficiency reasons. In this method, when there are multiple receivers,
the network resources do not need to be reserved separately for each individual receiver.
Receivers only need to share network resources up to the point where the paths to
different receivers diverge. At each point where multiple Resv messages converge, RSVP
merges them into a single Resv message as shown in Figure 8.
The reservation state created by RSVP in each router expires after a certain period of
time, and therefore, the reservations are refreshed approximately every 30 seconds using
Path and Resv messages. When a reservation state is not refreshed within a certain predefined period of time, the state is deleted from the router.
The Differentiated Services Framework
The disadvantage of the Integrated Services model is that a per-flow reservation state has
to be maintained in the routers for the guaranteed service class. The number of flows in
an Internet router can reach thousands, and therefore, maintaining and managing
information on so many flows can require large amounts of storage space and processing
power. The controlled load service class also incurs the overhead of the signaling step
prior to each new transmission. A more scalable option, known as the Differentiated Services (DiffServ) model, was proposed by the IETF.
In the IntServ model, network resources are allocated on a per-flow basis. In the
DiffServ model, however, traffic is divided into a small number of classes and resources
are allocated on a per-class basis. Since the number of classes is small, the class of a
packet can be carried in the packet header itself. This achieves significantly simpler flow
management as compared to the IntServ model, in which each router in the path has to be
signaled in advance about each flow's QoS requirements. The Differentiated Services
model uses a 6-bit field in the packet header to carry information about the QoS
requirements of the packet. The model defines a set of Per-Hop Behaviors (PHBs), and
each packet carries its PHB in this 6-bit field. The PHB of a packet defines the treatment
it should receive from the network as far as QoS is concerned. Unlike in the Integrated
Services model where the router reserves resources on a per flow basis, a router in the
Differentiated Services model reserves resources on a per-PHB basis. With only a few
PHBs defined, the task of the router is significantly simplified. Besides a default PHB,
which is just equivalent to a request for best-effort service, the model defines two other
standard PHBs.
Figure 9. A potential scenario in a Differentiated Services Internet: hosts within the Drexel University DS domain reach the Internet backbone network through a border router and an ISP router, with a service level agreement made on the aggregated rate.
Expedited Forwarding (EF) PHB is a request to forward the packet as quickly as
possible. Expedited forwarding is meant for applications with stringent delay
requirements, such as interactive audio and video. Since EF-PHB packets end up getting
a premium service as far as delay is concerned, they have the potential to significantly
degrade network performance for other traffic. Therefore, EF-PHB traffic has to be very
strictly regulated at the source or at the source organization. EF-PHB packets also require
very careful capacity planning and allocation, so that the total bandwidth required by EF-PHB packets at the router is never more than that reserved for EF-PHB.
Assured Forwarding (AF) PHB, on the other hand, provides a slightly weaker
guarantee and promises to deliver a customer's traffic with high assurance, as long as the
traffic is within the subscribed traffic profile. This standard provides a set of classes and
drop precedence levels. Packets that belong to different classes are to be forwarded
independently, i.e., queued separately. Within each class, the higher the drop precedence,
the more likely that the packet will be dropped or discarded during congestion. Packets
belonging to the same class are always delivered in-order, even though some packets may
be dropped. The drop precedence levels may be set by a traffic regulator close to the
source. For example, excess packets that are transmitted in violation of the subscribed
traffic profile may be marked with a higher drop precedence value.
In DiffServ, complex per-flow processing is moved from the core routers to the edge
of the network, where the number of flows to be managed is smaller. In many proposed
scenarios with DiffServ, per-flow service is replaced by per-organization or per-customer
service, as shown in Figure 9. For example, a customer or an organization such as Drexel
University that wishes to receive differentiated services must first have a service level
agreement with its Internet service provider. This agreement specifies the forwarding
service that the organization should receive including, for example, the token bucket
parameters describing the traffic behavior. Packets sent by Drexel University which do
not conform to the service level agreement regarding traffic behavior may be marked
with a higher drop precedence. For example, all packets sent when there are no tokens in
the token bucket may be so marked. This marking may be done either within the
customer's premises, or by the service provider.
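Tying these ideas together, the following sketch marks packets at the network edge with a drop precedence based on token availability, in the spirit of the example above. The precedence values and the simplified bucket are illustrative assumptions, not behavior mandated by the DiffServ standards.

```python
# A toy edge marker: packets within the agreed traffic profile get a low drop
# precedence, packets sent when the token bucket is empty get a high one.
class EdgeMarker:
    def __init__(self, tokens):
        self.tokens = tokens                     # assume tokens are refilled elsewhere

    def mark(self, packet):
        if self.tokens > 0:
            self.tokens -= 1
            packet['drop_precedence'] = 'low'    # in profile: keep if at all possible
        else:
            packet['drop_precedence'] = 'high'   # out of profile: drop first if congested
        return packet

marker = EdgeMarker(tokens=2)
for i in range(3):
    print(marker.mark({'id': i}))   # the third packet exceeds the profile and is marked high
```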
Multi-Protocol Label Switching (MPLS)
Internet routers today determine how to forward a packet by searching for the route table entry whose prefix is the longest match for the packet's destination address.
In ATM switches, however, a short identifier (virtual circuit identifier) can be used to
directly index into a forwarding table. This allows easier low-cost hardware
implementations, and faster forwarding capability. Multi-Protocol Label Switching
(MPLS) achieves a similar speed and efficiency in forwarding by using a short fixed-length label instead of a 32-bit IP destination address. Packets carry a label which is used
to index into a forwarding table to determine how the packet should be forwarded. Figure
10 shows an example of a simple entry in the forwarding table. Note that the incoming
label may be different from the outgoing label.
IP is the dominant internetworking layer, while ATM is still being widely deployed in
high-speed backbone networks for its low-cost switching capabilities. Overlaying IP on
top of ATM provided an economical solution, and this was the primary motivation in the early part of the work that eventually led to the IETF's establishment of the MPLS working
group to standardize a label-switching method. The efficiency and speed achieved with
label switching allows routers to achieve significantly better price/performance
characteristics. This was an important secondary motivation that led to the MPLS effort.
Incoming Label | Outgoing Label | Outgoing Interface | Next Hop Address
6              | 10             | 2                  | 208.0.39.219
Figure 10. An example entry in the forwarding table
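A toy sketch of the label-swapping step using the entry above follows; the field names and packet representation are illustrative assumptions made only to show how a label indexes directly into the table.

```python
# Each entry maps an incoming label directly to an outgoing label, interface and next hop.
forwarding_table = {
    6: {'out_label': 10, 'out_interface': 2, 'next_hop': '208.0.39.219'},
}

def forward(packet):
    """Swap the label and return the interface and next hop to use."""
    entry = forwarding_table[packet['label']]   # direct index, no longest-prefix match
    packet['label'] = entry['out_label']        # the label is rewritten hop by hop
    return entry['out_interface'], entry['next_hop']

print(forward({'label': 6, 'payload': b'...'}))   # -> (2, '208.0.39.219')
```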
Figure 11. Separation of control and forwarding components: the control component runs routing protocols, exchanges updates with other routers and maintains the routing tables from which the forwarding tables are derived; the forwarding component uses these forwarding tables and the forwarding fabric to forward packets carrying labels.
In addition to its price/performance advantages and its ability to integrate IP and
ATM, label switching was also recognized as a means of achieving greater scalability as
well as providing better flexibility in traffic management. Label switching separates the
control component of a router's task from the forwarding component. The forwarding tables
(which determine the port to which a packet should be forwarded based on its intended
destination) are maintained using routing algorithms and the label distribution and setup
protocols defined as part of MPLS. As shown
in Figure 11, the forwarding component uses the forwarding table derived from the
routing table maintained by the control component, but is otherwise independent of the
control component. This separation allows fast forwarding, independent of the
scalability or other properties of the routing and other control algorithms. The separation
between control and forwarding components also allows flexibility in the independent
development and modification of each of these components as technology evolves.
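One rough way to picture this separation (a sketch with invented class names, not an actual router implementation) is as two objects: a control component that runs routing and label-distribution logic and derives the forwarding table, and a forwarding component that only looks up labels in whatever table it was last given.

class ControlComponent:
    # Runs routing and label distribution protocols (omitted here) and
    # derives a label-indexed forwarding table from that state.
    def __init__(self):
        self.label_bindings = {}   # in_label -> (out_label, out_interface, next_hop)

    def build_forwarding_table(self):
        return dict(self.label_bindings)

class ForwardingComponent:
    # Knows nothing about routing; it only looks up labels in whatever
    # table the control component last installed.
    def __init__(self):
        self.table = {}

    def install(self, table):
        self.table = table

    def forward(self, packet):
        out_label, out_interface, next_hop = self.table[packet["label"]]
        packet["label"] = out_label
        return out_interface, next_hop

control = ControlComponent()
control.label_bindings = {6: (10, 2, "208.0.39.219")}
forwarding = ForwardingComponent()
forwarding.install(control.build_forwarding_table())
print(forwarding.forward({"label": 6}))

In this picture, upgrading or replacing the routing and label-distribution algorithms touches only the control side, as long as it keeps producing forwarding tables in the agreed format.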
Figure 12. (a) Destination-based routing, where flows A and B share the same path through router 4 and create a point of congestion; (b) label-switched routing, where flow A is directed through router 4 and flow B through router 5.
Yet another advantage of label-switching is the ability to control routing explicitly on
a per-flow basis. Routing algorithms in the Internet today use only the destination address
to determine the output link through which the packet should be forwarded. Label
switching, however, makes it possible to route different flows headed to the same
destination along different paths.
Consider a network topology as shown in Figure 12. Consider two flows of traffic:
flow A from router 1 to router 6, and flow B from router 2 also to router 6. If only
destination-based routing is used, packets from both flows arriving at router 3 will be
forwarded along the same one of the two paths available to reach router 6. With both flows
sharing one path, that path may become congested
while the other is under-utilized. This can be corrected with source-based routing,
although adding the entire path of the packet into the packet header consumes more
overhead than is considered tolerable. With label switching, however, it is easy to specify
that one of the flows should reach its destination through router 4 while the other should
go through router 5. In general, explicit per-flow routing made possible by label
switching allows a significantly better utilization of the network topology.
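As a small sketch of this idea (hypothetical label values and field names), an ingress router can bind each flow to a different label-switched path, even though both flows have the same destination:

# Label bindings at the ingress (router 3 in Figure 12); the values are made up.
path_label = {
    "A": 17,   # label-switched path 3 -> 4 -> 6
    "B": 42,   # label-switched path 3 -> 5 -> 6
}

def label_at_ingress(packet):
    # Destination-based routing would treat both flows identically; binding
    # each flow to its own label spreads them over different paths.
    packet["label"] = path_label[packet["flow"]]
    return packet

print(label_at_ingress({"flow": "A", "dst": 6}))
print(label_at_ingress({"flow": "B", "dst": 6}))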
These benefits of MPLS have strongly encouraged the industry to adopt this
technology. Many large Internet service providers today have deployed MPLS in their
networks to allow better traffic engineering and to provide IP services over ATM
backbone networks. Manufacturers of high-end routers have also begun supporting
MPLS. However, it is not yet clear if MPLS will indeed emerge as a fundamentally
important technology in the future Internet.
IV. Concluding Remarks
The technologies underlying the current Internet have been reasonably successful in
sustaining its early exponential growth. This same enormous growth, however, has also
resulted in increased user expectations and has exposed a number of shortcomings in the
current technology. Several of these problems have had to do with providing quality of
service to applications that need it, and with achieving scalability in the management of
traffic. In response to these challenges, novel routing protocols, algorithms, and
architectures are constantly being devised and deployed in experimental backbone
networks today. Routers are the key components of a network and most of these new
developments and experiments involve new router functionality. With a rapidly evolving
Internet infrastructure, one can expect exciting new Internet services to become
available in the near future.
What is not certain, however, is exactly how the Internet will evolve. The original
designers of the Internet believed in the principle of keeping the routers as simple as
possible and pushing all the intelligence into the end-systems. New service models such
as Integrated Services, however, place almost all of the intelligence in the routers and
very little in the end-systems. The Differentiated Services model approaches a
compromise by having some intelligence in the routers and some in the end-systems.
Today, it is not clear which of these or other service models will emerge to dominate the
future Internet, although the DiffServ framework appears to be gaining ground.
Another significant but alternative means of achieving quality of service is through
pricing. Many researchers have advanced the idea of an economic market system for
congestion control and resource management in the Internet. This may involve usage-based pricing, whereby each entity is charged based on the amount of traffic that it
actually generates. Such pricing schemes may be implemented on a per-packet or a
per-flow basis, and many have already been shown to work well in simulation
models. However, the logistics of billing and accounting as well as the associated
security issues are among some of the reasons that have delayed the large-scale adoption
of pricing in the Internet.
An open, and worrisome, question among designers of architectures and systems
for new service models is whether the services required by future applications will fit neatly
into service classes being defined today. New applications are being designed and
developed each day with unique new service requirements. Many of these new service
requirements demand some novel application-specific functionality in the routers. As
rapidly as routers can evolve within the constraints of interoperability requirements, they are
unlikely to be able to keep pace with the development of new applications for the
Internet.
Is there a limit to how much application-specific information can be pushed into the
routers? How can we devise a solution that will allow the Internet infrastructure to
continue to provide the “service-of-the-day”, without massive design and development
efforts? Is it possible that the Internet will evolve to let each user invent and define
whatever service he or she desires, and have the network provide it whenever asked, no questions asked?
Active networking is an exciting new idea that promises such a solution. In an active
network, packets will not only carry data but also carry small pieces of code that will
instruct the router on exactly how it should process the packet. The field of active
networking, however, is still young and has yet to mature beyond research prototypes.
Security challenges associated with letting any user program the Internet are certainly
daunting.
The evolution of the Internet over the last couple of decades was led by a complex set
of factors including technological breakthroughs, strong market forces and the timely
emergence of open standards. No matter how the Internet evolves over the next couple of
decades, engineering the Internet promises to be both challenging and rewarding.
Further Information
WHERE WIZARDS STAY UP LATE: THE ORIGINS OF THE INTERNET. Katie Hafner and Matthew
Lyon, Simon and Schuster, New York, 1998. (An authoritative account of the early
history of the Internet, based on interviews with the scientists and engineers who
pioneered the Internet).
NERDS 2.0.1: A BRIEF HISTORY OF THE INTERNET. Stephen Segaller. TV Books, New York,
1998. (An absorbing anecdotal history).
COMPUTER NETWORKING: A TOP-DOWN APPROACH FEATURING THE INTERNET. James F. Kurose
and Keith W. Ross, 2nd edition, Addison-Wesley, Reading, Massachusetts, 2003. (A
highly readable in-depth treatment of the fundamental principles of networking).
Exercises
1. If you were required to build a network exclusively for communications by fax,
would you choose packet switching or circuit switching?
2. Visit the web site of the IETF, and find out what is an RFC.
3. Routers in the Internet discard packets when there is congestion, which occurs often
enough. The e-mails you send, however, are rarely ever lost. How come?
4. In an output-queued switch, do you think larger output buffers will yield better
performance?
5. In an input-queued switch, can you eliminate head-of-line blocking if you had input
buffers of infinite size?
6. Given a distance-vector routing algorithm and a link-state routing algorithm, which
do you think requires less memory in the router?
7. A computer is connected to a network through a link of bandwidth equivalent to 2
million packets per second. Assume that all packets are of equal length, and each
token corresponds to one packet. A new token is generated every microsecond, and
the bucket is initially filled to capacity with 1 million tokens. How long can the
computer transmit at the full rate of 2 million packets per second?
8. Upon detecting congestion, a TCP sender backs off by reducing its rate. However,
TCP is something that runs on your computer and you are free to modify the software
to continue sending at whatever rate you choose. Will you get better performance
from the network with this modification? How about if the TCP standard itself were
to be modified and everyone used this new version of TCP?
9. You have to distribute 10 candies among 3 kids who you feel all have the same rights
to the candies. The first kid demands to have 3 candies, the second demands 5 and the
third demands 8. If you were to use the notion of fairness used in the Internet, how
would you distribute the candies?
10. An application using guaranteed service in the Integrated Services model specifies a
TSpec consisting of token bucket parameters, and an RSpec in terms of the bandwidth
alone. Can this application receive a guaranteed delay bound? Explain how.
11. Why do you think RSVP allows its reservation states to expire unless refreshed
periodically? Explain your answer.
12. Why do you think RSVP was explicitly designed to support multicast?
13. While transmitting compressed audio and video, some packets are much more
important to the quality of the playback than some other packets. Explain how you
will exploit this in a Differentiated Services Internet.
14. If every flow’s traffic was extremely smooth (without any sudden increase or
decrease in the sending rate), which do you think is a better model, Integrated
Services or Differentiated Services?
15. Since MPLS allows explicit routing, can you use this feature to provide QoS to a flow
by always selecting a route that will meet the requirements? Explain your answer
with arguments.
16. All packets belonging to the same flow can use the same label through the entire path
of the network, which can certainly offer some convenience in traffic management.
Speculate on why MPLS does not require that the outgoing label be the same as the
incoming label.