Design and Implementation of MPLS Network Simulator
Supporting LDP and CR-LDP
Gaeil Ahn and Woojik Chun
Department of Computer Engineering, Chungnam National University, Korea
{fog1, chun}@ce.cnu.ac.kr
Abstract
The explosive growth of the Internet and the advent of
sophisticated services call for fundamental changes in the
network, and MPLS was proposed as one alternative. Many
efforts and activities on MPLS are already under way, which
prompts the need for an MPLS simulator that can evaluate and
analyze newly proposed MPLS techniques. This paper describes
the design, implementation, and capability of an MPLS
simulator that supports the label swapping operation, LDP,
CR-LDP, and various label distribution options. It enables
researchers to simulate how an LSP is established and
terminated, and how labeled packets behave on the LSP.
To show the MPLS simulator's capability, the basic MPLS
functions defined in the MPLS standards are simulated:
label distribution schemes, flow aggregation, ER-LSP, and
LSP Tunnel. The results are evaluated and analyzed, and
their behaviors are shown graphically.
1. Introduction
Since the Internet was opened to commercial traffic in
1992, it has grown rapidly from an experimental research
network to an extensive public data network. This has
increased traffic volume geometrically and makes it more
difficult to support quality of service (QoS) on the
Internet. To address this problem, two research areas in
particular have been pursued: traffic engineering and
high-speed packet forwarding.
First, traffic engineering aims to transport traffic through
a given network in the most efficient, reliable, and
expeditious manner possible. However, it is impossible to
apply traffic engineering to today's Internet because of its
conventional hop-by-hop, packet-by-packet routing scheme.
Second, high-speed packet forwarding aims to make a
router forward packets at high speed. Several techniques
have been proposed to support it: gigabit routers [10],
high-speed IP routing-table lookup with longest-prefix
matching [6], and L2 packet switching [2].
L2 packet switching was proposed in several forms: Ipsilon's
IP Switching, Cisco's Tag Switching, IBM's Aggregate
Route-Based IP Switching (ARIS), and Toshiba's Cell Switch
Router (CSR). A working group of the Internet Engineering
Task Force (IETF) created Multiprotocol Label Switching
(MPLS) [4][7] by merging those approaches in 1997.
The defining characteristic of MPLS, LSP setup prior to
packet transmission, makes it possible to deliver various
new services that cannot be supported in the conventional
Internet. MPLS has prompted active research on application
areas such as traffic engineering and VPNs. However, these
newly proposed techniques cannot be deployed and
experimented with on real MPLS networks; instead, they must
be tested in a simulated environment. For example,
Ashwood-Smith proposed a feedback scheme that establishes
CR-LSPs based on bandwidth [3]; its performance should be
evaluated in a real MPLS network composed of about 1000
LSRs. In most cases, constructing an MPLS network with a
reasonable number of LSRs is impossible, or requires high
cost and a long time even when possible. This is why a
simulator is indispensable in MPLS-related research.
This paper proposes an MPLS simulator that supports the
label swapping operation, LDP, CR-LDP, and the various
label distribution options defined in the MPLS standards.
To show the MPLS simulator's usefulness, we simulate the
basic MPLS functions defined in the MPLS standards: label
allocation schemes, LSP trigger strategies, and label
distribution control modes. As a result, we show the LSP
establishment/termination time for each scheme. We also
simulate core techniques required in the traffic engineering
area, namely flow aggregation, ER-LSP, and LSP Tunnel, and
show how labeled packets behave on the established LSPs.
The rest of this paper is organized as follows. Section 2
overviews the key concepts of MPLS, as well as NS and NAM,
which are used to implement the MPLS simulator. Section 3
describes the design, architecture, and capability of the
MPLS simulator. The MPLS functions defined in the MPLS
standards are simulated and evaluated in Section 4, and
conclusions are given in Section 5.
2. Related Study
2.1. MPLS
In the IP protocol, each router chooses a next hop by
partitioning packets into Forwarding Equivalence Classes
(FECs) and mapping each FEC to a next hop. In MPLS, the
assignment of a particular packet to a particular FEC is
done just once, as the packet enters the network.
The Proceedings of the IEEE International Conference on Networks (ICON'00)
0-7695-0777-8/00 $10.00 @ 2000 IEEE
The FEC to which the packet is assigned is encoded with a
short fixed length value known as a label. When a packet
is forwarded to its next hop, the label is sent along with it;
that is, the packets are labeled. A labeled packet can be
forwarded directly to its next hop because its label is used
as an index into a table that specifies the next hop.
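As a sketch of this idea (illustrative Python, not code from the simulator; the table entries are hypothetical), a label table maps an incoming label directly to an outgoing label and next hop, so forwarding becomes a single exact-match lookup instead of a longest-prefix match:

```python
# Illustrative sketch only: a label table maps an incoming label directly to
# the outgoing label and next hop, so a labeled packet is forwarded with one
# exact-match lookup instead of an IP longest-prefix match.
label_table = {
    17: (42, "LSR-B"),   # hypothetical entries: in-label -> (out-label, next hop)
    18: (43, "LSR-C"),
}

def forward_labeled(packet_label):
    out_label, next_hop = label_table[packet_label]  # label used as table index
    return out_label, next_hop                       # swap label, send to next hop

print(forward_labeled(17))  # -> (42, 'LSR-B')
```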
The Label Distribution Protocol (LDP) [5] is the set of
procedures and messages by which Label Switching Routers
(LSRs) establish Label Switched Paths (LSPs) through a
network by mapping network-layer routing information
directly to data-link-layer switched paths. LDP offers
several options for the label allocation scheme, LSP
trigger strategy, label distribution control mode, and
label retention mode.
First, according to which side allocates labels, label
allocation schemes are divided into upstream,
upstream-on-demand, downstream, and downstream-on-demand.
According to when an LSR triggers LSP establishment, LSP
trigger strategies are divided into data-driven,
control-driven, and request-driven. According to whether an
LSR may send a Mapping message to its neighboring LSRs
whenever it wishes, label distribution control modes are
divided into independent and ordered. Finally, according to
whether an LSR may keep label information received from its
neighboring LSRs, label retention modes are divided into
liberal and conservative. Label retention modes are
meaningful only in downstream label distribution mode.
Constraint-based Routing LDP (CR-LDP) [1] supports
explicitly routed LSP (ER-LSP) setup based on
constraint-based routing. It provides an end-to-end setup
mechanism for a constraint-based routed LSP (CR-LSP)
initiated by the ingress LSR, along with mechanisms for
the reservation of resources.
2.2. NS and NAM
The proposed MPLS simulator has been implemented by
extending the Network Simulator (NS) [8]. NS began as a
variant of the REAL network simulator implemented by UC
Berkeley in 1989; NS version 2 is now available as a result
of the VINT project. NS consists of an event scheduler and
IP-based network components, and is written in both OTcl
and C++. C++ is used for detailed protocol implementation,
such as packet handling and state-information management,
while OTcl is used for simulation configuration, such as
event scheduling.
The simulation results from NS may be viewed with a
graphical user interface (GUI) called the Network Animator
(NAM) [9]. NAM is a Tcl/Tk-based animation tool for viewing
network simulation traces and real-world packet traces. It
supports topology layout, packet-level animation, and
various data inspection tools.
3. Overview of MPLS Network Simulator
3.1. Purpose and Scope
The primary purpose of this work is to develop a
simulator that supports various MPLS applications without
constructing a real MPLS network. The MPLS simulator is
designed with the following considerations:
- Extensibility -- faithfully follow the object-oriented
concept so as to support various sorts of MPLS applications.
- Usability -- design so that users may easily learn and use it.
- Portability -- minimize modification of the NS code so as
not to be tied to a specific NS version.
- Reusability -- design so as to aid in developing a real
LSR switch.
The implementation scope of the MPLS simulator is as
follows:
- MPLS packet switching -- label operations (push, swap,
pop), TTL decrement, and penultimate hop popping
- LDP -- handling LDP messages (Request, Mapping, Withdraw,
Release, and Notification)
- CR-LDP -- handling CR-LDP messages
The capability of the MPLS simulator for setting up LSPs is
as follows:
- LSP trigger strategy -- supports control-driven and
data-driven triggers.
- Label allocation and distribution scheme -- supports only
the downstream scheme with the control-driven trigger, and
both the upstream and downstream-on-demand schemes with the
data-driven trigger.
- Label distribution control mode -- supports only
independent mode with the control-driven trigger, and both
independent and ordered modes with the data-driven trigger.
- Label retention mode -- supports only conservative mode.
- ER-LSP based on CR-LDP -- established based on path
information pre-defined by the user.
- Flow aggregation -- aggregates fine flows into a coarse
flow.
3.2. Architecture of MPLS Node
The MPLS simulator has been implemented by extending NS,
an IP-based simulator. In NS, a node consists of agents and
classifiers. An agent is the sender/receiver object of a
protocol, and a classifier is the object responsible for
packet classification. To build an MPLS node, an 'MPLS
Classifier' and an 'LDP agent' are inserted into the IP
node.
[Figure omitted: block diagram of an MPLS node. A packet
arriving from a link enters at the node entry ('entry_'),
which points to the 'MPLS Classifier'. Labeled packets are
looked up in the PFT and LIB tables (FEC, PHB, LIBptr;
incoming/outgoing interface and label) and pushed, swapped,
or popped before L2 switching to another node; unlabeled
packets go to the 'Addr Classifier' for L3 forwarding and
then to the 'Port Classifier' and agents. The 'LDPAgent' is
reachable through 'ldp_agents_', and the classifier through
'classifier_'.]
Figure 1: Architecture of MPLS Node
Figure 1 shows the architecture of an MPLS node in the
MPLS simulator. The 'Node entry' is the entry point for a
node: the first element that handles packets arriving at
the node. The Node instance variable 'entry_' stores a
reference to the 'MPLS Classifier', which first classifies
an incoming packet as labeled or unlabeled. The 'MPLS
Classifier' also executes the label operation and L2
switching for labeled packets; L2 switching means sending
the labeled packet directly to its next hop. The MPLS Node
instance variable 'classifier_' contains the reference to
the 'MPLS Classifier'. The 'Addr Classifier' is responsible
for L3 forwarding based on the packet's destination
address, and the 'Port Classifier' is responsible for agent
selection. The MPLS Node instance variable 'ldp_agents_'
contains references to the 'LDPAgent' objects that handle
LDP messages.
On receiving a packet, an MPLS node operates as follows:
1. The 'MPLS Classifier' determines whether the received
packet is labeled or unlabeled. If it is labeled, the
'MPLS Classifier' executes the label operation and L2
switching for the packet. If it is unlabeled but an LSP
for it exists, it is handled like a labeled packet.
Otherwise, the 'MPLS Classifier' sends it to the 'Addr
Classifier'.
2. The 'Addr Classifier' executes L3 forwarding for the packet.
3. If the packet's next hop is the node itself, the packet
is sent to the 'Port Classifier'.
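The dispatch above can be sketched as follows (a minimal Python illustration; the simulator itself implements this in C++, and the names here are hypothetical):

```python
# Hedged sketch of the MPLS Classifier's dispatch decision (illustrative
# names, not the simulator's classes). Labeled packets, and unlabeled packets
# whose FEC already has an LSP, go through label operation + L2 switching;
# everything else falls back to ordinary L3 forwarding.
lsp_for_fec = {9: "LSP-9"}          # hypothetical: FECs with an established LSP

def classify(packet):
    if packet.get("label") is not None:
        return "L2-switch"           # labeled -> label operation + L2 switching
    if packet["fec"] in lsp_for_fec:
        return "L2-switch"           # unlabeled but an LSP exists -> treat as labeled
    return "L3-forward"              # otherwise -> Addr Classifier, L3 forwarding

print(classify({"label": 17, "fec": 9}))    # -> L2-switch
print(classify({"label": None, "fec": 6}))  # -> L3-forward
```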
An MPLS node has three tables to manage LSP-related
information: the Partial Forwarding Table (PFT), the Label
Information Base (LIB), and the Explicit Routing
information Base (ERB). The PFT is a subset of the
forwarding table and consists of FEC, PHB (Per-Hop
Behavior), and LIBptr fields. When LSPs/ER-LSPs are
established, the LIB holds information for the established
LSPs, and the ERB holds information for the established
ER-LSPs. Figure 2 shows the simple algorithm and the
structure of the tables for MPLS packet forwarding at the
MPLS Classifier. The LIBptr in each table is a pointer to a
LIB entry.
Figure 2: Simple algorithm and structure of tables for
MPLS packet switching at MPLS Classifier
The lookup of the PFT/LIB tables is initiated when an MPLS
node receives a packet. For an unlabeled packet, the MPLS
Classifier identifies the PFT entry by using the packet's
FEC as an index into the PFT. If the LIBptr of the PFT
entry is null, the node performs L3 forwarding. Otherwise,
it performs a label push operation: it pushes onto the
packet the outgoing label of the LIB entry pointed to by
the PFT entry's LIBptr. A label stack operation may follow,
in which the push is repeated until the LIBptr of the
current LIB entry is null. Finally, the node performs L2
switching: the packet is forwarded directly to the next
node indicated by the outgoing interface of the LIB entry.
For a labeled packet, the MPLS Classifier identifies the
LIB entry by using the packet's label as an index into the
LIB. It then performs a label swap operation, replacing the
packet's label with the outgoing label of the LIB entry. If
the outgoing label is the null label, which signals
penultimate hop popping, it performs a label pop operation
instead of a swap. It may then perform the label stack
operation, and finally performs L2 switching.
The ERB does not participate in packet forwarding. To map a
new flow onto a previously established ER-LSP, a new PFT
entry with the same LIBptr as the ERB entry is inserted
into the PFT.
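A minimal sketch of the PFT/LIB interplay described above (illustrative Python with hypothetical entries, not the simulator's data structures; chained LIBptr fields model the label stack, one push per chain hop):

```python
# Sketch of the ingress-side tables from Section 3.2 (hypothetical entries).
# PFT maps a FEC to a LIB entry; following the LIBptr chain repeats the push
# until the chain ends, building the label stack, then L2-switches the packet.
LIB = [
    {"out_if": "to-LSR5", "out_label": 12, "libptr": 1},     # entry 0: inner label
    {"out_if": "to-LSR5", "out_label": 30, "libptr": None},  # entry 1: outer label
]
PFT = {9: {"phb": "default", "libptr": 0}}   # FEC 9 -> LIB entry 0

def ingress_push(fec):
    entry = PFT.get(fec)
    if entry is None:
        return None, []              # LIBptr null / no entry: fall back to L3
    stack, idx, out_if = [], entry["libptr"], None
    while idx is not None:           # repeat the push until LIBptr is null
        lib = LIB[idx]
        stack.append(lib["out_label"])
        out_if, idx = lib["out_if"], lib["libptr"]
    return out_if, stack             # L2-switch out of this interface

print(ingress_push(9))   # -> ('to-LSR5', [12, 30])
```

Binding a new flow to an existing ER-LSP then amounts to inserting a PFT entry whose LIBptr equals that of the corresponding ERB entry.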
3.3. APIs for LDP and CR-LDP
When an MPLS node receives an LDP message, it must handle
the message, select the next node, create a new LDP
message, and send it through the LDP agent attached toward
the next node. The sequence of API invocations for this is
shown in Figure 3. When an LDP agent receives an LDP
message, its API get-*-msg is called. After handling the
message, the LDP agent calls the MPLS node API
ldp-trigger-by-*, which selects the next node to receive an
LDP message. Once the next node is determined, the MPLS
node calls send-*-msg on the LDP agent that can communicate
with the agent attached to the next node. The invoked LDP
agent creates a new LDP message and sends it to the next
node.
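The invocation chain above can be sketched as follows (Python stand-ins for the OTcl/C++ APIs; the message fields and the next-hop rule are assumptions for illustration):

```python
# Sketch of the get-*-msg -> ldp-trigger-by-* -> send-*-msg chain. A received
# Request is handled by the agent, the MPLS node selects the next hop, and the
# agent toward that hop builds and sends a new message. All names/fields here
# are illustrative stand-ins, not the simulator's real APIs.
class LDPAgent:
    def __init__(self, node, peer):
        self.node, self.peer, self.sent = node, peer, []

    def get_request_msg(self, msg):          # called when an LDP message arrives
        self.node.ldp_trigger_by_data(msg)   # hand the message to the MPLS node

    def send_request_msg(self, msg):         # create and send a new LDP message
        self.sent.append(dict(msg, hop=self.peer))

class MPLSNode:
    def __init__(self, name):
        self.name, self.agents = name, {}

    def ldp_trigger_by_data(self, msg):      # select the next node for the message
        next_hop = msg["route"].pop(0)       # hypothetical next-hop selection
        self.agents[next_hop].send_request_msg(msg)

node = MPLSNode("LSR2")
agent_to_lsr5 = LDPAgent(node, "LSR5")
node.agents["LSR5"] = agent_to_lsr5
agent_to_lsr5.get_request_msg({"fec": 9, "route": ["LSR5"]})
print(agent_to_lsr5.sent)   # one forwarded Request toward LSR5
```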
Figure 3 summarizes the API invocation sequence. When an
LDP message arrives, one of the LDPAgent APIs
get-request-msg, get-cr-request-msg, get-mapping-msg,
get-cr-mapping-msg, get-notification-msg, get-withdraw-msg,
or get-release-msg is called. The MPLSNode APIs
ldp-trigger-by-control, ldp-trigger-by-data,
ldp-trigger-by-explicit-route, ldp-trigger-by-withdraw, and
ldp-trigger-by-release select the next node, and the
LDPAgent APIs send-request-msg, send-cr-request-msg,
send-mapping-msg, send-cr-mapping-msg, send-notification-msg,
send-withdraw-msg, and send-release-msg create and transmit
the new LDP message.
Figure 3: API Invocation for handling LDP
3.4. API for Creating MPLS network
The following APIs are defined for creating an MPLS network:
- MPLSnode -- create a new MPLS node
- configure-ldp-on-all-mpls-nodes -- attach LDP agents to
all MPLS nodes
- enable-control-driven -- let LSRs operate with a
control-driven trigger
- enable-traffic-driven -- let LSRs operate with a
traffic-driven trigger
- enable-on-demand -- let LSRs operate in on-demand label
allocation mode
- enable-ordered-control -- let LSRs operate in ordered mode
- make-explicit-route -- establish an ER-LSP
- flow-erlsp-binding -- map a flow onto an established ER-LSP
- flow-aggregation -- aggregate fine flows into a coarse flow
- trace-mpls -- trace MPLS packets
- trace-ldp -- trace LDP packets
When the 'trace-mpls' API is used, an example of the trace
result appears as in Figure 4.
0.1796 1: 0->8 U -1 Push(ingress)    3 1 32 4
0.1912 3: 0->8 L  1 Swap             5 1 31 4
0.1948 5: 0->8 L  1 Pop(penultimate) 7 0 30 0
Figure 4: Example of MPLS Packet Trace
The first field indicates the simulated time (in seconds)
at which each event occurred. The next field indicates the
address of the node that processes the packet. The next two
fields indicate the packet's source and destination node
addresses. The next field indicates whether the received
packet is unlabeled (U) or labeled (L). The next field is
the incoming-label value. The next field represents the
label operation: Push, Pop, or Swap. The subsequent two
fields indicate the packet's outgoing interface and
outgoing label. The last two fields indicate the shim
header's TTL and size.
3.5. Implementation Environment
The MPLS simulator has been implemented on a Sun Unix
system by extending ns-2.1b5, NS version 2.1.
4. Example: Basic MPLS Function Simulation
4.1. Experiment Environment
Figure 5 shows the experiment environment for the
simulation of the basic MPLS functions.
[Figure omitted: an MPLS domain of LSR2 through LSR8, with
LSR2 as the ingress LSR and LSR7 and LSR8 as egress LSRs.
IP nodes Node0 (Src0) and Node1 (Src1) attach to LSR2;
Node9 (Dst0) attaches beyond LSR7 and Node10 (Dst1) beyond
LSR8. Arrows mark the direction of packet forwarding based
on the shortest-path scheme.]
Figure 5: An Example of MPLS Networks
In Figure 5, Node0, Node1, Node9, and Node10 are IP nodes,
and the others are MPLS nodes. The Src0 agent attached to
Node0 sends packets toward the Dst0 agent attached to
Node9; the Src1 agent attached to Node1 sends packets
toward the Dst1 agent attached to Node10. Under
shortest-path packet forwarding, packets from Src0 are
delivered along LSR 2-5-6-7, and packets from Src1 along
LSR 2-3-4-8.
Figure 6 shows the Tcl code that constructs the MPLS
network described in Figure 5. Each pair of nodes is
connected by a duplex link with 1 Mbit bandwidth, a 10 ms
delay, and a DropTail queue. The Src0 agent generates
500-byte packets toward the Dst0 agent every 0.01 seconds,
and the Src1 agent does the same toward the Dst1 agent.
All LSRs operate with a control-driven trigger.
In the MPLS network of Figure 5, the following MPLS
functions are simulated:
- Control-driven trigger
- Data-driven trigger
- Flow Aggregation
- Establishment of ER-LSP using CR-LDP
- LSP Tunnel
set ns [new Simulator]
# make IP nodes & MPLS nodes
set Node0 [$ns node]
set Node1 [$ns node]
set LSR2 [$ns MPLSnode]
set LSR3 [$ns MPLSnode]
............
# connect nodes
$ns duplex-link $Node0 $LSR2 1Mb 10ms \
DropTail
$ns duplex-link $Node1 $LSR2 1Mb 10ms \
DropTail
..............
# create traffic source
set Src0 [new Agent/CBR]
set Src1 [new Agent/CBR]
..............
# create traffic sink
set Dst0 [new Agent/Null]
set Dst1 [new Agent/Null]
..............
# connect two agents
$ns connect $Src0 $Dst0
$ns connect $Src1 $Dst1
# create LDP agents on all MPLSnodes
$ns configure-ldp-on-all-mpls-nodes
# set LSP establishment option
$ns enable-control-driven
# trace MPLS and LDP packets
$ns trace-mpls
$ns trace-ldp
Figure 6: Code for Figure 5
$ns at 0.1 "$Src0 start"
$ns at 0.1 "$Src1 start"
$ns at 0.2 "$LSR7 ldp-trigger-by-withdraw 9"
$ns at 0.2 "$LSR8 ldp-trigger-by-withdraw 10"
$ns at 0.3 "$LSR2 flow-aggregation 9 6"
$ns at 0.3 "$LSR2 flow-aggregation 10 6"
$ns at 0.5 "$LSR6 ldp-trigger-by-withdraw 6"
$ns at 0.6 "$Src1 stop"
$ns at 0.7 "$LSR2 make-explicit-route 7 5_4_8_6_7 3000"
$ns at 0.9 "$LSR2 flow-erlsp-install 9 -1 3000"
$ns at 1.1 "$LSR2 ldp-trigger-by-release 3000"
$ns at 1.2 "$LSR4 make-explicit-route 8 4_5_6_8 3500"
$ns at 1.3 "$LSR2 make-explicit-route 7 2_3_4_3500_7 3600"
$ns at 1.4 "$LSR2 flow-erlsp-install 9 -1 3600"
$ns at 1.6 "$LSR2 ldp-trigger-by-release 3600"
$ns at 1.7 "$LSR4 ldp-trigger-by-release 3500"
$ns at 1.8 "$LSR2 enable-data-driven"
$ns at 2.0 "$Src0 stop"
Figure 7: Simulation Code
Figure 7 shows the code for event scheduling in this
simulation. First, at 0.1 seconds, Src0 starts sending
packets to Dst0, and Src1 to Dst1. At 0.2 seconds, LSR7,
the egress LSR for FEC 9, terminates the LSP for FEC 9, and
LSR8, the egress LSR for FEC 10, terminates the LSP for
FEC 10. At 0.3 seconds, the flows of FEC 9 and FEC 10 are
aggregated into a flow of FEC 6 at LSR2. At 0.5 seconds,
the LSP for FEC 6 is terminated, and Src1 stops generating
packets at 0.6 seconds.
Subsequently, at 0.7 seconds, an ER-LSP with LSPID 3000 is
established between LSR2 and LSR7 through LSR 5-4-8-6. At
0.9 seconds, the flow of FEC 9 is bound to the established
ER-LSP. At 1.1 seconds, the ER-LSP is terminated with an
LDP Release message.
Then, at 1.2 seconds, an ER-LSP Tunnel with LSPID 3500 is
established between LSR4 and LSR8 through LSR5 and LSR6. An
ER-LSP with LSPID 3600 is also established between LSR2 and
LSR7 through LSR 3-4-3500 at 1.3 seconds. The 3500 in the
specified explicit route is an LSPID, used to identify the
tunnel ingress point as a next hop; this allows the new
ER-LSP (LSPID 3600) to be stacked within the already
established LSP Tunnel (LSPID 3500). At 1.4 seconds, the
flow of FEC 9 is bound to the established ER-LSP. The
ER-LSP (LSPID 3600) is terminated at 1.6 seconds, and the
LSP Tunnel (LSPID 3500) at 1.7 seconds. Finally, at 1.8
seconds, LSR2 begins to operate with a data-driven trigger,
and Src0 stops generating packets at 2.0 seconds.
4.2. Experiment Results
Figure 8 shows snapshots produced when the simulation
result generated by the MPLS simulator is played back with
NAM.
Figure 8-a shows the initial simulated network. Figure 8-b
shows a snapshot at 0.01 seconds, when LDP Mapping messages
are used to distribute labels based on the control-driven
trigger; as a result, every possible LSP in the MPLS
network is established. Figure 8-c is a snapshot at 0.34
seconds, when the flows of FEC 9 and FEC 10 were aggregated
into a flow of FEC 6.
Subsequently, Figure 8-d shows a snapshot at 0.70 seconds,
when the CR-LDP Request message initiated by LSR2 is
delivered along LSR 5-4-8-6 in order to create an ER-LSP
between LSR2 and LSR7. Figure 8-e shows a snapshot at 0.76
seconds, when the CR-LDP Mapping message is sent by LSR7 in
response to the Request message initiated by LSR2. Figure
8-f shows a snapshot at 0.97 seconds, when packets are
forwarded along the ER-LSP created through the steps shown
in Figures 8-d and 8-e.
Finally, Figure 8-g shows a snapshot at 1.49 seconds, when
traffic is forwarded along an established ER-LSP that
includes an already established LSP Tunnel. The steps that
establish the LSP Tunnel and the ER-LSP are not shown in
Figure 8 because they are similar to those of the ER-LSP in
Figures 8-d and 8-e. LSR4 is the tunnel ingress point and
LSR8 is the tunnel egress point. When a packet is forwarded
along this ER-LSP, the trace appears as in Figure 9: a
label push operation at LSR2, a label swap at LSR3 and
LSR4, a label push at LSR4, a label swap at LSR5, and a
label pop at LSR6 and LSR8.
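Modeling only the label stack, the operation sequence above can be traced as follows (a sketch using the label values from the Figure 9 trace; the top of the stack is the last element, and the per-hop tables are not modeled):

```python
# Walk-through of the push/swap/push(tunnel)/swap/pop/pop sequence for a
# packet traversing the ER-LSP that includes the LSP Tunnel. Label values
# follow the Figure 9 trace; this models the stack only, not the tables.
stack = []
stack.append(11)   # LSR2: push(ingress)               -> [11]
stack[-1] = 12     # LSR3: swap                        -> [12]
stack[-1] = 11     # LSR4: swap for the outer ER-LSP   -> [11]
stack.append(12)   # LSR4: push(tunnel)                -> [11, 12]
stack[-1] = 12     # LSR5: swap inside the tunnel      -> [11, 12]
stack.pop()        # LSR6: pop (tunnel penultimate)    -> [11]
stack.pop()        # LSR8: pop (ER-LSP penultimate)    -> []
print(stack)       # LSR7 then performs ordinary L3 forwarding: []
```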
(a) At 0.0 seconds: Initiation
(b) At 0.01: Label Distribution based on control-driven
trigger (LDP Mapping Msgs)
(c) At 0.34: Flow Aggregation
(d) At 0.70: CR-LDP Request Message for an ER-LSP
(e) At 0.76: CR-LDP Mapping Message for an ER-LSP
(f) At 0.97: Traffic flow on the established ER-LSP
(g) At 1.49: Traffic flow on the established LSP Tunnel
(LSR4 is the Tunnel Ingress point, LSR8 the Tunnel Egress point)
Figure 8: Simulation Results viewed with NAM (screenshots
omitted; panel titles listed above)
1.434000 2(0->9): U -1 Push(ingress)     3 11 32 4
1.448000 3(0->9): L 11 Swap              4 12 31 4
1.462000 4(0->9): L 12 Swap              8 11 30 4
1.462000 4(0->9): L 11 Push(tunnel)      5 12 32 8
1.476000 5(0->9): L 12 Swap              6 12 31 8
1.490000 6(0->9): L 12 Pop(penultimate)  8  0 30 4
1.504000 8(0->9): L 11 Pop(penultimate)  7  0 29 0
1.518000 7(0->9): U -1 L3               -1 -1 -1 0
Figure 9: MPLS Packet Trace for Figure 8-g
Table 1 shows the time taken to establish or terminate an
LSP under the various label distribution schemes defined in
the MPLS standards. It also demonstrates that a simulation
result can be analyzed with numerical values as well as
with the GUI shown in Figure 8.
Table 1: LSP Establishment/Termination Time
Operation                   Trigger Scheme   LDP Scheme                           Time (ms)
LSP Setup for all FECs      Control-driven   downstream                           43
LSP Setup for FEC 9         Data-driven      downstream-on-demand (independent)   56
                                             downstream-on-demand (ordered)       77
                                             upstream                             56
LSP Termination for FEC 9   Request-driven   LDP Withdraw message                 42
                                             LDP Release message                  31
5. Conclusion
This paper has described the design, implementation, and
capability of an MPLS simulator. The proposed simulator
helps researchers simulate and evaluate their MPLS-related
techniques. For example, it can easily be applied to the
traffic engineering area through functions such as the
establishment of ER-LSPs and LSP Tunnels, verified in this
paper.
The simulator is still at an early stage; many capabilities
remain to be added and extended, such as RSVP extensions
for MPLS and QoS support on each MPLS node.
6. References
[1] B. Jamoussi, "Constraint-Based LSP Setup using LDP,"
Internet Draft, Oct. 1999.
[2] B. Davie, P. Doolan, and Y. Rekhter, Switching in IP
Networks: IP Switching, Tag Switching, and Related
Technologies, Morgan Kaufmann Publishers, 1998.
[3] D. Fedyk, P. Ashwood-Smith, and D. Skalecki, "Improving
Topology Data Base Accuracy with LSP Feedback via CR-LDP,"
Internet Draft, Oct. 1999.
[4] E. C. Rosen, A. Viswanathan, and R. Callon,
"Multiprotocol Label Switching Architecture," Internet
Draft, April 1999.
[5] L. Andersson et al., "LDP Specification," Internet
Draft, June 1999.
[6] M. Waldvogel, G. Varghese, J. Turner, and B. Plattner,
"Scalable High Speed IP Routing Lookups," ACM Computer
Communication Review, vol. 27, no. 4, Oct. 1997, pp. 25-36.
[7] R. Callon et al., "A Framework for Multiprotocol Label
Switching," Internet Draft, Sep. 1999.
[8] UCB/LBNL/VINT Network Simulator, ns,
http://www-mash.cs.berkeley.edu/ns.
[9] UCB/LBNL/VINT Network Animator, nam,
http://www-mash.cs.berkeley.edu/nam.
[10] V. P. Kumar, T. V. Lakshman, and D. Stiliadis, "Beyond
Best Effort: Router Architectures for the Differentiated
Services of Tomorrow's Internet," IEEE Communications
Magazine, May 1998.