ATM NETWORKS

CHAPTER 6:
PNNI (Private Network Node Interface or
Private Network-to-Network Interface)
PNNI is a switch-to-switch protocol developed within
the ATM Forum to support efficient, dynamic and
scalable routing of SVC requests in a multi-vendor
private ATM environment.
Internet Routing Protocols (Overview)
IP finds a route on a per-packet basis: each packet carries the end-system
address, and each router picks the next hop.
• A Protocol is run by routers in Internet to update routing
tables.
• Routing tables are updated automatically on a topology
change, e.g., a node failure will be recognized and avoided.
Internet Routing Protocols (Ctd)
Two well-known approaches (Two Religions)
1. Distance Vector Routing Protocols (Distributed)
• Based on the Bellman-Ford shortest path algorithm (distributed version).
• Each router maintains the best-known distance to each destination, and the
next hop, in its routing table.
• Each router periodically communicates its best-known distance to each
destination to all neighbor routers.
(May take a long time in a large network!!!)
• Routers update their distances based on the new information.
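A minimal sketch of the distance-vector update rule in Python (the topology, link costs, and names here are made up for illustration, not part of any routing standard):

# Distance-vector (Bellman-Ford) update: a router keeps (distance,
# next_hop) per destination and merges each vector a neighbor advertises.
def merge_vector(table, neighbor, link_cost, neighbor_vector):
    changed = False
    for dest, dist in neighbor_vector.items():
        candidate = link_cost + dist
        if dest not in table or candidate < table[dest][0]:
            table[dest] = (candidate, neighbor)  # shorter path via this neighbor
            changed = True
    return changed  # if True, re-advertise our own vector to neighbors

# Router A hears from neighbor B (link cost 1) about destinations C and D.
table = {"B": (1, "B")}
merge_vector(table, "B", 1, {"C": 2, "D": 5})
print(table)  # {'B': (1, 'B'), 'C': (3, 'B'), 'D': (6, 'B')}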
Internet Routing Protocols (Ctd)
2. Link-State (Topology Broadcast) Routing
Protocols (Centralized)
• Each router broadcasts topology information (e.g., link states) to all
routers.
• Each router independently computes exact shortest paths using a
centralized algorithm.
• Each router then builds a NETWORK MAP referred to as the
LINK STATE DATABASE.
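The "centralized algorithm" run over the link state database is classically Dijkstra's shortest path algorithm (the same algorithm PNNI's DTL computation uses, as discussed later). A small self-contained sketch over a hypothetical four-switch topology:

import heapq

def dijkstra(lsdb, source):
    # lsdb: {node: {neighbor: link_cost}} -- the NETWORK MAP
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, cost in lsdb[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

lsdb = {"S1": {"S2": 1, "S3": 4},
        "S2": {"S1": 1, "S3": 1, "S4": 5},
        "S3": {"S1": 4, "S2": 1, "S4": 1},
        "S4": {"S2": 5, "S3": 1}}
print(dijkstra(lsdb, "S1"))  # {'S1': 0, 'S2': 1, 'S3': 2, 'S4': 3}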
ATM Routing Protocols
Invoked only for connection setup!!!
• Protocols to route connection requests through
interconnected network of ATM switches.
• P-NNI Phase 1 completed by ATM Forum in March
’96.
- Will allow switches from multiple vendors to
interoperate in large ATM networks
PNNI (Private Network Node Interface or
Private Network-to-Network Interface)
PNNI Phase I consists of 2 Protocols:
1. Routing:
• PNNI routing is used to distribute information on
the topology of the ATM network between switches
and groups of switches.
• This information is used by the switch closest to the
SVC requestor to compute a path to the destination
that will satisfy QoS objectives.
• PNNI supports a hierarchical routing structure => scalable for large
networks.
PNNI (Private Network Node Interface or
Private Network-to-Network Interface)
2. Signaling:
• PNNI signaling uses the topology and resource
information available at each switch to construct
a source-route path called a Designated Transit
List (DTL).
• The DTL contains the specific nodes and links the
SVC request will traverse to meet the requested
QoS objectives and complete the connection.
• Crankback and alternate routing are also supported
to route around a failed path.
ARCHITECTURE
[Figure: end systems attach to ATM switches; PNNI runs between switches
inside a private ATM network and between private ATM networks.]
• Private Network-to-Network Interface
• Private Network-Node-Interface
ARCHITECTURE (Cont.)
[Figure: UNI signaling runs between end systems and ATM switches; NNI
signaling runs between ATM switches.]
Features of PNNI
• Point-to-point and point-to-multipoint connections
• Can treat a cloud as a single logical link
• Multiple levels of hierarchy => Scalable for global networking.
• Reroutes around failed components at connection setup.
• Automatic topological discovery => No manual input required
• Connection follows the same route as the setup message
(associated signaling)
• Uses: cost, capacity, link constraints, propagation delay
• Also uses: cell delay, cell delay variation, current average load,
current peak load
• Uses both link and node parameters
• Supports transit carrier selection
• Supports anycast
Architecture Reference Model of
Switching System
[Figure: switching system reference model. A management interface protocol
sits alongside a topology protocol (topology exchange, topology database,
route determination) and call processing (UNI and NNI signaling); the
switching fabric below carries the cell streams.]
Overview of PNNI Routing
Design Concepts
PNNI uses several well-known techniques:
• Link State Routing
• Hierarchical Routing
• Source Routing
CHOICES IN THE BEGINNING
• PNNI is a routing protocol => it requires a routing algorithm.
CHOICE 1: Distance Vector Routing (the algorithm used in RIP).
* Not selected because:
Not scalable; Prone to routing loops; Does not converge
rapidly; and uses excessive overhead control traffic.
CHOICE 2: Link-State Routing (such as OSPF).
* Selected because
Scalable; Converges rapidly; Generates less overhead traffic; and is
extendible. Extendible means that information in addition to the status of
the links can be exchanged between nodes and incorporated into the
topology database.
Difference from OSPF: the status of an ATM switch is advertised in addition to the status
of the links.
1. Concept of Link State Routing
Each ATM switch runs a HELLO protocol, sending
HELLO packets to its neighbors periodically or on
state changes; topology updates are then flooded
to all other switches in the network.
Each ATM switch exchanges updates with its
neighbor switches on the status of the links, the
status and resources of the switches, and the
identity of each other’s neighbor switches.
The switch information may include data about
switch capacity, QoS, and transit time.
Concept of Link State Routing (Ctnd’)
This information is important because each SVC request
is routed over a path that must meet its QoS
objectives.
This information is used to build a topology database
(NETWORK MAP) of the entire network.
Each ATM switch in the group will have an identical
copy of the topology database.
If a change in topology occurs (e.g., link is broken),
then only that change is propagated between the
switches.
Concept of Link State Routing
Topology Database
[Figure: four ATM switches connecting End Users A and B; every switch
holds an identical copy of the topology database.]
2. Routing Hierarchy Concept
(Similar to 2-level hierarchy of OSPF)
Can support 104 levels of hierarchy. In practice we need 3 or 4 levels.
Multilevel Routing Hierarchy
[Figure: multilevel routing hierarchy: real ATM switches connected by
physical links at the bottom are represented at the logical levels above.]
Routing Hierarchy Concept (Cont.)
• Peer Groups: Switches that share a common addressing
scheme are grouped into an area.
• Members of a peer group will exchange information with
each other about the topology of the peer group.
An ATM switch, called the Peer Group Leader (PGL), then
summarizes this information and exchanges it with the
PGLs that represent other PEER GROUPs of switches at the
next higher level of the hierarchy.
Routing Hierarchy Concept (Cont.)
Example:
[Figure: a collection of interconnected ATM switches, before any hierarchy
is imposed.]
Routing Hierarchy Concept (Cont.)
PNNI Routing Hierarchy
Example:
[Figure: PNNI routing hierarchy. Lowest-level peer groups A (switches
A.1–A.4), B (switches B.1–B.3), and C (switches C.1–C.4) are represented
by logical group nodes LGN A, LGN B, and LGN C, which together form the
higher-level peer group N.]
Explanation of the Example:
• The three peer groups at the bottom of the figure represent a topology of
real ATM switches connected by physical links.
• The switches in peer group A, e.g., will exchange topology and resource
information with the other switches in the peer group.
• Switch A.1 is elected the PGL and will summarize the information about
peer group A.
• In the next higher-level peer group, N, the PGL for A, switch A.1, will
exchange the summarized information with the other nodes in N.
• The other PGLs, representing B and C, will do likewise.
• Switch A.1 will then advertise the summarized information it has gained
from the other members of N into its own lower level,
i.e., child peer group A.
Remark:
• Each switch in a peer group will have complete information
about the topology of the peer group it is part of, and partial or
summarized information about the outside or external peer groups.
• Hierarchy enables a network to scale by reducing the amount of
information a node is required to maintain.
• It confines the real topology information that is transmitted on the
network to a local area or peer group.
• The information on the network is further reduced by the process of
topology aggregation, so that a collection of real switches can appear
as a single node to other peer groups.
PNNI Terminology
• PEER GROUP: A peer group is a collection of nodes that share a common addressing
scheme, maintain an identical topology database, and exchange topology and resource
information with each other. Members of a peer group discover their neighbors using a
HELLO protocol.
[Figure: the example PNNI hierarchy.]
Example: Peer groups A, B, and C consist of real ATM switches connected by physical links. Peer group N
consists of three logical group nodes (LGNs). The LGNs are summarized representations of the peer groups
of actual switches they represent below them.
• PEER GROUP IDENTIFIER: Members of the same peer group are identified by a
common peer group identifier. The peer group identifier is defined from a unique
20-byte ATM address that is manually configured in each switch. (See the
addressing subsection!)
• LOGICAL NODE: A logical node is any switch or group of switches that runs the
PNNI routing protocol; e.g., all members of PG A, and the node above it, LGN A,
are logical nodes.
[Figure: the example hierarchy again; every node shown, real or logical,
is a logical node.]
• LOGICAL GROUP NODE (LGN): An LGN is an abstract representation of a
lower-level peer group for the purposes of representing that peer group in the
next higher-level peer group. In other words, it represents a group as a single
point.
[Figure: the example hierarchy again.]
LGN A represents PG A, LGN B represents PG B, and LGN C represents PG C. Even though
an LGN is not a real switch but a logical representation of a group of switches, it still behaves
as if it were a real ATM switch.
• PARENT PEER GROUP: The peer group containing the LGN that represents the
peer group below it; e.g., PG N is a parent peer group.
• CHILD PEER GROUP: Any peer group at the next lower hierarchy level; in
other words, a peer group represented by an LGN in the next higher-level peer
group.
[Figure: the example hierarchy again.]
e.g., Peer groups A, B, and C are child peer groups.
• PEER GROUP LEADER (PGL): Within the peer group, a PGL is elected
to represent the peer group as a logical group node in the next higher-level
peer group. The PGL is responsible for summarizing information about
the peer group upward and passing higher-level information downward.
• The switch with the highest "leadership priority" and highest ATM address is
elected as leader.
• Note: election is a continuous process => the leader may change at any time.
[Figure: the example hierarchy with the PGLs shaded.]
e.g., Each of the peer groups has a PGL, shaded in gray: A.1, B.2, C.2, and LGN A.
• HELLO PROTOCOL: This is a standard link-state procedure used by neighbor
nodes to discover the existence and identity of each other.
• BORDER NODES: A border node is a logical node that has a neighbor belonging
to a different peer group. This is established when neighbor switches exchange
hello packets. The links connecting two peer groups are called outside links.
[Figure: the example hierarchy with the border nodes highlighted.]
e.g., Nodes A.4, B.2, B.3, and C.1 are border nodes.
• UPLINKS:
• An uplink is a logical connection from a BORDER NODE to a higher-level
LGN.
• The existence of an uplink is derived from an exchange of
HELLO PACKETS between BORDER NODES.
• The other members of the peer group are then informed about the
existence of the uplink.
• Uplinks are used by the PGL to construct logical links between LGNs in
the next higher-level peer group.
[Figure: uplinks from border nodes in PG A to the higher-level LGNs.]
e.g., uplinks from PG A to LGN B and LGN C.
• LOGICAL LINK:
• A connection between 2 nodes.
• Logical links interconnect the members of PG N.
• Horizontal links are logical links that connect nodes in the same peer group.
• ROUTING CONTROL CHANNEL (RCC):
• VPI=0, VCI=18 is reserved as the VC used to exchange routing
information between logical nodes.
• The RCC established between two LGNs is an SVC that serves as their
logical link; the information the LGNs need to establish this RCC SVC is
derived from the existence of uplinks.
• TOPOLOGY AGGREGATION:
• This is the process of summarizing and compressing information at one
peer group to advertise into the next higher-level peer group.
• Topology aggregation is performed by the PGLs.
• Links can be aggregated, so that multiple links in the child peer group
may be represented as a single link in the parent peer group (see the
sketch below).
• Nodes are aggregated from multiple child nodes into a single LGN.
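PNNI leaves the exact aggregation policy to the implementation, so the rule sketched below (parallel capacities add, the delay bound is the worst of the links) is only one hypothetical choice, shown to illustrate the idea:

# Collapse several real links between two child peer groups into one
# logical link advertised in the parent peer group.
def aggregate_links(links):
    # links: list of (available_cell_rate, max_cell_transfer_delay)
    acr = sum(rate for rate, _ in links)    # parallel capacities add up
    ctd = max(delay for _, delay in links)  # pessimistic delay bound
    return (acr, ctd)

# Three physical links between peer groups A and B become one logical link.
print(aggregate_links([(100_000, 2), (50_000, 5), (80_000, 3)]))
# -> (230000, 5)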
• PNNI TOPOLOGY STATE ELEMENT (PTSE):
• This unit of information is used by nodes to build and synchronize a
topology database within the same peer group.
• PTSEs are reliably flooded between nodes in a peer group, and
downward from an LGN into the peer group it represents.
• PTSEs contain topology state information about the links and nodes in
the peer group.
• PTSEs are carried in PNNI topology state packets (PTSPs).
• PTSPs are sent at regular intervals, or when triggered by an
important change in topology.
• REMARK (Summary):
=> Upon initialization, nodes exchange PTSE headers,
e.g., "my topology database is dated 11-March-2001:11:59".
=> The node with the older database requests the more recent
information.
=> After synchronizing the routing databases, they advertise
the link between them.
=> The advertisement (a PTSP) is flooded through the peer group.
=> All PTSEs have a lifetime and expire unless renewed.
=> Only the node that originated a PTSE can reissue it.
=> PTSPs are issued periodically and also event-driven.
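The flooding decision itself is simple to sketch (field names and the sequence-number scheme below are illustrative, not the actual PNNI encoding):

# Install and re-flood a received PTSE only if it is newer than the
# instance already held in the topology database.
def receive_ptse(db, ptse, from_port, ports):
    key = (ptse["origin"], ptse["ptse_id"])
    held = db.get(key)
    if held is not None and held["seq"] >= ptse["seq"]:
        return  # already have this instance or a newer one: stay silent
    db[key] = ptse
    for port in ports:
        if port != from_port:  # flood on every link except the incoming one
            send(port, ptse)

def send(port, ptse):  # stand-in for transmission over the RCC
    print(f"flooding PTSE {ptse['ptse_id']} from {ptse['origin']} on port {port}")

db = {}
receive_ptse(db, {"origin": "A.1", "ptse_id": 7, "seq": 3}, from_port=1, ports=[1, 2, 3])
receive_ptse(db, {"origin": "A.1", "ptse_id": 7, "seq": 3}, from_port=2, ports=[1, 2, 3])  # duplicate: no re-flood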
• UPWARD AND DOWNWARD INFORMATION FLOW:
The figure shows the information flow during this process for PG A and LGN A.
• The PGL in A, A.1, is responsible for producing information about PG A,
summarizing it, and then representing A as a single LGN in PG N. This is
the upward flow.
* Note that no PTSEs flow upward.
* PTSEs flow downward and horizontally from the PGL.
* This provides the nodes in PG A with visibility outside their peer group and
enables them to intelligently route an SVC request.
* External visibility for nodes in a peer group is limited to knowledge
about uplinks to other LGNs.
PNNI Upward/Downward Information Flow
[Figure: PGL A.1 summarizes peer group A's topology and resource data and
exchanges it with LGN A, which represents the group in peer group N; LGNs
communicate within their peer group by flooding; PTSEs are flooded at the
peer level within PG A; and group leaders also pass summarized topology
information down to the nodes of lower-level peer groups.]
Addressing
• The fundamental purpose of PNNI is to compute a route from a source to a
destination based on the called ATM address.
• The called ATM address is an information element contained in the SETUP
message that is sent over the UNI from the device to a switch (ATM UNI 3.1
specification).
• Presumably a switch running PNNI Phase I will have in its topology database an
entry that will match a portion or prefix of the 20-byte ATM address that is
contained in the SETUP message.
• The switch will then be able to compute a path through the network to the
destination switch.

ATM end system address (20 bytes):
  Address prefix (13 bytes: AFI/IDP plus the high-order DSP) | End System Identifier (ESI, 6 bytes) | SEL (1 byte)
PNNI uses the first 19 bytes.
Addressing (Ctnd)
(Refer to the 20-byte address format shown above; PNNI uses the first 19 bytes.)
• Addressing and identification of components of the PNNI routing hierarchy are
based on the use of ATM end system addresses.
• PNNI routing works off the first 19 bytes of this address, or some prefix of this
address.
• The 20th byte is the selector (SEL) field, which has only local significance to the
end station and is ignored by PNNI routing.
• The most significant 13 bytes of the ATM address field are used to define PEER GROUPs.
• Nodes in a PEER GROUP have common high-order bits (see the sketch below).
• This allows up to 13 × 8 = 104 levels in the hierarchy. (In practice, 3-4 levels are enough.)
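The prefix test behind peer group membership can be sketched directly (the addresses below are hypothetical, and real PNNI compares configured peer group identifiers rather than raw addresses):

# Two nodes share a peer group at a given level when their ATM addresses
# agree on the first `level` bits (level <= 104, i.e., within 13 bytes).
def same_peer_group(addr_a: bytes, addr_b: bytes, level: int) -> bool:
    full, rem = divmod(level, 8)
    if addr_a[:full] != addr_b[:full]:
        return False
    if rem == 0:
        return True
    mask = (0xFF << (8 - rem)) & 0xFF  # keep only the remaining high bits
    return (addr_a[full] & mask) == (addr_b[full] & mask)

a1 = bytes.fromhex("47000580ffde000000000001")  # made-up addresses
a2 = bytes.fromhex("47000580ffde000000000099")
print(same_peer_group(a1, a2, 56))  # True: the first 56 bits agree
print(same_peer_group(a1, a2, 96))  # False: the last byte differs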
Addressing (Cont.)
• Nodes in a peer group have the same prefix address bits in
common.
  Highest level illustrated:  Address Prefix (x bits)     | ESID | SEL
  Next lower level:           Address Prefix (x+y bits)   | ESID | SEL
  Lowest level illustrated:   Address Prefix (x+y+z bits) | ESID | SEL
Addressing (Cont.)
• At the highest level illustrated, the LGNs that make
up the top-level peer group have their leftmost x bits
in common.
• At the next lower level, the three LGNs shown have
their leftmost x+y bits in common.
• At the lowest level illustrated, the nodes have their
leftmost x+y+z bits in common.
(At this level, they are all real physical switches.)
Peer Group Generation Process
• Two identifiers are used in PNNI to define the hierarchy and a node's
placement in the hierarchy.
• The first is the "Peer Group Identifier". This is a 14-byte value.
• The first byte is a level indicator which defines how many of the following
104 leftmost bits are shared by switches in the peer group; in other words,
what level in the hierarchy the peer group is in.
• Peer group identifiers must be prefixes of ATM addresses.

Peer Group Identifier:
  Level Indicator (1 byte) | Peer Group Identifier (13 bytes)
Peer Group Generation Process (Cont.)
• A peer group is identified by its peer group identifier.
• Peer group IDs are specified at configuration time.
• Neighboring nodes exchange peer group IDs in hello packets.
• If they have the same peer group ID, then they belong to the same peer
group.
• If the exchanged peer group IDs are different, then the nodes belong to
different peer groups.
• The "Node Identifier" is 22 bytes in length and consists of a 1-byte level
indicator, a 1-byte lowest-level node indicator, and the 20-byte ATM address.
• The Node Identifier is unique for each PNNI node in the routing domain;
it identifies the actual physical node address.
• A PNNI node that advertises topology information in PNNI topology
state packets will include the Node Identifier and the Peer Group
Identifier to indicate the originator of the information and the scope (on
which level of the hierarchy it is directed to).
PNNI Routing Hierarchy
Example:
[Figure: the same hierarchy as before: peer groups A, B, and C of real
switches, with LGN A, LGN B, and LGN C in parent peer group N.]
Peer Group Generation Process (Cont.)
The process of building PNNI peer groups is recursive, i.e., the same
process is used at each level of the hierarchy. The exceptions are (1) the
lowest-level peer groups, because the logical nodes representing actual
switches can have no child nodes, and (2) the highest-level peer group,
because there is no parent to represent it.
PROCEDURE
0. Initiate physical connections or VPs between switches (at the
lowest level).
1. Exchange HELLO messages with physical peer switches.
2. Determine peer group membership (configure lowest-level
peer groups).
3. Flood topology-state PTSEs in the peer group.
  3a. Create the "Topology Database".
  3b. Determine the "BORDER NODES".
4. Elect the peer group leader.
PROCEDURE (Cont.)
5. Identify UPLINKS from the BORDER NODES (if any).
6. Build horizontal links between LGNs at the next higher level.
7. Exchange HELLO messages with adjacent logical nodes
(LGNs at that level).
8. Determine peer group membership at that level.
9. Flood topology-state PTSEs in the peer group.
  9a. Create the TOPOLOGY DATABASE.
  9b. Determine the BORDER NODES.
10. Elect the peer group leader.
11. If the highest-level peer group is reached, then the process is complete.
12. Otherwise, return to Step 5.
PNNI Information Exchange
• A PNNI node will advertise its own direct knowledge of the ATM
network.
• The scope of this advertisement is the peer group.
• The information is encoded in TLVs called PNNI Topology State
Elements (PTSE).
• Multiple PTSEs can be carried in a single PNNI Topology State
Packet (PTSP).
• The PTSP is the packet used to send topology information to a
neighbor node in the peer group.
PNNI Information Exchange (Ctd)
Each switch advertises the following:
Nodal Information: This includes the switch’s ATM
address, peer group identifier, leadership priority,
and other aspects about the switch itself.
Topology State Information: This covers outbound
link and switch resources.
Reachability: ATM addresses and ATM address
prefixes that the switch has learned about or is
configured with.
• PNNI is a topology state protocol => logical nodes advertise link
state and nodal state parameters.
• A link state parameter describes the characteristics of a specific link;
a nodal state parameter describes the characteristics of a node.
• Together these form the topology state parameters that are advertised
by PNNI nodes within their own peer group.
• Topology state parameters are either metrics or attributes.
• A topology state metric (accumulated along the path, e.g., delay) is a
parameter whose values must be combined over all links and nodes in
the SVC request path to determine if the path is acceptable.
• A topology state attribute (considered individually at each element)
is a parameter that determines if an individual link or node is acceptable
for an SVC request.
Topology state attributes can be further subdivided into
two categories: performance-related and policy-related.
Performance-related attributes (e.g., capacity) measure the
performance of a particular link or node.
Policy-related attributes (e.g., security) provide a measure of
the conformance level to a specific policy by a node or link in the
topology.
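The metric/attribute distinction matters during path selection: metrics must be accumulated over the whole candidate path, while attributes are tested element by element. A small sketch (thresholds and field names are illustrative):

# ctd (cell transfer delay) is treated as a metric: it adds up along the
# path. acr (available cell rate) is treated as an attribute: each link
# or node is checked on its own.
def path_acceptable(path, max_ctd, min_acr):
    total_ctd = sum(hop["ctd"] for hop in path)        # combine the metric
    if total_ctd > max_ctd:
        return False
    return all(hop["acr"] >= min_acr for hop in path)  # test each attribute

path = [{"ctd": 2, "acr": 150_000}, {"ctd": 3, "acr": 90_000}]
print(path_acceptable(path, max_ctd=10, min_acr=80_000))  # True
print(path_acceptable(path, max_ctd=4,  min_acr=80_000))  # False: 2+3 > 4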
Table: PNNI Topology State Parameters

  Metrics:
    Cell Delay Variation
    Maximum Cell Transfer Delay
    Administrative Weight

  Performance/Resource Attributes:
    Cell Loss Ratio for CLP=0
    Maximum Cell Rate
    Available Cell Rate
    Cell Rate Margin
    Variance Factor
    Branching Flag

  Policy Attributes:
    Restricted Transit Flag
Cell Delay Variation (CDV)
Expected CDV along the path relevant for CBR
and VBR-rt traffic.
Administrative Weight (AW)
A link or nodal state parameter set by the administrator
to indicate preference for a network link.
Cell Loss Ratio (CLR)
Describes the expected CLR at a node or link for
CLP=0 traffic.
Maximum Cell Rate (MCR)
Describes the maximum link or node capacity.
Available Cell Rate (ACR)
Measure of effective available bandwidth in cells
per second, per traffic class.
Cell Rate Margin (CRM)
A measure of the difference between effective bandwidth
allocation per traffic class and the allocation for sustainable cell rate
(SCR). A measure of the safety margin allocated above the
aggregate sustained rate.
Variance Factor (VF)
A relative measure of the square of the CRM normalized by the
variance of the aggregate cell rate on the link.
Branching Flag
Used to indicate if a node can branch point-to-multipoint traffic.
Restricted Transit Flag
Nodal state parameter that indicates whether a node supports
transit traffic or not.
PNNI Routing Hierarchy
• The process of generating a PNNI routing hierarchy
is an Automatic Procedure that defines how nodes
will interact with each other.
• It begins at the lowest level in the hierarchy and is
based on the information that is exchanged between
switches.
• The same process is performed at each level of the
hierarchy.
[Figure: the example hierarchy: peer groups A (A.1–A.4), B (B.1–B.3), and
C (C.1–C.4), with LGN A, LGN B, and LGN C in peer group N.]
Switches in peer group A exchange HELLO packets
with their neighbor switches over a special reserved
VCC (VPI=0, VCI=18) called the Routing Control
Channel (RCC).
HELLO packets contain
* A node’s ATM end system address,
* node ID, and
* its port ID for the link.
=> The HELLO protocol makes the neighboring nodes
known to each other.
Membership in the peer group is determined based
on addressing. Those with a matching peer group
identifier are common peer group members.
Topology information in the form of PTSEs is reliably
flooded in the peer group over the Routing Control
Channel.
PTSEs are the smallest collection of PNNI routing
information that is flooded as a unit among all logical
nodes within a peer group.
A node’s topology database consists of a collection
of all PTSEs received, which represent that node’s
present view of the PNNI routing domain.
The topology database provides all the information
required to compute a route from the given node to
any address reachable through that routing domain.
• A peer group leader (PGL) is elected based on the
leadership priority configured in the switch.
• The PGL represents the peer group as a logical group node
in the next higher-level peer group.
• PGLs summarize and circulate information in the parent group.
• Switch A.1 is the PGL for peer group A.
• A logical group node (LGN) is an abstract representation of
a lower-level peer group for the purposes of representing
that peer group in the next higher-level peer group.
• LGN A represents peer group A in the next higher-level
peer group, i.e., peer group N.
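The election rule stated earlier (highest leadership priority, ties broken by the highest ATM address) is easy to sketch; the priorities and addresses below are invented:

# Continuous PGL election: re-run whenever priorities or membership change.
def elect_pgl(nodes):
    # nodes: list of (leadership_priority, atm_address, name)
    return max(nodes, key=lambda n: (n[0], n[1]))[2]

peer_group_a = [
    (60, "47.0005...a1", "A.1"),  # highest priority: elected
    (50, "47.0005...a4", "A.4"),
    (50, "47.0005...a2", "A.2"),  # would lose the address tie-break to A.4
]
print(elect_pgl(peer_group_a))  # 'A.1'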
• Because PNNI is recursive, LGN A behaves just as if it were a
switch in a peer group, which in this case is peer group N.
• It is also the responsibility of the PGL to advertise, down into
its own peer group, PTSEs that it has collected at higher levels.
• This enables the switches in peer group A to have at least a
partial picture of the entire network.
• Next, identify uplinks and build horizontal links between LGNs.
• An uplink is a connection to an adjacent peer group.
• This is discovered when border switches exchange HELLOs
and determine that they are not in the same peer group.
• From the perspective of a switch in peer group A, an uplink
is a connection to an LGN in a higher-level peer group.
• A horizontal link is a logical connection between LGNs in the
next higher-level peer group. It is in actuality an SVC between
PGLs.
• So the horizontal link that connects LGN A and LGN B in peer
group N is an SVC between switches A.1 and B.2. It functions as
an RCC so that nodes in peer group N can exchange topology
information.
• The same process of exchanging HELLOs and flooding PTSEs is
performed in peer group N, i.e., PTSEs flow horizontally
through the peer group and downward through children.
• Border nodes do not exchange databases (they are in different peer groups).
Generic Connection Admission Control (GCAC)
• CAC is the function performed by ATM switches that determines
whether a connection request can be accepted or not.
• This is performed by every switch in the SVC request path.
• But CAC is not standardized, so it is up to each individual switch to
decide if a connection request and its associated QoS can be supported.
• PNNI uses information stored in the originating node's topology
database, along with the connection's traffic characteristics and QoS
requirements, to compute a path.
• But again, CAC is a local switch process that the originating node
cannot realistically keep track of.
Generic Connection Admission Control (GCAC)
• Therefore, PNNI invokes a Generic Connection Admission
Control (GCAC) procedure during the path selection
process, which provides the originating node with an
estimate of whether each switch's local CAC process will
accept the connection.
Generic Call Admission Control (GCAC)
=> Run by a switch in choosing the source route
=> Determines which path can probably support the call
Actual Call Admission Control (ACAC)
=> Run by each switch along the path
=> Determines if it can support the call
Generic Connection Admission Control (Ctd)
• The ingress switch performs GCAC to check the QoS-based route
information available (see the sketch below).
• Individual switches on the path perform actual CAC on receipt of the SETUP
message.
• When local admission fails, the request is backtracked to the previous switch in
the path (crankback).
[Figure: the ingress switch runs GCAC and chooses the path; each switch
along the path runs ACAC.]
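One way to picture GCAC is as a pruning pass over the ingress switch's topology database before path computation; the rule and numbers below are a simplified illustration, not the standardized GCAC formulas:

# Drop links whose advertised available cell rate probably cannot carry
# the request; path selection then runs over what is left. ACAC remains
# each switch's own check when the SETUP message arrives.
def gcac_prune(links, requested_cell_rate):
    # links: {(u, v): advertised_available_cell_rate}
    return {edge: acr for edge, acr in links.items()
            if acr >= requested_cell_rate}

links = {("A.2", "A.1"): 120_000, ("A.1", "A.4"): 40_000,
         ("A.2", "A.3"): 90_000,  ("A.3", "A.4"): 95_000}
print(gcac_prune(links, 80_000))
# link A.1->A.4 is pruned; Dijkstra would now route A.2 -> A.3 -> A.4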
3. Source Routing Concept
• A switch that receives an SVC request from a user-device
over a UNI connection will compute and generate the entire
path through the network based on its knowledge of the
network.
• Since QoS metrics are advertised and contained in the
topology-state database, the first switch has a good idea
about what path to take.
• The first switch will designate which switches the SVC
request should pass through. This list is called the Designated
Transit List (DTL).
• Note that the Intermediate Switches along the path do not
need to perform any path computations.
Source Routing Concept (Ctnd)
• They only perform CAC and forward SVC request by
following the information in the source-route path.
• If the SVC request is destined for a switch in another peer
group, it will specify all external peer groups the SVC should
travel through and direct it to a border switch in an adjacent
peer group.
• It will be up to the entry or border switch of the adjacent or
intermediate peer group to generate a DTL for its peer
group.
• Advantage of source routing => Prevents loops!!!
Source Routing Concept (Cont.)
Example:
[Figure: source S lists the transit switches 1, 2, 4, 5 toward destination
D, bypassing switch 3; a destination pointer tracks the current position
in the list.]
Designated Transit Lists
• PNNI uses source routing to forward an SVC request across one or more
peer groups in a PNNI routing hierarchy.
• The PNNI term for the source route vector is designated transit list (DTL)
which is a vector of information that defines a complete path from the
source node to the destination node across a peer group in the routing
hierarchy.
• A DTL is computed by the source node or the first node in a peer group
to receive an SVC request.
• Based on the source node’s knowledge of the network, it computes a path
to the destination that will satisfy the QoS objectives of the request.
• Nodes then simply obey the DTL and forward the SVC request through
the network.
Designated Transit Lists
• A DTL is implemented as an information element (IE) that is added to the
PNNI signaling messages SETUP and ADD PARTY.
• One DTL is computed for each peer group and contains the complete
path across the peer group.
• In other words, it is a list of nodes and links that the SVC request must
visit on its way to the destination.
• A series of DTLs is combined into a stack with the lowest-level peer group
on top and highest at the bottom.
• A pointer is also included to indicate the DTL node currently being visited
by the SVC.
• When the pointer reaches the end of a DTL, that DTL is removed from the
stack and the next DTL is processed (see the sketch below).
• If the SVC request enters a new lowest-level peer group, then a new DTL
will be generated by the ingress switch and placed at the top of the DTL
stack for processing.
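The stack discipline can be sketched as follows (a simplified model: each entry is a node list plus a pointer, with the current DTL at the top of the stack):

# Pop a DTL when its pointer reaches the end; an ingress switch pushes
# a fresh DTL for its own peer group.
def advance(stack):
    nodes, ptr = stack[-1]
    if ptr + 1 < len(nodes):
        stack[-1] = (nodes, ptr + 1)  # step the pointer within this DTL
    else:
        stack.pop()                   # DTL exhausted: next DTL is processed
    return stack

def enter_peer_group(stack, new_dtl):
    stack.append((new_dtl, 0))        # new lowest-level DTL goes on top
    return stack

stack = [(["A", "B", "C"], 0)]        # higher-level DTL stays below
stack = enter_peer_group(stack, ["A.2", "A.1", "A.4"])
stack = advance(stack)  # now at A.1
stack = advance(stack)  # now at A.4, the last node
stack = advance(stack)  # DTL across A removed
print(stack)            # [(['A', 'B', 'C'], 0)]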
DTL  EXAMPLE
[Figure: End User A attaches to switch A.2 in peer group A; End User C
attaches beyond switch C.3 in peer group C. DTL stacks along the path:
at A.2, (A.2, A.1, A.4) on top of (A, B, C); at B.2, (B.2, B.3) on top of
(A, B, C); at C.1, (C.1, C.2, C.3) on top of (A, B, C).]
Suppose User A wishes to establish an SVC with User C, and for policy
reasons the SVC request can only traverse the path shown in the figure.
DTL  EXAMPLE (Cont.)
• The SVC request is signaled across the UNI to node A.2.
• Node A.2 will use DIJKSTRA's shortest path algorithm to find the path to
the destination (the VIEW from A.2!).
• Node A.2 knows that User C is reachable through LGN C, and that LGN C is
reachable through LGN B.
• Node A.2 constructs two DTLs, one providing a path across PG A and another
across PG N. The SVC request is forwarded.
• Not shown, but each DTL includes a pointer that indicates which node in the
DTL is currently being visited.
• When the last node in the DTL, node A.4, is reached, that DTL is removed and
the next DTL in the stack is processed.
• When the SVC request reaches node B.2, a new DTL (B.2, B.3) is pushed onto
the top of the stack.
• Node B.2 simply adds a DTL that enables the SVC request to traverse PG B.
• When the SVC request reaches the end of the current DTL (B.2, B.3), it is
removed and the next one in the stack is processed.
• When the SVC request reaches node C.1, a new DTL (C.1, C.2, C.3) is pushed
on top and the call is forwarded to the destination.
CRANKBACK ALTERNATE ROUTING
• Nodes that generate DTLs (A.2, B.2, C.1 in the previous
example) use information in the topology and resource
database that may change while the SVC request is being
forwarded.
• This may cause the SVC request to be blocked.
• Short of going all the way back to User A and attempting to
reestablish the connection, PNNI invokes a technique called
crankback with alternate routing.
• When the SVC request cannot be forwarded according to
the DTL, it is cleared back to the originator of the DTL with
an indication of the problem. This is the crankback
mechanism.
CRANKBACK ALTERNATE ROUTING (Cont.)
[Figure: the same topology as the DTL example, with End User A and End
User C. After crankback from node B.3, node B.2 computes a new DTL of
(B.2, B.1, B.3) to route around the congested resource.]
CRANKBACK ALTERNATE ROUTING (Cont.)
• At that point a new DTL (alternate routing) may be constructed
that bypasses the nodes or links that blocked the SVC request,
but which must match the higher-level DTLs that are further
down in the DTL stack.
• If no path can be found, then the request is cranked back to
the previous DTL originator.
• If the DTL originator is the original source node, then the crankback
message is translated into a REJECT, and User A must attempt
another connection request.
CRANKBACK ALTERNATE ROUTING (Cont.)
• In our example, suppose the port on node B.3 that connects the
link to node B.2 experienced congestion while the SVC request
was being forwarded.
• Node B.3 would realize, after running CAC, that the SVC
request could not be satisfied over this port.
• A crankback message would then be sent back to node B.2,
indicating a problem with the specified port on node B.3.
• Node B.2 would then recompute a new DTL, as shown, and
forward the SVC request around the failed resource.
• This is illustrated in the figure, and sketched in code below.
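The retry loop at a DTL originator like B.2 can be sketched as follows (compute_dtl and try_path are toy stand-ins for the originator's path computation and for ACAC along the path):

# Keep recomputing around blocked elements until a path succeeds or no
# alternative remains (in which case the request cranks back further).
def forward_with_crankback(compute_dtl, try_path):
    blocked = set()
    while True:
        dtl = compute_dtl(blocked)
        if dtl is None:
            return None            # crank back to the previous originator
        failure = try_path(dtl)    # ACAC runs at each node along the DTL
        if failure is None:
            return dtl             # the connection proceeds along this DTL
        blocked.add(failure)       # record the blocked link and retry

# B.2's view of peer group B, with the B.2->B.3 port congested.
def compute_dtl(blocked):
    for dtl in (["B.2", "B.3"], ["B.2", "B.1", "B.3"]):
        if not blocked.intersection(zip(dtl, dtl[1:])):
            return dtl
    return None

def try_path(dtl):
    return ("B.2", "B.3") if ("B.2", "B.3") in list(zip(dtl, dtl[1:])) else None

print(forward_with_crankback(compute_dtl, try_path))  # ['B.2', 'B.1', 'B.3']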