Cisco ASR9000 Enterprise L2VPN for
Metro-Ethernet, DC-WAN, WAN-Core, and
Government and Public Networks
Implementation Guide
January 16, 2015
Building Architectures to Solve Business Problems
CCDE, CCENT, CCSI, Cisco Eos, Cisco Explorer, Cisco HealthPresence, Cisco IronPort, the Cisco logo, Cisco Nurse Connect, Cisco Pulse, Cisco SensorBase,
Cisco StackPower, Cisco StadiumVision, Cisco TelePresence, Cisco TrustSec, Cisco Unified Computing System, Cisco WebEx, DCE, Flip Channels, Flip for Good, Flip
Mino, Flipshare (Design), Flip Ultra, Flip Video, Flip Video (Design), Instant Broadband, and Welcome to the Human Network are trademarks; Changing the Way We Work,
Live, Play, and Learn, Cisco Capital, Cisco Capital (Design), Cisco:Financed (Stylized), Cisco Store, Flip Gift Card, and One Million Acts of Green are service marks; and
Access Registrar, Aironet, AllTouch, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the
Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Lumin, Cisco Nexus, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity,
Collaboration Without Limitation, Continuum, EtherFast, EtherSwitch, Event Center, Explorer, Follow Me Browsing, GainMaker, iLYNX, IOS, iPhone, IronPort, the
IronPort logo, Laser Link, LightStream, Linksys, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, PCNow, PIX, PowerKEY,
PowerPanels, PowerTV, PowerTV (Design), PowerVu, Prisma, ProConnect, ROSA, SenderBase, SMARTnet, Spectrum Expert, StackWise, WebEx, and the WebEx logo are
registered trademarks of Cisco and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship
between Cisco and any other company. (1002R)
THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT
SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE
OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB’s public
domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED “AS IS” WITH
ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT
LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF
DEALING, USAGE, OR TRADE PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING,
WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO
OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Cisco ASR9000 Enterprise L2VPN for Metro-Ethernet, DC-WAN, WAN-Core, and Government and Public Networks Implementation Guide
Service Provider Segment
© 2015 Cisco Systems, Inc. All rights reserved.
CONTENTS

Preface  iii

Chapter 1  Implementation Overview  1-1

Chapter 2  Enterprise L2VPN Transport Design  2-1
    Small Scale Network Design and Implementation  2-1
        Provider Edge and Provider Transport Configuration  2-2
        Fast Failure Detection Using Bidirectional Forwarding Detection  2-2
        Fast Convergence Using Remote Loop Free Alternate Fast Reroute  2-3
        Provider Edge and Provider Routers Transport Configurations  2-3
            Provider Edge Router Transport Configuration  2-3
            Provider Router Transport Configuration  2-5
    Core Network Quality of Service (QoS) Operation and Implementation  2-7
        Provider Edge and Provider Routers Core QoS Configuration  2-8
    Large Scale Network Design and Implementation  2-10
        Using Core Network Hierarchy to Improve Scaling  2-10
        Large Scale Hierarchical Core and Aggregation Networks with Hierarchy  2-12
        Fast Convergence Using BGP Prefix Independent Convergence  2-12
        Route Reflector Operation and Configuration  2-17
            Route Reflector Configuration  2-17

Chapter 3  Enterprise L2VPN Services Design  3-1
    Pseudo-wire  3-1
    Virtual Private LAN Service (VPLS)  3-2
        Virtual Forwarding Instance (VFI)  3-3
    Provider Backbone Bridging Ethernet VPN (PBB-EVPN)  3-4
        Backbone Edge Bridge (BEB)  3-5
        Backbone Core Bridge (BCB)  3-5
        Ethernet VPN Instance (EVI)  3-5
    E-Line (EPL & EVPL)  3-8
    Ethernet Private LAN (EP-LAN) and Ethernet Virtual Private LAN (EVP-LAN)  3-9
    E-TREE (EP-TREE/EVP-TREE)  3-11

Chapter 4  Provider Edge-Customer Edge Design Options  4-1
    Inter-Chassis Communication Protocol  4-1
    Ethernet Access  4-2
        Hub and Spoke Using MC-LAG Active/Active  4-2
        G.8032 Ring Access  4-7
    nV Access  4-12
        nV Satellite Simple Rings  4-13
        nV Satellite L2 Fabric  4-15
        nV Cluster  4-18
    MPLS Access Using Pseudo-wire Head-end (PWHE)  4-21

Chapter 5  Provider Edge User Network Interface  5-1
    QoS Implementation with MPLS Access  5-1
    QoS Implementation with Ethernet Hub and Spoke Access  5-4
    QoS Implementation with G.8032 Access  5-7
    QoS Implementation with Network Virtualization Access  5-11

Chapter 6  Virtual Private LAN Service (VPLS) Label-Switched Multicast (LSM)  6-1
Preface

The Enterprise Layer 2 Virtual Private Network (L2VPN) architecture enables a single physical network to support multiple virtual L2 networks. From the end-user perspective, each group appears to be connected to a dedicated L2 network with its own Quality of Service (QoS) and access policies.

This functionality supports numerous applications, including:

• Meeting requirements to separate departments within an organization.
• Sharing workload and resources, as well as disaster recovery, by interconnecting two or more of an enterprise's Data Centers.
• Extending L2 connectivity between enterprise branches and Data Centers at different locations.
• Realizing economic benefits by collapsing multiple existing networks onto one physical infrastructure, while maintaining L2 isolation and per-network policies.
• Maintaining interconnectivity among multiple campuses and access to external networks such as the Internet and Internet2.

For each of these applications where a separate dedicated network is required, a virtual L2 network offers the following key benefits over a non-virtualized infrastructure or separate physical networks:

• Reduced cost: rather than provisioning expensive dedicated WAN links, a single network supports multiple user groups with virtual networks, enabling greater statistical multiplexing and higher bandwidth utilization.
• A single network simplifies management and the operation of Ethernet Operations, Administration, and Maintenance (EOAM) protocols.
• Security between virtual networks is built in, without complex Access Control Lists (ACLs) to restrict access for each user group.
• Consolidating network resources into a single, higher-scale virtualized infrastructure makes improved high availability feasible, including clustering of devices and multi-homing.
Authors

• Chris Lewis
• Saurabh Chopra
• Javed Asghar
CHAPTER 1

Implementation Overview
An end-to-end enterprise virtual network infrastructure requires the following primary components:

• Layer 2 (L2) instances on the edge router devices that bind the interface toward an enterprise branch or campus router to the L2 Virtual Private Network (L2VPN).
• Multiprotocol Label Switching (MPLS) for label-based forwarding in the network core, so that forwarding does not rely on L2 addresses in the virtual network.

Table 1-1 lists terminology concerned with the MPLS L2VPN architecture.
Table 1-1  Terms Used in MPLS L2VPN Architecture

Ethernet Virtual Connection (EVC) — The logical representation of an Ethernet service, defined as an association between two or more User Network Interfaces (UNIs) that identifies a point-to-point or multipoint-to-multipoint path within the core network.

Ethernet Flow Point (EFP) — An Ethernet service endpoint. An EFP classifies frames from the same physical port to one of the multiple service instances associated with that port, based on user-defined criteria.

Label Distribution Protocol (LDP) — The protocol used on each link in the MPLS core network to distribute labels associated with prefixes; labels are locally significant to each router.

Provider Router — Also called a Label Switching Router (LSR), this router runs an Interior Gateway Protocol (IGP) and LDP.

Provider Edge Router — Also called an edge router, this router imposes and removes MPLS labels and runs an IGP, LDP, L2VPN instances, and Multiprotocol Border Gateway Protocol (MP-BGP).

Customer Edge Router — The demarcation device in a provider-managed VPN service. A LAN can connect to the provider edge router directly; if multiple networks exist at a customer location, a customer edge router simplifies connecting them to the L2VPN.
Figure 1-1 summarizes the three most common options used to virtualize Enterprise L2 WANs.
Figure 1-1  Transport Options for L2 WAN Virtualization

[Figure 1-1 depicts three options: (1) a self-deployed IP/MPLS backbone that is customer-managed end to end, with CE, PE, and P routers interconnecting Sites 1-3; (2) an SP-managed "Ethernet" service, in which a provider Ethernet service interconnects customer-managed backbones; and (3) an SP-managed "IP VPN" service, in which a provider MPLS VPN service interconnects customer-managed backbones over EVCs.]
This guide focuses on Option 1 shown in Figure 1-1, the enterprise owned-and-operated MPLS L2VPN model.
Figure 1-2 shows how the components combine to create an MPLS L2VPN service and support multiple L2VPNs on the physical infrastructure. In Figure 1-2, a provider router connects two provider edge routers. The packet flow is from left to right.
Figure 1-2  Major MPLS L2VPN Components and Packet Flow

[Figure 1-2 shows provider edge and provider routers along the path, the EVC and EFP at the edges, and the L2VPN packet format: the original packet encapsulated with a 4-byte VC label and, above it, a 4-byte IGP label.]
• The provider edge router on the left has three groups, each using its own virtual network. Each provider edge router has three L2VPN instances (red, green, and blue); each L2 instance is for the exclusive use of one group using a virtual infrastructure.
• When a packet arrives on the provider edge router on the left, the router appends two labels to the packet. BGP or LDP assigns the inner (VC) label, whose value is constant as the packet traverses the network. The inner label value identifies the L2VPN instance on the egress provider edge so that the L2 frame can be forwarded to the corresponding destination interface. LDP assigns the outer (IGP) label, whose value changes as the packet traverses the network to the destination provider edge router.
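The two-label behavior described above can be sketched as a small Python model (purely illustrative, not router code; the label values and instance name are invented, and details such as penultimate hop popping are omitted):

```python
def ingress_pe(frame, vc_label, igp_label):
    """Ingress PE pushes two labels: IGP (outer, top of stack) then VC (inner)."""
    return [igp_label, vc_label, frame]

def p_router(packet, swap_table):
    """A P router swaps only the top (IGP) label; the VC label is untouched."""
    return [swap_table[packet[0]]] + packet[1:]

def egress_pe(packet, vc_to_instance):
    """Egress PE uses the VC label to select the L2VPN instance for the frame."""
    vc_label, frame = packet[-2], packet[-1]
    return vc_to_instance[vc_label], frame

# Hypothetical labels: VC label 24001 stays constant end to end,
# while the IGP label changes at every provider-router hop.
pkt = ingress_pe("L2-frame", vc_label=24001, igp_label=16011)
pkt = p_router(pkt, {16011: 16012})
pkt = p_router(pkt, {16012: 16013})
instance, frame = egress_pe(pkt, {24001: "blue-l2vpn"})
```

Note how the swap tables touch only the outer label, matching the description above: the VC label is significant only to the egress provider edge router.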
CHAPTER 2

Enterprise L2VPN Transport Design
This chapter focuses on the use of Cisco Aggregation Services Router 9000 Series (ASR 9000) routers as provider and provider edge routers in the Multiprotocol Label Switching L2 Virtual Private Network (MPLS L2VPN) architecture. See Figure 1-2 above.
You can use this architecture design to implement network infrastructures that connect virtual networks among Data Centers, branch offices, and campuses over a variety of WAN connectivity options.

In this architecture design, Data Center, branch, and campus routers are considered customer edge routers. The design requires that provider and provider edge routers are configured with the following connectivity control and data plane options:

• Ethernet hub-and-spoke or ring;
• Network virtualization (nV); and
• Pseudo-wire Head-end for MPLS access.
Enterprise L2 virtualization requires a common MPLS transport infrastructure in order to implement multiple virtualized L2 networks. This MPLS transport must be resilient and equipped with fast convergence and failure detection mechanisms, and the architecture design requires that it scale for future expansion. Two options for incorporating provider and provider edge routers into the MPLS L2VPN transport infrastructure are:

• A flat Label Distribution Protocol (LDP) domain, appropriate for smaller MPLS L2VPN deployments (700-1000 devices); or
• A hierarchical design using Request for Comments (RFC) 3107 labeled Border Gateway Protocol (BGP) to segment provider and provider edge domains into separate Interior Gateway Protocol (IGP) domains, supporting infrastructures beyond 50,000 devices.

This chapter examines topics common to small and large network implementations as they pertain to small network design, including the additional technologies needed to enable small networks to support large numbers of users.
Small Scale Network Design and Implementation
Figure 2-1 shows the small network deployment topology.
Figure 2-1  Small Deployment Topology

[Figure 2-1 shows a Data Center connected through core nodes of a core and aggregation IP/MPLS domain to pre-aggregation nodes, which reach campus/branch sites over nV and Ethernet access.]
You can implement a domain that includes a few hundred provider and provider edge routers using single IGP and LDP instances. In Figure 2-1, the Data Center is on the left and the network extends across the WAN to the branch and campus locations.
There are various components involved in a small network design to achieve end-to-end MPLS transport
and instantiate L2 services seamlessly. These components are described in the following sections.
Provider Edge and Provider Transport Configuration

Transport networks, comprised of provider and provider edge routers, carry traffic from multiple L2VPNs at one location to another location. Transport networks require reachability and label-based forwarding across the transport domain, along with fast failure detection and convergence. Bidirectional Forwarding Detection (BFD) is used for fast failure detection; fast convergence uses Remote Loop Free Alternate Fast Reroute (rLFA FRR). These methods are described in the sections that follow.

Transport implementation requires provider and provider edge routers configured to use IGP for reachability. These devices also use LDP to exchange labels for prefixes advertised and learned from IGP, and maintain a Label Forwarding Information Base (LFIB) to make forwarding decisions. When sending L2 traffic from a branch or campus router to a remote location, provider edge routers encapsulate traffic in MPLS headers, using a label corresponding to the remote provider edge router. Intermediate devices examine the top label of the MPLS header, perform label swapping, and use the LFIB to forward traffic toward the remote provider edge router. Provider routers forward packets using only labels. This enables the establishment and use of label-switched paths (LSPs) when a provider edge router forwards VPN traffic to another location.
Fast Failure Detection Using Bidirectional Forwarding Detection

Link failure detection in the core normally occurs through loss of signal on the interface. This is not sufficient for BGP because its neighbors are typically not on the same segment: a link failure (signal loss) at one BGP peer can remain undetected by another BGP peer. Absent other failure detection methods, convergence occurs only when a BGP timer expires, which is too slow. BFD is a lightweight, fast "hello" protocol that speeds remote link failure detection.

Provider edge and provider routers use BFD as a failure detection mechanism on the core interfaces that informs IGP about link or node failure within milliseconds. BFD peers send BFD control packets to each other on BFD-enabled interfaces at negotiated intervals. If a BFD peer does not receive a control packet before the configured dead timer (also measured in milliseconds) expires, the BFD session is torn down and IGP is rapidly informed of the failure. IGP immediately tears down the adjacency with the neighbor and switches traffic to an alternate path. This enables failure detection within milliseconds.
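With the values used later in this chapter (a 15-millisecond interval and a multiplier of 3), the worst-case detection time is simply interval × multiplier. A quick sketch (simplified: on a real session the effective interval is negotiated between the peers, and the remote multiplier applies):

```python
def bfd_detection_time_ms(tx_interval_ms: int, multiplier: int) -> int:
    """Worst-case failure detection time: the session is declared down
    after 'multiplier' consecutive BFD control packets are missed."""
    return tx_interval_ms * multiplier

# Values from the transport configuration in this chapter:
# bfd minimum-interval 15, bfd multiplier 3.
detection_ms = bfd_detection_time_ms(15, 3)  # 45 ms
```

This is why BFD detects failures within tens of milliseconds, well ahead of any IGP or BGP hold timer.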
Fast Convergence Using Remote Loop Free Alternate Fast Reroute

After BFD detects a failure, the next step is to "fast converge" the network to an alternate path. For IGP prefixes, Loop Free Alternate (LFA) enables fast convergence. The type of LFA depends on the network topology: the first type, called simply LFA, is suitable for hub-and-spoke topologies; the second type, remote LFA (rLFA), is suitable for ring topologies.

• Loop Free Alternate Fast Reroute (LFA FRR) calculates the backup path for each prefix in the IGP routing table; if a failure is detected, the router immediately switches to the appropriate backup path in about 50 milliseconds. Only loop-free paths are candidates for backup paths.
• The rLFA FRR works differently because it is designed for topologies that have a physical alternate path but no loop-free alternate path. In the rLFA case, automatic LDP tunnels are set up to provide LFAs for all network nodes.

Without LFA or rLFA FRR, a router calculates the alternate path only after a failure is detected, which delays convergence. LFA FRR instead calculates the alternate paths in advance to enable faster convergence. Provider and provider edge devices have alternate paths calculated for all prefixes in the IGP table and can use rLFA FRR to quickly reroute around a failure in a primary path.
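The "loop-free" test behind LFA is a simple metric inequality (RFC 5286): a neighbor N of the protecting router S is a loop-free alternate for destination D if N's own shortest path to D does not pass back through S. A minimal sketch, using hypothetical IGP metrics:

```python
def is_loop_free_alternate(dist_n_d: int, dist_n_s: int, dist_s_d: int) -> bool:
    """RFC 5286 loop-free condition: neighbor N of router S is an LFA for
    destination D if dist(N, D) < dist(N, S) + dist(S, D), i.e. N's best
    path to D cannot loop back through S."""
    return dist_n_d < dist_n_s + dist_s_d

# Hypothetical metrics on a small topology:
# S->D = 20, N->S = 10, N->D = 25  -> N is a valid LFA.
lfa_ok = is_loop_free_alternate(25, 10, 20)
# S->D = 20, N->S = 10, N->D = 35  -> N's best path to D is via S; not an LFA.
lfa_bad = is_loop_free_alternate(35, 10, 20)
```

When no neighbor satisfies the inequality (as on rings), rLFA tunnels the traffic to a remote node for which the condition does hold.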
Provider Edge and Provider Routers Transport Configurations
This section describes how to configure provider edge and provider router transport to support fast
failure detection and fast convergence.
Provider Edge Router Transport Configuration

Provider edge router configuration includes enabling an IGP, Intermediate System to Intermediate System (IS-IS) or Open Shortest Path First (OSPF), to exchange core and aggregation reachability, and enabling LDP to exchange labels on the core-facing interfaces. A loopback interface is also advertised in IGP because the L2 services are instantiated using Loopback0, as mentioned in Chapter 3, "Enterprise L2VPN Services Design." Using the loopback address improves reliability; the loopback interface is always up when the router is up, unlike physical interfaces that can have link failures.

Configure BFD on core-facing interfaces using a 15-millisecond "hello" interval and a multiplier of three to enable fast failure detection in the transport network. The rLFA FRR is used under IS-IS Level 2 for fast convergence if a transport network failure occurs. BGP Prefix Independent Convergence (PIC) is configured for fast convergence of BGP prefixes if a remote provider edge router becomes unreachable.
Table 2-1 details the provider edge router transport configuration.
Table 2-1
Provider Edge Router Transport Configuration
Provider Edge Router Transport Configuration
Description
interface Loopback0
Loopback Interface for BGP VPNv4 neighbors.
ipv4 address 100.111.11.1 255.255.255.255
ipv6 address 2001:100:111:11::1/128
!
interface TenGigE0/0/0/0
Core interface.
ipv4 address 10.11.1.0 255.255.255.254
!
router isis core
net 49.0100.1001.1101.1001.00
Enters router IS-IS configuration.
Assigns network address to the IS-IS process.
address-family ipv4 unicast
Enters IPv4 address-family for IS-IS.
metric-style wide
Metric style wide generates new-style type-length-value (TLV)
with wider metric fields for IPv4.
!
address-family ipv6 unicast
metric-style wide
Enters IPv6 address-family for IS-IS.
Metric style wide generates new-style TLV with wider metric
fields for IPv6.
!
interface Loopback0
Configures IS-IS for Loopback interface.
passive
Makes loopback passive to avoid sending unnecessary "hello"
packets on it.
address-family ipv4 unicast
Enters IPv4 address-family for loopback.
!
address-family ipv6 unicast
Enters IPv6 Address-family for loopback.
!
!
interface TenGigE0/0/0/0
Configures IS-IS for the Ten Gigabit Ethernet (TenGigE0/0/0/0) interface.
circuit-type level-2-only
Configures IS-IS circuit-type on the interface.
bfd minimum-interval 15
Configures minimum interval between sending BFD "hello"
packets to the neighbor.
bfd multiplier 3
Configures BFD multiplier.
bfd fast-detect ipv4
Enables BFD to detect failures in the path between adjacent
forwarding engines.
address-family ipv4 unicast
Enters the IPv4 address-family for Ten Gigabit Ethernet
(TenGigE) interface.
metric 10
Configures IS-IS metric for Interface.
fast-reroute per-prefix level 2
Enables per prefix FRR for Level-2 prefixes.
fast-reroute per-prefix remote-lfa tunnel mpls-ldp
Configures an FRR path that redirects traffic to a remote LFA
tunnel.
mpls ldp sync
Enables MPLS LDP sync to ensure LDP comes up on link
before link is used for forwarding to avoid packet loss.
!
!
mpls ldp
Enters MPLS LDP configuration mode.
log
graceful-restart
!
router-id 100.111.11.1
Configures router-id for LDP.
interface TenGigE0/0/0/0
Enables LDP on TenGigE0/0/0/0.
 address-family ipv4
!
Provider Router Transport Configuration

The provider router transport configuration includes enabling IGP (IS-IS or OSPF) to exchange core and aggregation reachability, and enabling LDP to exchange labels on core-facing interfaces. Provider routers do not need to know VPN addresses because they interpret only core and aggregation prefixes in the transport network. Provider routers swap labels based on the top packet label belonging to the remote provider edge routers, and use the Label Forwarding Information Base (LFIB) to establish the provider edge-to-provider edge label switched path (LSP). The rLFA FRR is used under IS-IS Level-2 for fast convergence if a transport network failure occurs.
Table 2-2 details the provider router transport configuration.
Table 2-2
Provider Router Transport Configuration
Provider Router Transport Configuration
Description
interface TenGigE0/0/0/0
Core interface connecting to provider edge.
ipv4 address 10.11.1.1 255.255.255.254
!
interface TenGigE0/0/0/1
Core interface connecting to core MPLS network.
ipv4 address 10.2.1.4 255.255.255.254
!
router isis core
Enters router IS-IS configuration.
net 49.0100.1001.1100.2001.00
Assigns network address to the IS-IS process.
address-family ipv4 unicast
Enters IPv4 address-family for IS-IS.
metric-style wide
Metric style wide generates new-style TLV with wider
metric fields for IPv4.
!
interface Loopback0
Configures IS-IS for loopback interface.
passive
Makes loopback passive to avoid sending unnecessary "hello" packets on it.
address-family ipv4 unicast
Enters IPv4 address-family for loopback.
!
!
interface TenGigE0/0/0/0
Configures IS-IS for TenGigE0/0/0/0 interface.
circuit-type level-2-only
Configures IS-IS circuit-type on the interface.
bfd minimum-interval 15
Configures minimum interval between sending BFD
"hello" packets to the neighbor.
bfd multiplier 3
Configures BFD multiplier.
bfd fast-detect ipv4
Enables BFD to detect failures in the path between
adjacent forwarding engines.
address-family ipv4 unicast
Enters the IPv4 address-family for TenGigE interface.
metric 10
Configures IS-IS metric for interface.
mpls ldp sync
Enables MPLS LDP sync to ensure LDP comes up on
link before link is used for forwarding to avoid packet
loss.
!
!
interface TenGigE0/0/0/1
Configures IS-IS for TenGigE0/0/0/1 interface.
circuit-type level-2-only
Configures IS-IS circuit-type on the interface.
bfd minimum-interval 15
Configures minimum interval between sending BFD
"hello" packets to the neighbor.
bfd multiplier 3
Configures BFD multiplier.
bfd fast-detect ipv4
Enables BFD to detect failures in the path between
adjacent forwarding engines.
address-family ipv4 unicast
Enters the IPv4 address-family for TenGigE interface.
metric 10
Configures IS-IS metric for interface.
fast-reroute per-prefix level 2
Enables per prefix FRR for Level-2 prefixes.
fast-reroute per-prefix remote-lfa tunnel mpls-ldp
Configures an FRR path that redirects traffic to a remote LFA tunnel.
mpls ldp sync
Enables MPLS LDP sync to ensure LDP comes up on link before link is used for forwarding to avoid packet loss.
!
!
mpls ldp
Enters MPLS LDP configuration mode.
log
neighbor
graceful-restart
router-id 100.111.2.1
Configures router-id for LDP.
!
interface TenGigE0/0/0/0
Enables LDP on TenGigE0/0/0/0.
!
interface TenGigE0/0/0/1
Enables LDP on TenGigE0/0/0/1.
!
!
Core Network Quality of Service (QoS) Operation and Implementation

Virtual enterprise networks carry traffic types that include voice, video, critical application traffic, and end-user web traffic. These traffic types require different priorities and treatment based on their characteristics and business significance. In the MPLS core network, QoS ensures proper treatment when transporting the virtual networks' traffic. This section describes this configuration.

As discussed in previous sections, the MPLS header is imposed on enterprise virtual network traffic at ingress to the MPLS network on provider edge routers. When the labeled traffic is transported in the core network, the QoS implementation uses the 3-bit MPLS EXP field (values 0-7) in the MPLS header for proper QoS treatment. The Differentiated Services (DiffServ) Per-Hop Behavior (PHB), which defines packet-forwarding properties associated with different traffic classes, is divided into the following:

• Expedited Forwarding—used for traffic requiring low loss, low latency, low jitter, and assured bandwidth
• Assured Forwarding—allows four classes with certain buffer and bandwidth guarantees
• Best Effort—best-effort forwarding

This section describes the MPLS Uniform QoS model. This model maps the Differentiated Services Code Point (DSCP) marking of traffic received on a provider edge router to the corresponding MPLS experimental (EXP) bits. Table 2-3 shows the mapping used for different traffic classes to PHB, DSCP, and MPLS EXP.
Table 2-3  Traffic Class Mapping

Traffic Class                    PHB  DSCP  MPLS EXP
Network Management               AF   56    7
Network Control Protocols        AF   48    6
Enterprise Voice and Real-time   EF   46    5
Enterprise Video Distribution    AF   32    4
Enterprise Telepresence          AF   24    3
Enterprise Critical              AF
    In Contract                       16    2
    Out of Contract                   8     1
Enterprise Best Effort           BE   0     0
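In the uniform model, the EXP value in Table 2-3 corresponds to the three most significant bits of the 6-bit DSCP (EXP = DSCP >> 3). A short sketch verifying that the table's DSCP-to-EXP pairs follow this rule:

```python
def dscp_to_exp(dscp: int) -> int:
    """Uniform-mode mapping: copy the top 3 bits (class selector) of the
    6-bit DSCP into the 3-bit MPLS EXP field."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit value (0-63)")
    return dscp >> 3

# DSCP/EXP pairs taken from Table 2-3.
table_2_3 = [(56, 7), (48, 6), (46, 5), (32, 4), (24, 3), (16, 2), (8, 1), (0, 0)]
for dscp, exp in table_2_3:
    assert dscp_to_exp(dscp) == exp
```

Note that EF (DSCP 46) maps to EXP 5 because only the class-selector bits carry over; the low-order DSCP bits are not representable in the EXP field.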
The QoS configuration defines class-maps for the traffic classes listed in Table 2-3, matching on the corresponding MPLS EXP values.

As outlined in Table 2-4 below, in the policy map the class-map for real-time traffic, CMAP-RT-EXP, is configured with the highest priority (level 1) and is policed to ensure low-latency expedited forwarding. The remaining classes are assigned their required bandwidth. Weighted Random Early Detection (WRED) is used as a congestion-avoidance mechanism for the EXP 1 and EXP 2 traffic in the enterprise critical class, CMAP-EC-EXP. The policy map is applied in the egress direction on provider edge and provider router core interfaces across the MPLS network.
Provider Edge and Provider Routers Core QoS Configuration

Table 2-4 details the provider edge and provider routers core QoS configuration.
Table 2-4
Provider Edge and Provider Routers Core QoS Configuration
Provider Edge and Provider Routers Core QoS Configuration Explanation
class-map match-any CMAP-EC-EXP
match mpls experimental topmost 1 2
Class-map for the enterprise critical traffic.
Matching MPLS experimental 1 OR 2 from traffic top-most
MPLS header.
end-class-map
!
class-map match-any CMAP-ENT-Tele-EXP
match mpls experimental topmost 3
Class map for enterprise telepresence traffic.
Matching MPLS experimental 3 from traffic top-most MPLS
header.
end-class-map
!
class-map match-any CMAP-Video-EXP
match mpls experimental topmost 4
Class-map for video traffic.
Matching MPLS experimental 4 from traffic top-most MPLS
header.
end-class-map
!
class-map match-any CMAP-RT-EXP
match mpls experimental topmost 5
Class-map for real-time traffic.
Matching MPLS experimental 5 from traffic top-most MPLS
header.
end-class-map
!
class-map match-any CMAP-CTRL-EXP
match mpls experimental topmost 6
Class-map for control traffic.
Matching MPLS experimental 6 from traffic top-most MPLS
header.
end-class-map
!
class-map match-any CMAP-NMgmt-EXP
match mpls experimental topmost 7
Class-map for network management traffic.
Matching MPLS experimental 7 from traffic top-most MPLS
header.
end-class-map
!
!
policy-map PMAP-NNI-E
Policy-map configuration for 10 G link.
class CMAP-RT-EXP
Matching the real-time class.
priority level 1
Defining top priority 1 for the class for low latency queuing.
police rate 1 gbps
Policing the priority class.
!
!
class CMAP-CTRL-EXP
bandwidth 200 mbps
Assigning the desired bandwidth to the class.
!
class CMAP-NMgmt-EXP
bandwidth 500 mbps
!
class CMAP-Video-EXP
bandwidth 2 gbps
!
class CMAP-EC-EXP
bandwidth 1 gbps
!
random-detect exp 2 80 ms 100 ms
random-detect exp 1 40 ms 50 ms
Using WRED for the enterprise critical class for both EXP 1 and
EXP 2 for congestion avoidance. EXP 1 is dropped earlier.
!
class CMAP-ENT-Tele-EXP
bandwidth 2 gbps
!
class class-default
!
end-policy-map
!
interface TenGigE0/0/0/0
Core interface on provider or provider edge.
service-policy output PMAP-NNI-E
Egress service policy on the interface.
Large Scale Network Design and Implementation
When an MPLS network includes more than 1000 devices, implementing a hierarchical network design
is recommended. In this guide, the hierarchical network design uses labeled BGP, as defined in
RFC 3107. Figure 2-2 shows a network with hierarchy.
Figure 2-2 Using Core Network Hierarchy to Improve Scaling
[Figure: aggregation networks (IP/MPLS domains with aggregation nodes serving the data center, Ethernet/nV access, and campus/branch sites) attach to core nodes in the core network IP/MPLS domain; an iBGP (eBGP) hierarchical LSP spans the per-domain LDP LSPs.]
The main challenges of large network implementation result from network size: the large number of network nodes inflates the routing and forwarding tables in individual provider and provider edge devices, and running all nodes in one IGP/LDP domain compounds the problem. In an MPLS environment, unlike in an all-IP environment, every service node needs a /32 network address as a node identifier. These /32 addresses cannot be summarized, so link-state databases grow linearly as devices are added to the MPLS network.
The labeled BGP mechanism, defined in RFC 3107, can be used so that link-state databases in core network devices do not have to learn the /32 addresses of all MPLS routers in the access and aggregation domains. The mechanism effectively moves prefixes from the IGP link-state database into the BGP table. Labeled BGP, implemented in the MPLS transport network, introduces hierarchy into the network to provide better scalability and convergence, and ensures that each device receives only the information it needs to provide end-to-end transport.
Large-scale MPLS transport networks used to transport virtual network traffic can be divided into two IGP areas: the core network is configured with Intermediate System to Intermediate System (IS-IS) Level 2 (L2) or the Open Shortest Path First (OSPF) backbone area, and the aggregation network is configured with IS-IS Level 1 (L1) or an OSPF non-backbone area. Another option is to run different IGP processes in the core and aggregation networks. No redistribution occurs between the core and aggregation IGP levels, areas, or processes, which reduces the size of the routing and forwarding tables in each domain and provides better scalability and faster convergence. Running an IGP in each area enables intra-area reachability, and LDP is used to build intra-area LSPs.
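The second option mentioned above, running separate IGP processes in the core and aggregation networks, can be sketched as follows in IOS XR-style configuration. The process names, NET addresses, and interface numbers here are illustrative assumptions, not values from this design:

```
router isis AGG
 net 49.0001.0000.0000.0001.00
 address-family ipv4 unicast
  metric-style wide
 !
 interface TenGigE0/2/0/0
  address-family ipv4 unicast
 !
!
router isis CORE
 net 49.0002.0000.0000.0001.00
 address-family ipv4 unicast
  metric-style wide
 !
 interface TenGigE0/2/0/1
  address-family ipv4 unicast
 !
!
```

Because no redistribution is configured between the two processes, each process carries only the prefixes of its own domain; labeled BGP provides reachability between the domains.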
Because route information is not redistributed between different IGP levels and areas, provider edge devices need a mechanism to reach provider edge device loopbacks in other areas and levels and to send VPN traffic. Labeled BGP enables inter-area reachability and establishes end-to-end LSPs between provider edge routers. Devices that are connected to both aggregation and core domains are called Area Border Routers (ABRs). ABRs run labeled Interior BGP (iBGP) sessions with provider edge routers in their local aggregation domain and serve as route reflectors for those provider edges. Provider edge routers advertise their loopback addresses (used for L2VPN neighboring) and the corresponding labels to their local route reflector ABRs using labeled iBGP. ABRs in turn run labeled iBGP sessions with a route reflector device in the core domain, which reflects provider edge router loopback addresses and labels learned from one ABR client to the other ABR clients without changing the next-hop or other attributes. Each ABR learns provider edge router loopback addresses and labels from other aggregation domains and advertises them to provider edge routers in its local aggregation domain.
ABRs use next-hop-self when advertising routes to provider edge routers in the local aggregation domain and to route reflectors in the core domain. As a result, provider edge routers learn remote provider edge loopback addresses and labels with the local ABR as BGP next-hop, and ABRs learn remote provider edge loopback addresses with the remote ABR as BGP next-hop. Provider edge routers therefore use two transport labels when sending labeled VPN traffic into the MPLS cloud: one label for the remote provider edge router and another for the BGP next-hop (the local ABR). The top label, for the BGP next-hop local ABR, is learned from the local IGP and LDP; the label below it, for the remote provider edge router, is learned through labeled iBGP with the local ABR. Intermediate devices across the different domains perform label swapping based on the top label of received MPLS packets. This achieves an end-to-end hierarchical LSP without running the entire network in a single IGP/LDP domain. Devices learn only the necessary information, such as prefixes in their local domains and remote provider edge loopback addresses, which makes labeled BGP scalable for large networks.
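The label stack a provider edge imposes, as described above, can be summarized in a short conceptual sketch (an illustration, not device output):

```
PE1 sending labeled VPN traffic toward remote PE2:

  +--------------------------+  top: LDP label for the local ABR
  |  LDP label (local ABR)   |       (the BGP next-hop), learned
  +--------------------------+       from the local IGP/LDP
  |  BGP label (remote PE2)  |  middle: learned for PE2's loopback
  +--------------------------+          through labeled iBGP with
  |  VC/service label        |          the local ABR
  +--------------------------+  bottom: identifies the L2VPN
  |  original L2 frame       |          service on PE2
  +--------------------------+
```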
Figure 2-3 Large Scale Hierarchical Core and Aggregation Networks with Hierarchy
[Figure: each aggregation network runs IS-IS Level 1 (or an OSPF non-backbone area) and the core network runs IS-IS Level 2 (or the OSPF backbone area); ABRs apply next-hop-self as route reflectors (RR) toward their local aggregation nodes and peer with the core RR using BGP IPv4+label; the label stack is VC label, remote PE label, and local RR/ABR label, carried end to end as an iBGP hierarchical LSP over per-domain LDP LSPs.]
Provider edge routers are configured in IS-IS Level-1 (OSPF non-backbone area) to implement the ABR, provider edge, and core route reflector transport configuration for large-scale MPLS VPNs. ABR aggregation-domain-facing interfaces are configured as IS-IS Level-1 (OSPF non-backbone area) and core-domain-facing interfaces as IS-IS Level-2 (OSPF backbone area). Core route reflector interfaces remain in IS-IS Level-2 (or the OSPF backbone area). Each provider edge and its local ABR are configured with a labeled iBGP session, with the ABR acting as route reflector. The core route reflector is configured with labeled BGP peering to all ABRs. LDP is configured in the same way as in the smaller network. Each ABR is configured with next-hop-self for both its provider edge and core labeled BGP peers to achieve hierarchical LSPs. BFD is used on all interfaces as a fast failure-detection mechanism. BGP PIC is configured for fast convergence of IPv4 prefixes learned through labeled iBGP, and rLFA FRR is configured under IS-IS to provide fast convergence of IGP-learned prefixes.
ABR loopbacks are required in both the aggregation and core domains because they are used for labeled BGP peering with provider edges in the local aggregation domain as well as with the route reflector in the core domain. To achieve this, ABR loopbacks are placed in both IS-IS Level-1 and Level-2 (or the OSPF backbone area).
Fast Convergence Using BGP Prefix Independent Convergence
For BGP prefixes, fast convergence is achieved using BGP PIC, in which BGP calculates an alternate
best path and primary best path and installs both paths in the routing table as primary and backup paths.
This functionality is similar to rLFA FRR, which is described in the preceding section. If the BGP
next-hop remote provider edge becomes unreachable, BGP immediately switches to the alternate path
using BGP PIC instead of recalculating the path after the failure. If the BGP next-hop remote provider
edge is alive but there is a path failure, IGP rLFA FRR handles fast convergence to the alternate path
and BGP updates the IGP next-hop for the remote provider edge.
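The primary and backup paths that PIC installs can be inspected with standard IOS XR show commands; the prefix below is illustrative (a provider edge loopback such as 100.111.7.8 from Table 2-5):

```
RP/0/RSP0/CPU0:PE# show bgp ipv4 labeled-unicast 100.111.7.8/32
RP/0/RSP0/CPU0:PE# show cef 100.111.7.8/32 detail
```

The BGP output should list the additional path selected by the add-path-to-ibgp policy, and the CEF entry should show it installed as a backup path.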
Table 2-5 details the provider edge and ABR configuration.
Table 2-5
Provider Edge Transport Configuration
Provider Edge Transport Configuration
Description
router isis agg-acc
Enters Router IS-IS configuration for provider edge.
net 49.0100.1001.1100.7008.00
Defines NET address.
is-type level-1
Defines is-type as Level-1 for the provider edge in
aggregation domain.
address-family ipv4 unicast
Enters IPv4 address-family for IS-IS.
metric-style wide
Metric style Wide generates new-style TLV with wider
metric fields for IPv4.
!
interface Loopback0
Configures IS-IS for Loopback interface.
Makes loopback passive to avoid sending unnecessary
hellos on it.
passive
point-to-point
address-family ipv4 unicast
Enters IPv4 Address-family for Loopback.
!
interface TenGigE0/2/0/0
Configures IS-IS for TenGigE0/2/0/0 interface.
bfd minimum-interval 15
Configures Minimum Interval between sending BFD hello
packets to the neighbor.
bfd multiplier 3
Configures BFD multiplier.
bfd fast-detect ipv4
Enables BFD to detect failures in the path between adjacent
forwarding engines.
point-to-point
Configures point-to-point IS-IS interface.
address-family ipv4 unicast
Enters the IPv4 address-family for TenGig interface.
fast-reroute per-prefix level 2
Enables per prefix FRR for Level 2 prefixes.
fast-reroute per-prefix remote-lfa tunnel mpls-ldp
Configures an FRR path that redirects traffic to a remote
LFA tunnel.
metric 10
Configures IS-IS metric for Interface.
mpls ldp sync
Enables mpls LDP sync to ensure LDP comes up on link
before Link is used for forwarding to avoid packet loss.
!
!
router bgp 101
Enters Router BGP configuration mode.
!
address-family ipv4 unicast
Enters IPv4 address-family.
additional-paths receive
Configures receive capability of multiple paths for a prefix
to the capable peers.
additional-paths send
Configures send capability of multiple paths for a prefix to
the capable peers.
additional-paths selection route-policy
add-path-to-ibgp
Enables BGP PIC functionality with appropriate
route-policy to calculate back up paths.
!
session-group intra-as
Configures session-group to define parameters that are
address-family independent.
remote-as 101
Specifies remote-as as AS number of Route-Reflector.
update-source Loopback0
Specifies Update-source as Loopback0 for BGP
communication
!
neighbor-group ABR
Enters neighbor-group configuration mode.
use session-group intra-as
Importing Session-group AF independent parameters.
address-family ipv4 labeled-unicast
Enables Labeled BGP address-family for neighbor group.
!
neighbor 100.111.3.1
use neighbor-group ABR
Configured ABR loopback as neighbor.
Inheriting neighbor-group ABR parameters.
!
!
route-policy add-path-to-ibgp
set path-selection backup 1 install
Configures route-policy used in BGP PIC.
Configured to install 1 backup path.
end-policy
mpls ldp
Enters MPLS LDP configuration mode.
log
neighbor
graceful-restart
router-id 100.111.7.8
Configures router-id for LDP.
interface TenGigE0/2/0/0
Enables LDP on TenGigE0/2/0/0.
!
Table 2-6 details the ABR transport configuration.
Table 2-6
ABR Transport Configuration
ABR Transport Configuration
Description
router isis agg-acc
Enters router IS-IS configuration for the ABR.
net 49.0100.1001.1100.3001.00
Defines network address.
address-family ipv4 unicast
Enters IPv4 address-family for IS-IS.
metric-style wide
Metric style wide generates new-style TLV with wider
metric fields for IPv4.
!
interface Loopback0
Configures IS-IS for loopback interface.
Makes loopback passive to avoid sending unnecessary
hellos on it.
passive
point-to-point
address-family ipv4 unicast
Enters IPv4 address-family for loopback.
!
interface TenGigE0/2/0/0
Configures IS-IS for TenGigE0/2/0/0 interface.
circuit-type level-1
Configured Aggregation facing interface as IS-IS Level-1
interface.
bfd minimum-interval 15
Configures Minimum Interval between sending BFD
"hello" packets to the neighbor.
bfd multiplier 3
Configures BFD multiplier.
bfd fast-detect ipv4
Enables BFD to detect failures in the path between adjacent
forwarding engines.
point-to-point
Configures point-to-point IS-IS interface.
address-family ipv4 unicast
fast-reroute per-prefix level 2
Enables per prefix FRR for Level-2 prefixes.
fast-reroute per-prefix remote-lfa tunnel mpls-ldp
Configures an FRR path that redirects traffic to a remote
LFA tunnel.
metric 10
Configures IS-IS metric for Interface.
mpls ldp sync
Enables MPLS LDP sync to ensure LDP comes up on link
before link is used for forwarding to avoid packet loss.
!
!
interface TenGigE0/2/0/1
Configures IS-IS for TenGigE0/2/0/1 interface.
circuit-type level-2-only
Configured CORE facing interface as IS-IS Level-2
interface.
bfd minimum-interval 15
Configures Minimum Interval between sending BFD
"hello" packets to the neighbor.
bfd multiplier 3
Configures BFD multiplier.
bfd fast-detect ipv4
Enables BFD to detect failures in the path between adjacent
forwarding engines.
point-to-point
Configures point-to-point IS-IS interface.
address-family ipv4 unicast
fast-reroute per-prefix level 2
Enables per prefix FRR for Level-2 prefixes.
fast-reroute per-prefix remote-lfa tunnel mpls-ldp
Configures an FRR path that redirects traffic to a remote
LFA tunnel.
metric 10
Configures IS-IS metric for Interface.
mpls ldp sync
Enables mpls LDP sync to ensure LDP comes up on link
before Link is used for forwarding to avoid packet loss.
!
!
router bgp 101
Enters Router BGP configuration mode.
!
address-family ipv4 unicast
Enters IPv4 address-family.
additional-paths receive
Configures receive capability of multiple paths for a prefix
to the capable peers.
additional-paths send
Configures send capability of multiple paths for a prefix to
the capable peers.
additional-paths selection route-policy
add-path-to-ibgp
Enables BGP PIC functionality with appropriate
route-policy to calculate back up paths.
!
session-group intra-as
Configures session-group to define parameters that are
address-family-independent.
remote-as 101
Specifies remote-as as number of route-reflector.
update-source Loopback0
Specifies update-source as Loopback0 for BGP
communication.
!
neighbor-group PE
Enters neighbor-group provider edge configuration mode.
use session-group intra-as
Importing session-group address-family-independent
parameters.
address-family ipv4 labeled-unicast
Enables labeled BGP address-family for neighbor-group.
route-reflector-client
Configured peer-group for provider edge as route-reflector
client.
next-hop-self
Sets next-hop-self for advertised prefixes to provider edge.
!
neighbor-group CORE
Enters neighbor-group CORE configuration mode.
use session-group intra-as
Importing session-group address-family-independent
parameters.
address-family ipv4 labeled-unicast
Enables labeled BGP address-family for neighbor-group.
next-hop-self
Sets next-hop-self for advertised prefixes to CORE
route-reflector.
!
neighbor 100.111.7.8
use neighbor-group PE
Configured provider edge loopback as neighbor.
Inheriting neighbor-group provider edge parameters.
!
neighbor 100.111.11.3
use neighbor-group CORE
Configured CORE route-reflector loopback as neighbor.
Inheriting neighbor-group CORE parameters.
!
!
route-policy add-path-to-ibgp
set path-selection backup 1 install
Configures route-policy used in BGP PIC.
Configured to install 1 backup path.
end-policy
mpls ldp
Enters MPLS LDP configuration mode.
log
neighbor
graceful-restart
router-id 100.111.3.1
Configures router-id for LDP.
interface TenGigE0/2/0/0
Enables LDP on TenGigE0/2/0/0.
interface TenGigE0/2/0/1
Enables LDP on TenGigE0/2/0/1.
!
!
Route Reflector Operation and Configuration
Route reflectors addresses the scalability and overhead issues of requiring full mesh of IBGP sessions
because of the IBGP split-horizon rule. When a device is assigned as a route reflector, and provider edge
devices are assigned as its clients, the split horizon rule is relaxed on the route reflectors, enabling the
route reflectors to advertise the prefixes received from one client provider edge to another client provider
edge. Provider edges must maintain IBGP sessions with the route reflectors only to send and receive
updates. The route reflector reflects updates received from one provider edge to other provider edges in
the network, eliminating the requirement for IBGP full mesh.
By default, a route reflector does not change next-hop or any other prefix attributes. Prefixes received
by provider edges still have remote provider edges as next-hop, not the route reflectors, so provider
edges can send traffic directly to remote provider edges. This eliminates the requirement to have the
route reflectors in the data path and route reflectors can only be used for route reflectors function.
Route Reflector Configuration
This section describes ASR 1000 route reflectors configuration, which includes configuring a
peer-group for router BGP. Provider edges having the same update policies (such as update-group,
remote-as) can be grouped into the same peer group, which simplifies peer configuration and enables
more efficient updating. The peer-group is made a route reflectors client so that the route reflectors can
reflect routes received from a client provider edge to other client provider edges.
Table 2-7 details the CORE route reflectors transport configuration.
Table 2-7
CORE Route Reflectors Transport Configuration
CORE Route Reflectors Transport Configuration
Description
router isis agg-acc
Enters router IS-IS configuration for the route reflector.
net 49.0100.1001.1100.1103.00
Defines network address.
address-family ipv4 unicast
Enters IPv4 address-family for IS-IS.
metric-style wide
Metric style wide generates new-style TLV with wider metric
fields for IPv4.
!
interface Loopback0
passive
Configures IS-IS for Loopback interface.
Makes loop-back passive to avoid sending unnecessary
"hello" packets on it.
point-to-point
address-family ipv4 unicast
Enters IPv4 address-family for loop-back.
!
interface TenGigE0/2/0/0
Configures IS-IS for TenGigE0/2/0/0 interface.
circuit-type level-2-only
Configured CORE interface as IS-IS Level-2 interface.
bfd minimum-interval 15
Configures Minimum Interval between sending BFD hello
packets to the neighbor.
bfd multiplier 3
Configures BFD multiplier.
bfd fast-detect ipv4
Enables BFD to detect failures in the path between adjacent
forwarding engines.
point-to-point
Configures point-to-point IS-IS interface.
address-family ipv4 unicast
fast-reroute per-prefix level 2
Enables per prefix FRR for Level-2 prefixes.
fast-reroute per-prefix remote-lfa tunnel mpls-ldp
Configures an FRR path that redirects traffic to a remote LFA
tunnel.
metric 10
Configures IS-IS metric for Interface.
mpls ldp sync
Enables MPLS LDP sync to ensure LDP comes up on link
before Link is used for forwarding to avoid packet loss.
!
!
router bgp 101
Enters Router BGP configuration mode.
!
address-family ipv4 unicast
Enters IPv4 address-family.
additional-paths receive
Configures receive capability of multiple paths for a prefix to
the capable peers.
additional-paths send
Configures send capability of multiple paths for a prefix to the
capable peers.
additional-paths selection route-policy
add-path-to-ibgp
Enables BGP PIC functionality with appropriate route-policy
to calculate back up paths.
!
session-group intra-as
Configures session-group to define parameters that are
address-family independent.
remote-as 101
Specifies remote-as as AS number of route-reflector.
update-source Loopback0
Specifies update-source as Loopback0 for BGP
communication.
!
!
neighbor-group ABR
Enters neighbor-group ABR configuration mode.
use session-group intra-as
Importing session-group address-family-independent
parameters.
address-family ipv4 labeled-unicast
Enables Labeled BGP address-family for neighbor group.
route-reflector-client
Configures peer-group for ABR as route-reflector client.
!
neighbor 100.111.11.3
use neighbor-group ABR
Configured ABR loopback as neighbor.
Inheriting neighbor-group ABR parameters.
!
!
mpls ldp
Enters MPLS LDP configuration mode.
log
neighbor
graceful-restart
router-id 100.111.2.1
Configures router-id for LDP.
interface TenGigE0/2/0/0
Enables LDP on TenGigE0/2/0/0.
!
The previous section describes how to implement a hierarchical transport network using labeled BGP as a scalable solution for a large-scale network, with fast failure detection and fast convergence mechanisms. This solution avoids unnecessary resource usage, simplifies network implementation, and achieves faster convergence for large networks.
Transport configuration, including rLFA, transport QoS, and provider configuration, remains the same in concept and configuration as described in Small Scale Network Design and Implementation, page 2-1.
Chapter 3
Enterprise L2VPN Services Design
This chapter describes how to implement enterprise L2VPN services over the MPLS-based transport infrastructure described in the previous chapter. Enterprise L2VPN services provide end-to-end L2 connectivity between two or more locations of an enterprise. The UNI interface connecting provider edge and customer edge devices is called an Attachment Circuit (AC) and can be a physical or virtual port. In a virtualized L2 network, the ingress provider edge router receives L2 Ethernet frames from the branch or campus router on the attachment circuit and encapsulates them with MPLS labels before sending them to the remote provider edge(s). The remote provider edge(s) in turn remove the labels, extract the original L2 frames, and forward them to the destination interface. This type of L2 connectivity across the MPLS domain can be point-to-point or multipoint and can be achieved as described below.
Pseudo-wire
A pseudo-wire provides a point-to-point connection between two enterprise locations and emulates a wire carrying L2 frames over the underlying core MPLS network. A pseudo-wire is instantiated on the provider edge devices and the attachment circuits are attached to it. Whenever a pseudo-wire is configured between a pair of provider edge routers, a targeted LDP session is established between them, over which the provider edge routers exchange virtual circuit (VC) labels. When an MPLS packet is received from the core, the egress provider edge router uses this VC label to identify the pseudo-wire and forward the frame to the corresponding AC. When an ingress provider edge router receives L2 Ethernet frames on the AC connecting to a branch, campus, or data center router, it encapsulates them with two labels: the bottom label is the VC label, and the top label, called the transport label, belongs to the remote provider edge's loopback interface. When the egress provider edge router receives the MPLS packets, it checks the pseudo-wire VC label, removes the MPLS header, and forwards the original L2 frames to the corresponding AC connecting to a branch, campus, or data center router. The customer edge routers see each other as CDP neighbors and can run an IGP adjacency between them. The pseudo-wire can be implemented as described below (Figure 3-1).
Figure 3-1 Pseudowire
[Figure: CPE (branch/campus) routers attach to ASR 9000 provider edges; the pseudo-wire runs between the two provider edges across the MPLS transport network.]
Table 3-1 details the provider edge router pseudo-wire configuration.
Table 3-1
Provider Edge Pseudo-Wire Configuration
Provider Edge Pseudo-Wire Configuration
Description
interface GigabitEthernet100/0/0/40.100 l2transport
L2 Customer Attachment Circuit.
encapsulation default
!
l2vpn
Enters L2VPN configuration mode.
xconnect group PW
Enters the name of the cross-connect group.
p2p PW
Enters a name for the point-to-point cross-connect.
interface GigabitEthernet100/0/0/40.100
Specifies the attachment circuit.
neighbor ipv4 100.111.3.1 pw-id 100
Configures PW neighbor and VC id.
Similar configuration can be done on the remote provider edge with the corresponding neighbor address.
Note
The VC id is the pseudo-wire identity and is unique per pseudo-wire on a provider edge. It must be the same on both provider edges.
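Pseudo-wire state, including the targeted LDP session and the exchanged VC labels, can be verified with standard IOS XR show commands (the group name and neighbor address match the configuration in Table 3-1):

```
RP/0/RSP0/CPU0:PE# show l2vpn xconnect group PW detail
RP/0/RSP0/CPU0:PE# show mpls ldp neighbor 100.111.3.1 detail
```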
A pseudo-wire provides a point-to-point connection between two enterprise locations. To achieve multipoint connectivity among multiple enterprise locations, VPLS and PBB-EVPN are deployed as described below.
Virtual Private LAN Service (VPLS)
VPLS is a multipoint L2VPN technology that connects two or more enterprise locations in a single LAN-like bridge domain over the MPLS transport infrastructure. Multiple enterprise locations in a VPLS domain can communicate with each other over the VPLS core. This is achieved by using a virtual forwarding instance (VFI) and attaching the local ACs and a full mesh of pseudo-wires between provider edges to the VFI, as described below (Figure 3-2).
Figure 3-2 VPLS
[Figure: three CPE (branch/campus) routers attach to three ASR 9000 provider edges; the provider edges are interconnected by a VPLS core built from a full mesh of EoMPLS pseudo-wires.]
Virtual Forwarding Instance (VFI)
A VFI is created on the provider edge router for each L2VPN or VPLS instance and acts as a virtual bridge for that instance. Provider edge routers establish a full mesh of pseudo-wires, exchanging VC labels (using targeted LDP sessions) with all the other provider edge routers in the same VPLS instance, and attach these pseudo-wires to the VFI. Provider edge routers also connect local ACs in that VPLS instance to the same VFI by adding the VFI instance and the customer attachment circuits to the same bridge domain.
When a frame is received from an attachment circuit, the ingress provider edge router learns its source MAC address and updates the MAC address table of the associated VFI. The provider edge router then performs destination-MAC-based forwarding: frames with unknown unicast, broadcast, or multicast destination MAC addresses are flooded to all the remote provider edges in the same VFI. Flooding is achieved by sending one copy of the packet to each remote provider edge in the same VPLS instance on its corresponding point-to-point pseudo-wire. For each remote provider edge, the ingress provider edge encapsulates the L2 Ethernet frame with two MPLS labels: the bottom label is the MPLS VC label and the top label belongs to the remote provider edge loopback. When a remote provider edge receives the MPLS packet, it checks the VC label to map the packet to the correct VFI, removes the MPLS header, examines the original L2 header, updates the frame's source MAC address in the corresponding VFI MAC address table with the receiving pseudo-wire as the egress interface, and forwards the L2 frame to the attachment circuits. After the initial flooding, all provider edge routers have populated MAC address tables that they use for subsequent forwarding.
When a provider edge receives L2 frames with known unicast destination MAC addresses, it forwards them on the pseudo-wire to the corresponding remote provider edge router, encapsulating each L2 Ethernet frame with the MPLS VC label as the bottom label and the label belonging to the remote provider edge as the top label. The different enterprise locations connect to the same VPLS-based virtual bridge domain and function as if they were in a shared, LAN-like environment.
Note
Under the split-horizon rule, a provider edge never forwards a frame received from a pseudo-wire in a VFI back to the same pseudo-wire or to other pseudo-wires in the same VFI. This prevents loops.
Table 3-2 details the provider edge router VPLS configuration.
Table 3-2
Provider Edge VPLS Configuration
Provider Edge VPLS Configuration
Description
interface TenGigE0/0/0/2.876 l2transport
L2 Customer Attachment Circuit.
encapsulation dot1q 876
!
l2vpn
Enters L2vpn configuration mode.
bridge group L2VPN
Enters configuration mode for the “L2VPN” named bridge group. A bridge
group organizes bridge domains.
bridge-domain VPLS
Enters configuration mode for the “VPLS” named bridge domain.
interface Te0/0/0/2.876
Specifies the attachment circuit.
vfi VPLS
Configures the virtual forwarding interface (VFI) and enters L2VPN bridge group bridge domain VFI configuration mode.
Cisco ASR9000 Enterprise L2VPN for Metro-Ethernet, DC-WAN, WAN-Core, and Government and Public Networks
Implementation Guide
3-3
Chapter 3
Enterprise L2VPN Services Design
Provider Backbone Bridging Ethernet VPN (PBB-EVPN)
Table 3-2
Provider Edge VPLS Configuration (continued)
Provider Edge VPLS Configuration
Description
neighbor 100.111.3.1 pw-id 876
neighbor 100.111.11.1 pw-id 876
neighbor 100.111.11.2 pw-id 876
Specify the IP address of the cross-connect peers. The pseudo-wire-ID can be the same for all peers.
!
Note
The pseudo-wire ID per VFI should be unique on the provider edge and should be the same on all the provider edges in the same VPLS instance.
VPLS provides multipoint connectivity between the Enterprise locations; however, it requires a full mesh of pseudo-wires between the provider edge routers in the same VPLS instance. PBB-EVPN provides an alternate solution that achieves multipoint connectivity between the provider edge devices by using the BGP control plane with the introduction of the “evpn” address-family. See the following section for details.
Provider Backbone Bridging Ethernet VPN (PBB-EVPN)
PBB-EVPN provides multipoint L2VPN connectivity between different enterprise locations.
PBB-EVPN provides a scalable solution by using BGP control plane to establish multipoint connectivity
across MPLS transport instead of using full mesh of pseudo-wires as in case of VPLS.
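The scaling difference is easy to quantify. The short sketch below is a back-of-the-envelope illustration (the function names are invented): a full VPLS mesh needs one pseudo-wire per PE pair, which grows quadratically, while PBB-EVPN with a route reflector needs only one IBGP session per PE.

```python
# Illustrative comparison of control-plane scale: full-mesh VPLS
# pseudo-wires versus per-PE IBGP sessions to a route reflector.

def vpls_full_mesh_pws(n_pes):
    # Each PE pair needs one bidirectional pseudo-wire.
    return n_pes * (n_pes - 1) // 2

def evpn_rr_sessions(n_pes):
    # One IBGP session from each PE to the route reflector.
    return n_pes

for n in (10, 50, 100):
    print(n, vpls_full_mesh_pws(n), evpn_rr_sessions(n))
```

For 100 provider edges, a full mesh requires 4950 pseudo-wires versus 100 route-reflector sessions.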
Provider edge routers in the PBB-EVPN network have unique mac addresses called backbone mac
(B-MAC) and they advertise these B-MACs with their corresponding labels using BGP address-family
"evpn." Provider edge devices populate the B-MAC addresses learned through BGP, as well as their
corresponding labels and the next-hop in their forwarding tables. L2 frames received from the customers
are encapsulated with PBB header. The PBB header's source and destination mac addresses are B-MACs.
These B-MAC addresses correspond to the source and destination provider edge routers, respectively.
MPLS labels corresponding to the destination B-MAC address are imposed and forwarded to the remote
provider edge. Customer L2 information including source mac, destination mac, and VLANs are kept
intact.
Figure 3-3
PBB-EVPN
[Figure: CPE (branch/campus routers) attach to PE (ASR 9000) nodes. Each PE runs a BEB toward the customer and a BCB (B-MAC addresses) toward the PBB-EVPN core.]
Note
Advertising backbone mac addresses (B-MAC) in BGP instead of customer mac addresses (C-MAC)
helps in reducing the number of BGP MAC advertisement routes.
PBB-EVPN has three components: Backbone Edge Bridge (BEB), Backbone Core Bridge (BCB), and Ethernet VPN Instance (EVI). Each of these components is described in detail as follows.
Backbone Edge Bridge (BEB)
Backbone Edge Bridge (BEB) is the bridge domain on the provider edge router toward the customer. Customer edge connecting interfaces on the provider edge routers are part of the BEB. The BEB is connected to the BCB (described below) with a service instance identifier called an ISID. The BEB adds a PBB header to the L2 frames received from the customer; the header includes the source B-MAC (local provider edge backbone mac), the destination B-MAC (destination provider edge B-MAC), and the ISID configured for the BEB. It then forwards the frame to the BCB. BEBs belonging to the same L2VPN network are configured with the same ISID value across all the provider edges. The BEB is responsible for learning the mac addresses of local and remote enterprise users and forwards traffic accordingly, as described below.
•
If an L2 frame is received from a local interface connected to the customer edge, its source mac address is recorded in the mac-address-table with the local interface as next-hop.
•
If a PBB encapsulated L2 frame is received from the PBB core, the corresponding C-MAC to B-MAC entry is updated in the mac-address-table for future forwarding.
•
If the destination of an L2 frame is known, the frame is encapsulated with a PBB header whose destination is the remote provider edge B-MAC, whose source is the local B-MAC, and whose ISID corresponds to the BEB, and is then sent to the BCB for further forwarding.
•
If the destination of an L2 frame is unknown, it is flooded to all remote provider edges as unknown unicast: the destination B-MAC address is derived from the ISID and the frame is forwarded to the BCB. The BCB floods the packet to all remote provider edges in the same VPN by imposing the multicast labels advertised by the remote provider edges.
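The BEB decisions in the bullets above can be sketched as a small state machine. This is a conceptual illustration, not IOS XR code; the `Beb` class and the `flood-isid-…` placeholder for the ISID-derived flood destination are invented for the sketch.

```python
# Conceptual sketch of BEB behavior: learn C-MAC to B-MAC bindings from
# the core, encapsulate known unicast toward one remote B-MAC, and
# flood unknowns with a destination derived from the I-SID.

class Beb:
    def __init__(self, local_bmac, isid):
        self.local_bmac = local_bmac
        self.isid = isid
        self.cmac_table = {}   # customer MAC -> remote B-MAC (or "local")

    def learn_from_core(self, cmac, src_bmac):
        # A PBB frame arrived from the core: remember which PE owns cmac.
        self.cmac_table[cmac] = src_bmac

    def forward(self, src_cmac, dst_cmac):
        # Learn the local customer MAC.
        self.cmac_table[src_cmac] = "local"
        if dst_cmac in self.cmac_table and self.cmac_table[dst_cmac] != "local":
            # Known remote destination: unicast PBB encapsulation.
            return {"src_bmac": self.local_bmac,
                    "dst_bmac": self.cmac_table[dst_cmac],
                    "isid": self.isid}
        # Unknown destination: flood; dest B-MAC derived from the I-SID.
        return {"src_bmac": self.local_bmac,
                "dst_bmac": f"flood-isid-{self.isid}",
                "isid": self.isid}

beb = Beb("bmac-pe1", 602)
print(beb.forward("c1", "c2")["dst_bmac"])   # flood-isid-602
beb.learn_from_core("c2", "bmac-pe2")
print(beb.forward("c1", "c2")["dst_bmac"])   # bmac-pe2
```

Once the C-MAC has been learned against a remote B-MAC, subsequent frames are unicast instead of flooded.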
Backbone Core Bridge (BCB)
Backbone Core Bridge is the bridge domain responsible for populating the BGP-learnt B-MAC addresses and maintaining the mapping of each B-MAC to its label and BGP next-hop in the forwarding table. It is also responsible for forwarding PBB encapsulated packets received from the local BEB to the MPLS network, and from the MPLS network to the local BEB. When a packet is received from the MPLS network, the BEB is identified by checking the ISID value in the PBB header for further forwarding of the packet.
Ethernet VPN Instance (EVI)
E-VPN Instance (EVI) identifies an Ethernet VPN in the MPLS network. There can be only one EVI per core bridge. Just like an L3VPN VRF, an EVI is configured with RD and RT values that are attached to the B-MAC addresses when they are advertised to the BGP neighbors. The provider edge routers import only the B-MAC routes that carry the required route-target values. PBB-EVPN also introduces the “evpn” address-family to BGP.
When a provider edge receives customer traffic destined to a remote enterprise location that must be sent across the PBB core, the BEB adds a PBB header to the customer traffic. This PBB header includes the BEB’s ISID value and the source and destination B-MAC addresses, and the frame is forwarded to the core-bridge. The core-bridge adds the transport VLAN, if any, and imposes the labels for the destination B-MAC and the BGP next-hop respectively. Label swapping happens in the MPLS network based on the top label for the BGP next-hop. Once the packet reaches the remote provider edge, the B-MAC label identifies the core bridge domain. The core bridge checks the ISID value in the PBB header to forward traffic to the correct local BEB where the destination customer edge interface is connected.
Table 3-3 details the provider edge PBB-EVPN configuration.
Table 3-3
Provider Edge PBB-EVPN Configuration
Provider Edge PBB-EVPN Configuration
Description
interface GigabitEthernet0/0/1/7.605 l2transport
L2 Customer Attachment Circuit.
encapsulation dot1q 605
Specifies specific customer VLAN.
!
l2vpn
Enters L2vpn configuration mode.
bridge group PBB_EVPN
Enters configuration mode for the “PBB_EVPN” named bridge group. A bridge group organizes bridge domains.
bridge-domain CORE
Enters configuration mode for the “CORE” named bridge domain.
pbb core
Configures the bridge domain as PBB core (BCB).
evpn evi 602
Configures the Ethernet VPN ID.
!
!
bridge-domain EDGE
Enters configuration mode for the “EDGE” named bridge domain.
interface GigabitEthernet0/0/1/7.605
Specifies the attachment circuit.
!
pbb edge i-sid 602 core-bridge CORE
Configures the bridge domain as PBB edge (BEB) with the ISID and the assigned core bridge domain.
!
evpn
Enters EVPN configuration mode.
evi 602
Configures Ethernet VPN ID.
bgp
Enters EVPN BGP configuration mode.
rd 101:500
Configures RD for the EVI.
route-target import 601:601
Import RT to import B-MACs with RT 601:601.
route-target export 601:601
Export local B-MACs with export RT 601:601.
!
!
router bgp 1000
Enters router BGP configuration mode.
address-family l2vpn evpn
Enables EVPN address family under BGP routing process and enters EVPN address family configuration sub-mode.
retain route-target all
Retains all routes for all route targets.
!
session-group intra-as
Configures session-group to define parameters that are
address-family independent.
Table 3-3
Provider Edge PBB-EVPN Configuration (continued)
Provider Edge PBB-EVPN Configuration
Description
remote-as 1000
Specifies remote-as as AS number of route-reflector.
update-source Loopback0
Specifies update-source as Loopback0 for BGP communication.
!
neighbor-group cn-rr
Enters neighbor-group configuration mode.
use session-group intra-as
Imports the session-group's address-family independent parameters.
address-family l2vpn evpn
Enables EVPN BGP address-family for neighbor group.
!
!
neighbor 100.111.15.50
Configures the route reflector’s loopback as neighbor.
use neighbor-group cn-rr
Inherits neighbor-group cn-rr parameters.
As discussed in the previous chapter, “Enterprise L2 Transport Design,” route-reflectors are used to advertise the transport prefixes in order to avoid a full mesh of IBGP neighborships and relax the BGP split-horizon rule. Similarly, for PBB-EVPN, route-reflectors are deployed instead of running a full mesh of IBGP sessions to distribute B-MAC addresses.
Table 3-4 details the PBB-EVPN route-reflector configuration.
Table 3-4
PBB-EVPN Route-Reflector Configuration
PBB-EVPN Route-Reflector Configuration Description
router bgp 1000
Enters Router BGP configuration mode.
address-family l2vpn evpn
Enables EVPN address family under BGP routing process and enters EVPN address family configuration sub-mode.
retain route-target all
Retains all routes for all RTs.
!
neighbor-group pan
Enters neighbor-group configuration mode.
address-family l2vpn evpn
Enables EVPN BGP address-family for neighbor group.
route-reflector-client
Configures neighbor-group as route-reflector client.
!
neighbor 100.111.5.7
Configures provider edge loopback as neighbor.
use neighbor-group pan
Inherits neighbor-group pan parameters.
We have discussed various methods to achieve point-to-point and multipoint connectivity for L2 networks over a common MPLS transport infrastructure. The next step is to define Metro Ethernet Forum (MEF) services using these methods, as described in the following sections.
E-Line (EPL & EVPL)
E-Line service provides point-to-point EVC between two branch, campus, or Data Center sites. A
pseudo-wire is established between a pair of provider edge routers to achieve point-to-point connectivity
between the two enterprise locations. E-Line service can be port-based (Ethernet private line or EPL) or
VLAN-based (Ethernet virtual private line or EVPL).
•
For EPL service, all traffic received from the customer edge connecting interface is carried over a
single pseudo-wire between a pair of provider edges. There is one-to-one mapping of physical port
and pseudo-wire connecting to remote provider edge.
•
For EVPL service, traffic received from an individual VLAN on a customer edge connecting interface is carried on its own separate pseudo-wire. One physical port can be shared among multiple VLANs, each connecting over its respective pseudo-wire to a different remote provider edge. Thus multiple EVPL services can be implemented on the same UNI port.
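As a hedged configuration sketch of that last point (the second subinterface, VLAN 220, and neighbor 100.111.14.5 are illustrative additions, not taken from the validated topology), two EVPL services can share one UNI port by giving each customer VLAN its own subinterface and point-to-point cross-connect:

```
interface TenGigE0/1/1/2.210 l2transport
 encapsulation dot1q 210
!
interface TenGigE0/1/1/2.220 l2transport
 encapsulation dot1q 220
!
l2vpn
 xconnect group EVPL
  p2p EVPL-210
   interface TenGigE0/1/1/2.210
   neighbor ipv4 100.111.14.4 pw-id 210
  !
  p2p EVPL-220
   interface TenGigE0/1/1/2.220
   neighbor ipv4 100.111.14.5 pw-id 220
```

Each VLAN rides its own pseudo-wire, so the two services can terminate on different remote provider edges.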
Table 3-5 and Table 3-6 detail EPL and EVPL configurations.
Table 3-5
EPL Configuration
Provider edge EPL Configuration
Description
interface TenGigE0/1/1/2.210 l2transport L2 Customer Attachment Circuit.
encapsulation default
Matching all customer traffic.
!
l2vpn
Enters L2vpn configuration mode.
xconnect group EPL
Enters the name of the cross-connect group as “EPL”.
p2p EPL
Enters a name for the point-to-point cross-connect as “EPL”.
interface TenGigE0/1/1/2.210
Specifies the Attachment circuit.
neighbor ipv4 100.111.14.4 pw-id 210
Configures pseudo-wire neighbor and VC id.
!
Note
Similar configuration can be done on the remote provider edge with the corresponding neighbor address.
Table 3-6
EVPL Configuration
Provider edge EVPL Configuration
Description
interface TenGigE0/1/1/2.210 l2transport L2 Customer Attachment Circuit
encapsulation dot1q 210
Matching specific VLAN traffic
!
l2vpn
Enters L2vpn configuration mode
xconnect group EVPLAN
Enters the name of the cross-connect group as “EVPLAN”
p2p EVPLAN
Enters a name for the point-to-point cross-connect as “EVPLAN”
interface TenGigE0/1/1/2.210
Specifies the Attachment circuit
neighbor ipv4 100.111.14.4 pw-id 210
Configures pseudo-wire neighbor and VC id
!
Ethernet Private LAN (EP-LAN) and Ethernet Virtual Private LAN
(EVP-LAN)
Ethernet LAN (ELAN) service provides multipoint connectivity between two or more enterprise
locations. Data received from one AC can be sent to one (for known unicast destinations) or more (for
unknown unicast, multicast, or broadcast destinations) remote locations based on the destination mac
address. Using either VPLS or PBB-EVPN core technologies, you can achieve multipoint ELAN connectivity between Enterprise locations over an MPLS network. Similar to E-Line, ELAN can also be implemented as either port-based (Ethernet Private LAN or EP-LAN) or VLAN-based (Ethernet Virtual Private LAN or EVP-LAN).
•
For EP-LAN service, all traffic received from a customer edge connecting interface is part of the same L2 network and is sent to one or more remote enterprise locations across the same ELAN-based L2 network depending on the destination mac address. There is a one-to-one mapping of physical port and VPLS instance connecting to the remote provider edge.
•
For EVP-LAN service, one or multiple VLANs on a customer edge connecting interface are part of the same L2 network. Different VLANs on the same customer edge interface can be part of different L2 networks. Traffic received on a VLAN from a customer edge connecting interface is sent to one or more remote enterprise locations across its corresponding ELAN-based L2 network depending on the destination mac address. One physical port can be shared among multiple VLANs that in turn connect to their respective EVP-LAN-based L2 networks.
Implementing EP-LAN and EVP-LAN service with a VPLS core includes configuring a VFI instance, attaching the full mesh of pseudo-wires to remote provider edges to the VFI, and adding the customer attachment circuits and the VFI to the same bridge domain.
Table 3-7 and Table 3-8 detail EP-LAN and EVP-LAN implementation with a VPLS core.
Table 3-7
Provider Edge EP-LAN Configuration with VPLS Core
Provider Edge EP-LAN Configuration with VPLS Core Description
interface GigabitEthernet100/0/0/40.100 l2transport
L2 Customer Attachment Circuit.
encapsulation default
Matching all traffic.
!
l2vpn
Enters L2vpn configuration mode
bridge group EPLAN
Enters configuration mode for the “EPLAN” named bridge group. A bridge group organizes bridge domains.
bridge-domain EPLAN
Enters configuration mode for the “EPLAN” named bridge domain.
interface GigabitEthernet100/0/0/40.100
Specifies the attachment circuit.
vfi EPLAN
Configures the virtual forwarding interface (VFI) name “EPLAN”
neighbor 100.111.5.4 pw-id 876
neighbor 100.111.11.1 pw-id 876
neighbor 100.111.11.2 pw-id 876
Specify the IP address of the cross-connect peers. The pseudo-wire-ID can be the same for all peers.
Table 3-8
Provider Edge EVP-LAN Configuration with VPLS Core
Provider Edge EVP-LAN Configuration with VPLS Core Description
interface TenGigE0/0/0/2.876 l2transport
L2 Customer Attachment Circuit.
encapsulation dot1q 876
Matching specific VLAN traffic.
!
l2vpn
Enters L2VPN configuration mode.
bridge group L2VPN
Enters configuration mode for the “L2VPN” named bridge group. A bridge group organizes bridge domains.
bridge-domain EVPLAN
Enters configuration mode for the “EVPLAN” named bridge domain.
interface Te0/0/0/2.876
Specifies the attachment circuit.
vfi EVPLAN
Configures the virtual forwarding interface (VFI) name “EVPLAN”
neighbor 100.111.5.4 pw-id 876
neighbor 100.111.11.1 pw-id 876
neighbor 100.111.11.2 pw-id 876
Specify the IP address of the cross-connect peers. The pseudo-wire-ID can be the same for all peers.
Implementing EP-LAN and EVP-LAN service with a PBB-EVPN core includes configuring the BEB and BCB bridge domains, attaching the customer attachment circuit to the BEB bridge domain, and configuring the EVI with the corresponding RD and RT values for B-MAC advertisement in BGP.
Table 3-9 details EP-LAN and EVP-LAN with PBB-EVPN core implementation.
Table 3-9
EP-LAN and EVP-LAN with PBB-EVPN Core Implementation
Provider edge EP-LAN Configuration with PBB-EVPN Core Description
interface Te0/3/0/0.500 l2transport
L2 Customer Attachment Circuit.
encapsulation default
Matching all customer traffic.
!
l2vpn
Enters L2VPN configuration mode.
bridge group PBB-EVPN
Enters configuration mode for the “PBB-EVPN” named bridge group. A bridge group organizes bridge domains.
bridge-domain CORE
Enters configuration mode for the “CORE” named bridge domain.
pbb core
Configures the bridge domain as PBB core (BCB).
evpn evi 500
Configures the EVPN ID.
!
!
bridge-domain EDGE
Enters configuration mode for the “EDGE” named bridge domain.
interface Te0/3/0/0.500
Specifies the attachment circuit.
!
pbb edge i-sid 500 core-bridge CORE
Configures the bridge domain as PBB edge (BEB) with the ISID and the assigned core bridge domain.
Table 3-9
EP-LAN and EVP-LAN with PBB-EVPN Core Implementation (continued)
Provider edge EP-LAN Configuration with PBB-EVPN Core Description
!
evpn
Enters EVPN configuration mode.
evi 500
Configures EVPN ID.
bgp
Enters EVPN BGP configuration mode.
rd 101:500
Configures RD for the EVI.
route-target import 101:500
Import RT to import B-MACs with RT 101:500.
route-target export 101:500
Export local B-MACs with export RT 101:500.
!
router bgp 101
Enters Router BGP configuration mode.
address-family l2vpn evpn
Enables EVPN address family under BGP routing process and enters EVPN address family configuration sub-mode.
retain route-target all
Retains all routes for all RTs.
EVP-LAN service implementation has identical configuration; however, only the specific service VLAN is part of the L2VPN network instead of all the VLANs on the UNI connected to the enterprise customer premise equipment (CPE).
interface Te0/3/0/0.500 l2transport
encapsulation dot1q 500
!
E-TREE (EP-TREE/ EVP-TREE)
E-Tree provides tree-based multipoint L2 connectivity such that leaf nodes can only communicate with root nodes and cannot communicate with each other. A network can have one or more root nodes, and root nodes can communicate with each other. PBB-EVPN is particularly well suited for E-Tree service applications because it brings filtering intelligence based on BGP route targets (RTs). With PBB-EVPN, each provider edge node selectively imports only the RTs of interest to achieve the desired type of connectivity, such that:
•
A leaf node imports the root node route target and exports the leaf node route target.
•
A root node imports the leaf node route target and exports the root node route target.
E-Tree can also be implemented as port-based (Ethernet private tree or EP-TREE) or VLAN-based (Ethernet virtual private tree or EVP-TREE).
•
For EP-TREE service, all traffic received from a customer edge connecting interface is part of the same L2 network and is sent to the remote leaf nodes (if the provider edge is a root) or to the root node (if the provider edge is a leaf) across the same EP-TREE based L2 network.
•
For EVP-TREE service, one or multiple VLANs on a customer edge connecting interface are part of the same L2 network. Different VLANs on the same customer edge interface can be part of different L2 networks. Traffic received on a VLAN from a customer edge connecting interface is sent to the remote leaf nodes (if the provider edge is a root) or to the root node (if the provider edge is a leaf) across the corresponding EVP-TREE based L2 network. One physical port can be shared among multiple VLANs that in turn connect to their respective EVP-TREE based L2 networks.
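The RT import/export rules above can be checked with a tiny model. This is an illustrative policy check, not a BGP implementation; the role table and `can_reach` helper are invented, and the RT values mirror the leaf configuration in Table 3-10 (root RT 1000:1000, leaf RT 1001:1001).

```python
# Illustrative check of E-Tree RT filtering: a receiver installs the
# sender's B-MAC route only when the sender's export RT matches an RT
# the receiver imports. Leaves import only the root RT, so leaf B-MACs
# are never installed on other leaves.

ROOT_RT, LEAF_RT = "1000:1000", "1001:1001"

ROLES = {
    "root": {"export": ROOT_RT, "import": LEAF_RT},
    "leaf": {"export": LEAF_RT, "import": ROOT_RT},
}

def can_reach(sender_role, receiver_role):
    # Reachability requires the receiver to import what the sender exports.
    return ROLES[sender_role]["export"] == ROLES[receiver_role]["import"]

assert can_reach("leaf", "root")       # leaf-to-root allowed
assert can_reach("root", "leaf")       # root-to-leaf allowed
assert not can_reach("leaf", "leaf")   # leaf-to-leaf blocked
print("E-Tree RT policy holds")
```

The asymmetry of the import/export pairs is what enforces the tree topology without any data-plane filtering.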
Table 3-10 details leaf EP-TREE and EVP-TREE with PBB-EVPN core implementation.
Table 3-10
Leaf EP-TREE and EVP-TREE with PBB-EVPN Core Implementation
Leaf PE EVP-Tree Configuration with PBB-EVPN Core
Description
interface TenGigE0/2/1/3.310 l2transport
L2 Customer Attachment Circuit.
encapsulation dot1q 310
Matching specific customer VLAN.
evpn
Enters EVPN configuration mode.
evi 600
Configures Ethernet VPN ID.
bgp
Enters EVPN BGP configuration mode.
route-target import 1000:1000
Import Root RT to import B-MACs with RT 1000:1000.
route-target export 1001:1001
Export local B-MACs with leaf RT 1001:1001.
!
!
!
router bgp 101
Enters Router BGP configuration mode.
address-family l2vpn evpn
Enables EVPN address family under BGP routing process and enters EVPN address family configuration sub-mode.
!
l2vpn
Enters L2VPN configuration mode.
bridge group PBB_Etree
Enters configuration mode for the “PBB_Etree” named bridge group. A bridge group organizes bridge domains.
bridge-domain PBB_Etree_core
Enters configuration mode for the “PBB_Etree_core” named bridge domain.
pbb core
Configures the bridge domain as PBB core (BCB).
evpn evi 600
Configures the Ethernet VPN ID.
!
!
bridge-domain PBB_Etree_edge
Enters configuration mode for the “PBB_Etree_edge” named bridge domain.
interface TenGigE0/2/1/3.310
Specifies the attachment circuit.
!
pbb edge i-sid 600 core-bridge PBB_Etree_core
Configures the bridge domain as PBB edge (BEB) with the ISID and the assigned core bridge domain.
Table 3-11 details root PE EVP-Tree configuration with PBB-EVPN Core.
Table 3-11
Root PE EVP-Tree Configuration with PBB-EVPN Core
Root provider edge EVP-Tree Configuration with PBB-EVPN Core Description
interface TenGigE0/1/0/1.310 l2transport
L2 Customer Attachment Circuit.
encapsulation dot1q 310
Matching specific customer VLAN.
!
evpn
Enters EVPN configuration mode.
evi 600
Configures Ethernet VPN ID.
bgp
Enters EVPN BGP configuration mode.
route-target import 1001:1001
Import leaf RT to import B-MACs with RT 1001:1001.
route-target export 1000:1000
Export local B-MACs with root RT 1000:1000.
!
!
router bgp 101
Enters Router BGP configuration mode.
address-family l2vpn evpn
Enables EVPN address family under BGP routing process and enters EVPN address family configuration sub-mode.
!
l2vpn
Enters L2VPN configuration mode.
!
bridge group PBB_Etree
Enters configuration mode for the “PBB_Etree” named bridge group. A bridge group organizes bridge domains.
bridge-domain PBB_Etree_core
Enters configuration mode for the “PBB_Etree_core” named bridge domain.
pbb core
Configures the bridge domain as PBB core (BCB).
evpn evi 600
Configures the Ethernet VPN ID.
!
!
bridge-domain PBB_Etree_edge
Enters configuration mode for the “PBB_Etree_edge” named bridge domain.
interface TenGigE0/1/0/1.310
Specifies the attachment circuit.
!
pbb edge i-sid 600 core-bridge PBB_Etree_core
Configures the bridge domain as PBB edge (BEB) with the ISID and the assigned core bridge domain.
Chapter 4
Provider Edge-Customer Edge Design Options
The domain creating the MPLS L2VPN service, consisting of provider and provider edge routers, remains the same regardless of access technology. The technologies and designs used to connect the provider edge to the customer edge device vary considerably based on technology preference, installed base, and operational expertise.
Common characteristics exist for each of the options. Each design needs to consider the following:
•
Topology implemented, either hub-and-spoke or rings;
•
How redundancy is configured; and,
•
QoS implementation.
Network availability is critical for enterprises in order to prevent revenue loss. To improve network reliability, branch routers, campus routers, and Data Centers are multi-homed to provider edge devices using one of various access topologies to achieve provider edge node redundancy. Each topology needs reliability and resilience to provide seamless connectivity. This chapter describes how to achieve seamless connectivity.
Inter-Chassis Communication Protocol
The provider edge nodes connecting to a dual-homed customer edge work in active/standby mode. The active provider edge forwards traffic while the standby provider edge monitors the active provider edge status. The standby provider edge takes over forwarding if the active provider edge fails. The nodes require a mechanism to communicate local connectivity failure to the customer edge, and a mechanism to detect peer-node failure, in order to move traffic to the standby provider edge. Inter-Chassis Communication Protocol (ICCP) provides the control channel to communicate this information.
ICCP allows active and standby provider edges connecting to a dual-homed CPE to exchange information regarding local link failure toward the CPE, and to detect peer node failure or its core isolation. This critical information helps move forwarding from the active to the standby provider edge within milliseconds. The provider edges can be co-located or geo-redundant. ICCP communication occurs between provider edges either over a dedicated link or across the core network. ICCP configuration includes configuring a redundancy group (RG) on both provider edges with each other's address for ICCP communication. Using this information, the provider edges set up the ICCP control connection, and applications such as Multi-Chassis Link Aggregation Group (MC-LAG) and Network Virtualization (nV), described in the next sections, use this control connection to share state information.
Table 4-1 shows how to configure ICCP.
Table 4-1
ICCP Configuration
ICCP Configuration
Description
redundancy
iccp
group group-id
Adds an ICCP redundancy group with the specified group-id.
member
neighbor neighbor-ip-address
This is the ICCP peer for this redundancy group. Only one neighbor can be configured per redundancy group. The IP address is the LDP router-ID of the neighbor. This configuration is required for ICCP to function.
!
backbone
backbone interface interface-type-id
!
Configures ICCP backbone interfaces to detect isolation from the network core, and triggers switchover to the peer provider edge if the provider edge on which the failure occurs is active. Multiple backbone interfaces can be configured for each redundancy group. When all backbone interfaces are down, this indicates core isolation.
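The core-isolation rule in Table 4-1 reduces to a simple predicate, sketched below as a conceptual model (the function names and the interface-state dictionary are invented for illustration, not an IOS XR API):

```python
# Simplified model of ICCP core-isolation detection: a PE declares
# isolation, and hands the active role to its peer, only when every
# configured backbone interface is down.

def is_core_isolated(backbone_interfaces):
    # backbone_interfaces: dict of interface name -> "up"/"down"
    return all(state == "down" for state in backbone_interfaces.values())

def choose_active(local_isolated, peer_isolated):
    # Switch over only if the local PE is isolated and the peer is not.
    if local_isolated and not peer_isolated:
        return "peer"
    return "local"

links = {"TenGigE0/2/0/0": "up", "TenGigE0/2/0/1": "down"}
print(is_core_isolated(links))    # one uplink remains: not isolated
links["TenGigE0/2/0/0"] = "down"
print(is_core_isolated(links))    # all down: switchover trigger
print(choose_active(True, False)) # peer takes over
```

A single surviving backbone interface is enough to keep the PE out of the isolated state, which is why configuring multiple backbone interfaces per redundancy group avoids false switchovers.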
The next section discusses various access topologies that can be implemented between branch, campus,
or Data Center devices in an Enterprise L2VPN network. Each topology ensures redundancy and fast
failure detection and convergence mechanisms to provide seamless last mile connectivity.
Ethernet Access
The following sections describe how Ethernet access is implemented in hub-and-spoke or ring access.
Hub and Spoke Using MC-LAG Active/Active
In hub-and-spoke access topology, the customer edge device is dual-homed to provider edge devices in the MPLS VPN network. The MC-LAG feature provides an end-to-end inter-chassis redundancy solution for enterprises. MC-LAG involves provider edge devices collaborating through an ICCP connection to act as a single Link Aggregation Group (LAG) from the perspective of the customer edge device, thus providing device-level and link-level redundancy. To achieve this, the provider edge devices use the ICCP connection to coordinate with each other to present a single LACP bundle (spanning the two devices) to the customer edge device. In addition, service multi-homing enables both provider edge nodes to load-share traffic based on VLAN ranges. The provider edge nodes negotiate their active or standby role for a specific VLAN using the ICCP-SM protocol. Negotiation is based on locally-defined priority.
While the two ASR 9000 provider edge nodes share a common bundle interface, the access node uplinks are grouped together on a per-provider-edge-node basis only, or they can be left unbundled when only a single uplink per provider edge exists.
The provider edge nodes enable the L2VPN functionality, mapping the inter-chassis bundle sub-interface to the VFI/edge bridge associated with the core VPLS/PBB-EVPN service.
Once MAC learning has completed, only the active provider edge node for a specific VLAN receives traffic. The L2VPN service is configured on the bundle interface or sub-interface on the provider edge. Provider edge devices coordinate through the ICCP connection to perform a switchover while presenting an unchanged bundle interface to the customer edge for the following failure events:
•
Link failure—A port or link between the customer edge and one of the provider edges fails.
•
Device failure—Meltdown or reload of one of the provider edges, with total loss of connectivity to the customer edge, the core, and the other provider edge.
•
Core isolation—A provider edge loses its connectivity to the core network and therefore is of no
value, being unable to forward traffic to or from the customer edge.
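The per-VLAN active/standby election and the switchover on these failure events can be sketched as follows. This is a conceptual model of ICCP-SM load sharing, not the protocol itself; `elect_active` and the priority table are invented for illustration (lower priority value wins, mirroring typical LACP-style conventions).

```python
# Conceptual sketch of ICCP-SM per-VLAN load sharing: each PE has a
# locally configured priority per VLAN; the lowest-priority surviving
# PE is active, and the peer takes over when a PE fails.

def elect_active(priorities, failed=()):
    # priorities: {vlan: {pe_name: priority}}; lower value wins.
    active = {}
    for vlan, prios in priorities.items():
        candidates = {pe: p for pe, p in prios.items() if pe not in failed}
        active[vlan] = min(candidates, key=candidates.get)
    return active

prios = {100: {"PE-1": 100, "PE-2": 200},
         101: {"PE-1": 200, "PE-2": 100}}
print(elect_active(prios))                    # load-shared across both PEs
print(elect_active(prios, failed=("PE-1",)))  # PE-2 takes over all VLANs
```

In steady state the VLANs are load-shared across both provider edges; after a device failure or core isolation, the surviving PE becomes active for every VLAN.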
Figure 4-1
Hub and Spoke Access with mLACP
[Figure: a CPE (branch/campus router) runs a LAG toward PE-1 and PE-2 (ASR 9000), which present a single MC-LAG bundle (BE222) synchronized over ICCP into the MPLS core; one PE port is active for VLAN green and the other for VLAN blue.]
If connectivity between the provider edges is lost, both devices may assume that the other has failed.
This causes both devices to attempt to take on the active role, resulting in a loop. The customer edge
device can mitigate this situation by limiting the number of active links so that only the links connected
to one provider edge are active at a time. The Hub and Spoke access configuration is described in
Table 4-2, Table 4-3, and Table 4-4.
Note
PE-1 is configured active for VLAN 100 and PE-2 is configured active for VLAN 101.
Table 4-2
Customer Edge Configuration
Customer Edge Configuration
Description
interface GigabitEthernet0/9
Interface connected to local LAN.
switchport trunk allowed vlan 100-101
switchport mode trunk
spanning-tree portfast trunk
load-interval 30
interface GigabitEthernet0/10
Interface connected to PE1.
port-type nni
switchport mode trunk
interface GigabitEthernet0/11
Interface connected to PE2.
port-type nni
switchport mode trunk
Table 4-3
Provider Edge-1 Configuration
Provider Edge-1 Configuration
Description
interface GigabitEthernet0/3/1/12
Configures customer edge connecting interface in bundle1.
description Bundle-Ether1
bundle id 1 mode on
cdp
load-interval 30
transceiver permit pid all
!
interface bundle-ether1
Configures Bundle interface.
!
interface bundle-ether1.100 l2transport
Configures bundle sub-interface with specific VLAN.
encapsulation dot1q 100
!
interface bundle-ether1.101 l2transport
Configures bundle sub-interface with specific VLAN.
encapsulation dot1q 101
!
Adds an ICCP redundancy group 1
redundancy
iccp
group 1
member
neighbor 100.111.3.2
Configures ICCP members as PE-2
!
Configures ICCP backbone interfaces.
backbone
interface Ten0/2/0/0
interface Ten0/2/0/1
!
l2vpn
bridge group L2VPN
bridge-domain CE-EPLAN-100
Bridge domain configuration for C-VLAN 100
!
interface bundle-ether1.100
Adds attachment circuit for VLAN 100 to BD
!
vfi CE-EPLAN-100
Creates VFI instance with VPLS neighbors
neighbor 100.111.3.2 pw-id 100
!
neighbor 100.111.11.1 pw-id 100
!
neighbor 100.111.11.2 pw-id 100
!
!
bridge-domain CE-EPLAN-101
Bridge domain configuration for C-VLAN 101.
interface Bundle-Ether1.101
Adds attachment circuit for VLAN 101 to BD.
!
Creates virtual forwarding instance (VFI) with VPLS neighbors.
vfi CE-EPLAN-101
neighbor 100.111.3.2 pw-id 101
!
neighbor 100.111.11.1 pw-id 101
!
neighbor 100.111.11.2 pw-id 101
!
!
!
Enables L2VPN redundancy mode and enters redundancy configuration
sub-mode. Adds an ICCP redundancy group.
redundancy
iccp group 1
multi-homing node-id 1
Enters the pseudo MLACP node ID and enables the ICCP-based multi-homing
service. The node ID is used for ICCP signaling arbitration.
interface Bundle-Ether1
Specifies the bundle interface
primary vlan 100
Configures the list of VLANs under the bundle port, which default to active
(forwarding) when there are no faults detected.
secondary vlan 101
Configures the list of VLANs under the bundle port, which default to standby
(blocked) when there are no faults detected.
recovery delay 60
Recovery delay timer is started once the core isolation condition has cleared.
When the timer expires, the provider edge can take over as the active provider edge.
Table 4-4
Provider Edge-2 Configuration
Provider Edge-2 Configuration
Description
interface GigabitEthernet0/3/1/12
Configures customer edge connecting interface in bundle1.
description Bundle-Ether1
bundle id 1 mode on
cdp
load-interval 30
transceiver permit pid all
!
interface Bundle-Ether1
Configures Bundle interface.
!
interface Bundle-Ether1.100 l2transport
Configures bundle sub-interface with specific VLAN.
encapsulation dot1q 100
!
interface Bundle-Ether1.101 l2transport
Configures bundle sub-interface with specific VLAN.
encapsulation dot1q 101
!
Adds an ICCP redundancy group 1.
redundancy
iccp
group 1
member
neighbor 100.111.3.1
Configures ICCP members as Provider Edge-1.
!
Configures ICCP backbone interfaces.
backbone
interface TenGigE0/2/0/0
interface TenGigE0/2/0/1
!
!
!
l2vpn
bridge group L2VPN
bridge-domain CE-EPLAN-100
Bridge domain configuration for C-VLAN 100.
!
interface Bundle-Ether1.100
Adds attachment circuit for VLAN 100 to BD.
!
vfi CE-EPLAN-100
Creates VFI instance with VPLS neighbors.
neighbor 100.111.3.1 pw-id 100
!
neighbor 100.111.11.1 pw-id 100
!
neighbor 100.111.11.2 pw-id 100
!
!
!
bridge-domain CE-EPLAN-101
Bridge domain configuration for C-VLAN 101.
!
interface Bundle-Ether1.101
Adds attachment circuit for VLAN 101 to BD.
!
Creates VFI instance with VPLS neighbors.
vfi CE-EPLAN-101
neighbor 100.111.3.1 pw-id 101
!
neighbor 100.111.11.1 pw-id 101
!
neighbor 100.111.11.2 pw-id 101
!
!
!
l2vpn
Enables L2VPN redundancy mode and enters redundancy configuration
sub-mode. Adds an ICCP redundancy group.
redundancy
iccp group 1
multi-homing node-id 2
Enters the pseudo MLACP node ID and enables the ICCP-based multi-homing
service. The node ID is used for ICCP signaling arbitration.
interface Bundle-Ether1
Specifies the bundle interface.
primary vlan 101
Configures the list of VLANs under the bundle port, which default to active
(forwarding) when there are no faults detected.
secondary vlan 100
Configures the list of VLANs under the bundle port, which default to standby
(blocked) when there are no faults detected.
recovery delay 60
Recovery delay timer is started once the core isolation condition has cleared.
When the timer expires, the provider edge can take over as the active provider edge.
!
Note
The model above can be implemented by configuring interfaces Bundle-Ether1.100 and
Bundle-Ether1.101 for point-to-point E-line or multipoint E-LAN/E-TREE service using VPLS or
PBB-EVPN core.
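The resulting MC-LAG state can be verified on the provider edges. The following IOS XR show commands are a suggested starting point for verification; the exact output format varies by release:

```
RP/0/RSP0/CPU0:PE-1# show iccp group 1
! Verifies the ICCP group state, the configured neighbor, and the backbone interfaces.
RP/0/RSP0/CPU0:PE-1# show bundle Bundle-Ether1
! Verifies the bundle state and its member links.
RP/0/RSP0/CPU0:PE-1# show l2vpn bridge-domain bd-name CE-EPLAN-100
! Verifies the bridge domain, attachment circuit, and pseudowire state.
```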
MC-LAG provides inter-chassis redundancy based on the Hub and Spoke provider edge model. For
ring-based topologies, the G.8032 access method is deployed, as described below.
G.8032 Ring Access
In this access topology, provider edges are connected to a G.8032 Ethernet ring formed by connecting
Ethernet access nodes to each other in a ring form. The G.8032 Ethernet ring protection switching
protocol elects a specific link to protect the entire ring from loops. This link, called the Ring
Protection Link (RPL), is normally kept in a disabled state by the protocol to prevent loops. The
device connected to the RPL is called the RPL owner and is responsible for blocking the RPL. Upon a
node or link failure in the ring, the RPL is activated, allowing forwarding to resume over the ring.
G.8032 uses Ring Automatic Protection Switching (R-APS) messages to coordinate the activities of
switching the RPL on and off using a specified VLAN for the APS channel.
The G.8032 protocol also allows superimposing multiple logical rings over the same physical topology
by using different instances. Each instance contains an inclusion list of VLAN IDs and defines different
RPL links. In this guide, we use two G.8032 instances, one for odd-numbered and one for
even-numbered VLANs. The ASR 9000 provider edges also participate in the ring and act as RPL
owners: one provider edge is the RPL owner for the even-numbered VLAN instance and the other for
the odd-numbered VLAN instance, so each provider edge remains in blocking state for one of the two
instances. Load balancing and redundancy are thus achieved through two RPLs, each serving one
instance. Additionally, each instance has one VLAN dedicated to carrying the automatic protection
switching (APS) traffic.
In the G.8032 configuration, the provider edge devices, each configured as RPL owner node for one of
the two instances, are specified with the interface connected to the ring. Two instances are configured
for odd and even VLANs, and each provider edge is configured as RPL owner for one instance to
achieve load balancing and redundancy. Both instances are configured with a dot1q sub-interface for
the respective APS channel communication.
Figure 4-2
Ethernet Access with G.8032 Ring
[Figure: a CPE (branch/campus router) connects to an Ethernet access node on a G.8032 Ethernet ring; the ring terminates on two PEs (ASR 9000) attached to the MPLS network. One PE is blocked for instance 1 (even VLANs) and the other is blocked for instance 2 (odd VLANs).]
Table 4-5 details customer edge configuration.
Table 4-5
Customer Edge Configuration
Customer Edge Configuration
Description
interface GigabitEthernet0/7
Customer edge Interface.
switchport trunk allowed vlan 118-119
Allows VLAN 118 and 119 on the trunk port.
switchport mode trunk
Configures interface as trunk port.
load-interval 30
!
Table 4-6 details E-Access node customer edge-facing interface configuration (UNI).
Table 4-6
Ethernet Access Node Customer Edge-Facing Interface Configuration (UNI)
E-Access Node CE-Facing Interface Configuration (UNI) Description
interface GigabitEthernet0/1
Customer edge connecting interface on Ethernet access node.
switchport trunk allowed vlan none
switchport mode trunk
load-interval 30
Configures EVC for VLAN 118.
service instance 118 ethernet EVC-118
encapsulation dot1q 118
!
Configures EVC for VLAN 119.
service instance 119 ethernet EVC-119
encapsulation dot1q 119
Table 4-7 details Ethernet access node configuration.
Table 4-7
Ethernet Access Node Configuration
Ethernet Access Node Configuration
Description
ethernet ring g8032 profile ring_profile
Configures Ethernet Ring profile.
timer wtr 10
Configures G.8032 WTR timer.
timer guard 100
Configures Guard timer.
!
ethernet ring g8032 ring_test
Configures G.8032 ring named ring_test.
open-ring
Configures the G.8032 ring as an open ring.
exclusion-list vlan-ids 1000
Excludes VLAN 1000.
port0 interface TenGigabitEthernet0/0/0
Specifies TenGigabitEthernet0/0/0 as port0 of the ring.
port1 interface TenGigabitEthernet0/1/0
Specifies TenGigabitEthernet0/1/0 as port1 of the ring.
instance 1
Configures Instance 1.
profile ring_profile
inclusion-list vlan-ids
99,106,108,118,301-302,310-311,1001-2000
Configures instance with ring profile.
Configures VLANs included in Instance 1.
Configures aps channel.
aps-channel
port0 service instance 99
Assigns service instance for APS messages on port0 and Port 1.
port1 service instance 99
!
!
Configures Instance 2.
instance 2
profile ring_profile
Configures instance with ring profile.
rpl port1 next-neighbor
Configures port1 as the next neighbor to the RPL owner.
inclusion-list vlan-ids
107,109,119,199,351,2001-3000
Configures VLANs included in Instance 2.
Configures aps channel.
aps-channel
port0 service instance 199
Assigns service instance for APS messages on port0 and Port 1.
port1 service instance 199
!
!
!
interface TenGigabitEthernet0/0/0
Configures interface connected to ring.
!
service instance 99 ethernet
encapsulation dot1q 99
Configures service instance used for APS messages on G.8032 ring for both
instances.
rewrite ingress tag pop 1 symmetric
bridge-domain 99
!
service instance 199 ethernet
encapsulation dot1q 199
rewrite ingress tag pop 1 symmetric
bridge-domain 199
!
interface TenGigabitEthernet0/1/0
service instance 99 ethernet
encapsulation dot1q 99
Configures interface connected to ring.
Configures service instance used for APS messages on G.8032 ring for both
instances.
rewrite ingress tag pop 1 symmetric
bridge-domain 99
!
service instance 199 ethernet
encapsulation dot1q 199
rewrite ingress tag pop 1 symmetric
bridge-domain 199
!
!
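Ring and instance state on the Ethernet access nodes can be checked with the following IOS show commands; the output format varies by platform and release:

```
Switch# show ethernet ring g8032 brief
! Summarizes each ring instance, its state (Idle/Protection), and the blocked port.
Switch# show ethernet ring g8032 status
! Displays per-instance port states and the APS channel configuration.
```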
Table 4-8 details provider edge configuration.
Table 4-8
Provider Edge Configuration
Provider edge Configuration
Description
interface TenGigE0/3/0/0.118 l2transport
L2 Customer Attachment Circuit.
encapsulation dot1q 118
Matching specific customer VLAN 118.
!
interface TenGigE0/3/0/0.119 l2transport
L2 Customer Attachment Circuit.
encapsulation dot1q 119
Matching specific customer VLAN 119.
!
ethernet ring g8032 profile ring_profile
Configures Ethernet Ring profile
timer wtr 10
Configures G.8032 WTR timer.
timer guard 100
Configures Guard timer.
timer hold-off 0
Configures hold-off timer.
!
Enters L2VPN Configuration mode
l2vpn
Configures bridge group named L2VPN.
bridge group L2VPN
bridge-domain CE-L3VPN-118
Configures bridge domain named CE-L3VPN-118.
interface TenGigE0/3/0/0.118
Enables sub-interface connected to ring towards customer edge under bridge
domain CE-L3VPN-118.
neighbor 100.111.3.2 pw-id 118
Configures pseudo-wire to neighbor provider edge in the same bridge domain.
bridge-domain CE-L3VPN-119
Configures another bridge domain named CE-L3VPN-119.
interface TenGigE0/3/0/0.119
Enables sub-interface connected to the ring towards the customer edge under
bridge domain CE-L3VPN-119.
neighbor 100.111.3.2 pw-id 119
Configures pseudowire to the neighbor provider edge in bridge domain
CE-L3VPN-119.
!
ethernet ring g8032 ring_test
port0 interface TenGigE0/3/0/0
Configures G.8032 ring named ring_test.
Configures port0 for g.8032 ring.
!
Configures port1 as none and the G.8032 ring as an open ring.
port1 none
open-ring
Enters instance 1 configuration.
instance 1
inclusion-list vlan-ids 99,106,108,118,500,64,604,1001-2000
Configures VLANs in the inclusion list of instance 1.
Enters APS channel configuration mode.
aps-channel
port0 interface TenGigE0/3/0/0.99
Configures sub-interface used for APS channel communication.
port1 none
!
!
instance 2
Enter instance 2 configuration.
profile ring_profile
Configures instance with ring profile.
rpl port0 owner
Configures provider edge as RPL owner on port0 for instance 2.
inclusion-list vlan-ids 199,107,109,119,501,2001-3000
Configures VLANs in the inclusion list of instance 2.
Enters aps channel configuration mode
aps-channel
port0 interface TenGigE0/3/0/0.199
Configures sub-interface used for APS channel communication.
port1 none
Note
The model above can be implemented by configuring interfaces TenGigE0/3/0/0.118 and
TenGigE0/3/0/0.119 for point-to-point E-line or multipoint E-LAN/E-TREE service using VPLS or
PBB-EVPN core.
nV Access
The nV Satellite enables a system-wide solution in which one or more remotely-located devices or
“satellites” complement a pair of host provider edge devices to collectively realize a single virtual
switching entity in which the satellites act under the management and control of the host provider edge
devices. Satellites and host provider edges communicate using a Cisco proprietary protocol that offers
discovery and remote management functions, thus turning the satellites from standalone devices into
distributed logical line cards of the host.
The technology allows enterprises to virtualize the access devices on which branch or campus routers
terminate, converting them into nV Satellite devices and managing them through provider edge nodes
that operate as nV hosts. In this way, the access devices transform from standalone devices with
separate management and control planes into low-profile devices that simply move user traffic from a
port connecting a branch or campus router towards a virtual counterpart at the host, where all network
control plane protocols and advanced features are applied. The satellite provides only simple functions
such as local connectivity and limited (and optional) local intelligence that includes ingress QoS,
EOAM, performance measurements, and timing synchronization.
The satellites and the hosts exchange data and control traffic over point-to-point virtual connections
known as Fabric Links. Branch or Campus Ethernet traffic carried over the fabric links is specially
encapsulated using 802.1ah. A per-Satellite-Access-Port derived ISID value is used to map a given
satellite node physical port to its virtual counterpart at the host for traffic flowing in the upstream and
downstream direction. Satellite access ports are mapped as local ports at the host using the following
naming convention:
<port type><Satellite-ID>/<satellite-slot>/<satellite-bay>/<satellite-port>
Where:
• <port type>—is GigabitEthernet for all existing Satellite models.
• <Satellite-ID>—is the satellite number as defined at the Host.
• <satellite-slot>/<satellite-bay>/<satellite-port>—is the access port information as known at the Satellite node.
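As an illustration of this naming convention, assume a satellite defined at the host with Satellite-ID 100 (the satellite ID and port below are hypothetical); its access port 0/0/7 then appears at the host as:

```
interface GigabitEthernet100/0/0/7
! <port type><Satellite-ID>/<satellite-slot>/<satellite-bay>/<satellite-port>
! = GigabitEthernet, satellite 100, slot 0, bay 0, port 7
```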
These satellite virtual interfaces on the Host provider edges are configured with L2VPN service.
The satellite architecture encompasses multiple connectivity models between the host and the satellite
nodes. The guide discusses release support for:
• nV Satellite Simple Rings
• nV Satellite L2 Fabric
In all nV access topologies, host nodes load share traffic on a per-satellite basis. The active/standby role
of a host node for a specific satellite is determined by a locally-defined priority and negotiated between
the hosts via ICCP.
ASR 9000v and ASR 901 are implemented as satellite devices:
• ASR 9000v has four 10 GbE ports that can be used as ICL.
• ASR 901 has two GbE ports that can be used as ICL, and ASR 903 can have up to two 10 GbE ports that can be used as ICL.
nV Satellite Simple Rings
In this topology, satellite access nodes connecting branch or campus are connected in an open ring
topology terminating at the provider edge host devices as shown in Figure 4-3.
Figure 4-3
nV with L1 Fabric access
[Figure: CPEs (branch/campus routers) connect to satellite nodes arranged in a satellite ring that terminates on two PE (ASR 9000) nV hosts; each satellite has a fabric port toward the active nV host and a fabric port toward the standby nV host.]
The provider edge device advertises multicast discovery messages periodically over a dedicated VLAN
over fabric links. Each satellite access device in the ring listens for discovery messages on all its ports
and dynamically detects the Fabric link port toward the host.
The satellite uses this auto-discovered port for the establishment of a management session and for the
exchange of all the upstream and the downstream traffic with each of the hosts (data and control). At the
host, incoming and outgoing traffic is associated to the corresponding satellite node using the satellite
mac address, which was also dynamically learned during the discovery process. Discovery messages are
propagated from one satellite node to another and from either side of the ring so that all nodes can
establish a management session with both hosts. The configuration is described below.
Table 4-9 details nV L1 fabric access configuration.
Table 4-9
nV L1 Fabric Configuration
nV L1 Fabric Configuration
Description
interface TenGigE0/2/0/3
Interface acting as Fabric link connecting to nV ring.
ipv4 point-to-point
ipv4 unnumbered Loopback10
Enters nV configuration mode under interface.
nv
satellite-fabric-link network
redundancy
Defines fabric link connectivity to simple ring using keyword
“Network.”
Enters redundancy configuration mode for ICCP group 210.
iccp-group 210
!
satellite 100
Defines the Access ports of satellite ID 100.
remote-ports GigabitEthernet 0/0/0-30,31-43
!
satellite 101
Defines the Access ports of satellite ID 101.
remote-ports GigabitEthernet 0/0/0-43
!
satellite 102
Defines the Access ports of satellite ID 102.
remote-ports GigabitEthernet 0/0/0-43
!
!
!
!
interface GigabitEthernet100/0/0/40
negotiation auto
load-interval 30
Virtual Interface configuration corresponding to satellite 100. This
interface can be configured in L2VPN service (E-Line, E-LAN, or
E-Tree).
!
interface GigabitEthernet100/0/0/40.502
l2transport
encapsulation dot1q 49
!
!
Configures ICCP redundancy group 210 and defines peer provider
edge address in the redundancy group.
redundancy
iccp
group 210
member
neighbor 100.111.11.2
!
nv satellite
Configures system mac for nV communication.
system-mac cccc.cccc.cccc
!
!
!
!
Enters nV configuration mode to define satellites.
nv
satellite 100
Defines the Satellite ID.
type asr9000v
Defines ASR 9000v device as satellite device.
ipv4 address 100.100.1.10
Configures satellite address used for communication.
redundancy
Defines the priority for the Host provider edge.
host-priority 20
!
serial-number CAT1729U3BF
Satellite chassis serial number to identify satellite.
!
!
Defines the Satellite ID.
satellite 101
type asr9000v
Defines ASR 9000v device as satellite device.
ipv4 address 100.100.1.3
Configures satellite address used for Communication.
redundancy
Defines the priority for the Host provider edge.
host-priority 20
!
serial-number CAT1729U3BB
Satellite chassis serial number to identify satellite.
!
Defines the Satellite ID.
satellite 102
type asr9000v
Defines ASR 9000v device as satellite device.
ipv4 address 100.100.1.20
Configures satellite address used for Communication.
redundancy
Defines the priority for the Host provider edge.
host-priority 20
!
serial-number CAT1729U3AU
Satellite chassis serial number to identify satellite.
!
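Satellite discovery and the control-session state can be verified on the host provider edges. The following IOS XR show commands are a suggested starting point; the output varies by release:

```
RP/0/RSP0/CPU0:PE-1# show nv satellite status brief
! One-line summary of the state of each configured satellite.
RP/0/RSP0/CPU0:PE-1# show nv satellite status satellite 100
! Displays the satellite type, serial number, discovered fabric links, and
! whether the control (management) session to the satellite is up.
```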
Note
The model above can be implemented by configuring interface GigabitEthernet100/0/0/40.502 for
point-to-point E-line or multipoint E-LAN/E-TREE service using VPLS or PBB-EVPN core.
nV Satellite L2 Fabric
In this model, satellite nodes connecting to branch or campus are connected to the host(s) over any L2
Ethernet network. Such a network can be implemented as a native or as an overlay Ethernet transport to
fit enterprise access network designs.
Figure 4-4
nV with L2 Fabric Access using Native or Overlay Transport
[Figure: two variants of the nV L2 fabric. In both, CPEs (branch/campus routers) attach to satellites whose fabric ports and sub-interfaces reach the PE (ASR 9000) nV hosts over unique satellite VLANs. One variant uses a native Ethernet L2 fabric between the satellite and host fabric sub-interfaces; the other carries the fabric over EoMPLS pseudowires across an IP/MPLS network.]
In the case of L2 Fabric, a unique VLAN is allocated for the point-to-point emulated connection between
the host and each satellite device. The host uses this VLAN for the advertisement of multicast
discovery messages.
Satellite devices listen for discovery messages on all the ports and dynamically create a sub-interface
based on the port and VLAN pair on which the discovery messages were received. VLAN configuration
at the satellite is not required.
The satellite uses this auto-discovered sub-interface for the establishment of a management session and
for the exchange of all upstream and downstream traffic with each of the hosts (data and control). At the
host, incoming and outgoing traffic is associated to the corresponding satellite node based on VLAN
assignment.
Table 4-10 details nV L2 fabric access configuration.
Table 4-10
nV L2 Fabric Configuration
Network Virtualization L2 Fabric Configuration Description
interface TenGigE0/1/1/3
Physical interface acting as fabric link connecting to the nV L2 fabric.
load-interval 30
transceiver permit pid all
!
interface TenGigE0/1/1/3.210
Sub-interface acting as fabric link connecting to the nV L2 fabric.
ipv4 point-to-point
ipv4 unnumbered Loopback200
encapsulation dot1q 210
Enters nV configuration mode under interface.
nv
satellite-fabric-link satellite 210
ethernet cfm
Defines fabric link connectivity to satellite 210.
Configures Ethernet cfm to detect connectivity failure to the fabric link.
continuity-check interval 10ms
!
Enters redundancy configuration mode for ICCP group 210.
redundancy
iccp-group 210
!
remote-ports GigabitEthernet 0/0/0-9
Defines the Access ports of satellite ID 210.
!
!
!
interface GigabitEthernet210/0/0/0
negotiation auto
Virtual interface configuration corresponding to satellite 210. This interface
can be configured in L2VPN service (E-Line, E-LAN, or E-Tree).
load-interval 30
!
interface GigabitEthernet210/0/0/0.49
l2transport
encapsulation dot1q 49
!
Configures ICCP redundancy group 210 and defines peer provider edge
address in the redundancy group.
redundancy
iccp
group 210
member
neighbor 100.111.11.2
!
Configures system mac for nV communication.
nv satellite
system-mac cccc.cccc.cccc
!
!
!
!
nv
Enters nV configuration mode to define satellites.
satellite 210
Defines the Satellite ID 210; the platform type is ASR 901.
type asr901
ipv4 address 27.27.27.40
redundancy
Defines the priority for the Host provider edge.
host-priority 17
!
serial-number CAT1650U00D
Satellite chassis serial number to identify satellite.
!
!
Note
The model above can be implemented by configuring interface GigabitEthernet210/0/0/0.49 for
point-to-point E-line or multipoint E-LAN/E-TREE service using VPLS or PBB-EVPN core.
nV Cluster
In this physical topology, we tested and measured fast convergence of VPLS-BGP LSM using P2MP-TE
with ASR 9000 nV cluster technology and compared it against MC-LAG for dual-homing redundancy
use cases.
The UNI customer edge switch (left side) has a normal LAG running LACP connected to an nV cluster
ASR 9000 system for dual-homing redundancy instead of MC-LAG. The nV cluster acts as a single
VPLS provider edge with one control plane and one data plane. The VPLS-BGP LSM service is
provisioned to the remote ASR 9000 VPLS provider edge. The provider router (bud node) has dual
roles as a VPLS-BGP LSM provider edge and a provider transit node connected to the nV cluster VPLS
provider edge. MC-LAG convergence numbers were measured separately in this topology for
comparison without the nV cluster configuration.
Figure 4-5
VPLS-BGP LSM Cluster Convergence Test Topology
Tested:
1. Both LDP-VPLS and BGP-VPLS
2. 100 VFIs, 100 P2MP-PWs
3. BUM + known unicast bi-directional traffic
4. Head, Bud, and Tail nV cluster node resiliency
[Figure: testers attach 100 ACs to the ASR 9000 nV cluster head node and to two tail nodes; 100 bridge domains with P2MP PWs run over BGP-AD with ISIS through a bud node, with 100 P2MP PWs protected by FRR, carrying upstream and downstream traffic.]
The logical VPLS service configuration and scale are described in Figure 4-6. We configured 100 VFIs
with 100 P2MP-TEs to carry BUM + known unicast bi-directional traffic. The nV cluster provider edge
is both the head-end and tail-end provider edge of VPLS-BGP LSM, and we tested and validated head,
tail, and bud nV cluster node resiliency.
Figure 4-6
VPLS-BGP LSM Logical Configuration and Traffic Path
[Figure: a CE switch connects over the UNI to an ASR 9006 cluster PE (head/tail node) built from two racks joined by IRLs and EOBC links; the cluster connects over the NNI through an ASR 9006 P/bud node to an ASR 9010 PE (head/tail node) on 10G interfaces, with testers attached to the ACs.]
Cluster hardware:
• Bundle towards P2MP: Rack 0 LC1, Rack 1 LC2
• Bundle towards access: Rack 0 LC2, Rack 1 LC1
• Rack 0 LC1: A9K-MOD160-SE [A9K-MPA-8X10GE A9K-MPA-8X10GE]
• Rack 0 LC2: A9K-MOD80-SE [A9K-MPA-20X1GE A9K-MPA-2X10GE]
• Rack 1 LC1: A9K-MOD80-SE [A9K-MPA-4X10GE]
• Rack 1 LC2: A9K-MOD160-TR [A9K-MPA-2X10GE A9K-MPA-2X10GE]
The convergence results of the VPLS-BGP LSM nV cluster system vs. MC-LAG are summarized in
Figure 4-7 and Figure 4-8, respectively. The six types of failure tests are listed below. Note that each
test is repeated three times and the worst-case numbers of the three trials are reported.
1. Core FRR failure between Head and Bud node: tests 1-4
2. Core isolation failure: tests 5-8
3. IRL link failure: tests 9-12
4. EOBC link failure: tests 13-16
5. DSC and RP redundancy switchover: tests 17-18
6. Power off Primary DSC failover: tests 21-24
For nV cluster deployment of L2VPN, IOS XR release 5.2.2 or above is recommended.
Figure 4-7
VPLS-BGP LSM Convergence Results Part 1
[Figure: test topology showing the CE switch, the ASR 9006 cluster PE (IRLs, EOBC), and the 10G core-facing interfaces, with failure points 1-3 marked.]
Results are listed as Test #. Trigger: Upstream Convergence / Downstream Convergence / MC-LAG Convergence.
Core FRR failure between Head and Bud node:
1. Cluster core facing LAG LOS: 12 msec / 21 msec / N/A
2. Cluster core facing link LOS: 4 msec / 12 msec / N/A
3. Cluster core facing LAG repair: 12 msec / 0.4 msec / N/A
4. Cluster core facing link repair: 24 msec / 4 msec / N/A
Core isolation failure (force all traffic over IRL to rack 0 or rack 1):
5. Rack 0: Remove LC with all core facing link and LAG member (i.e., LOS): 85 msec / 16 msec / 6 sec/5 sec
6. Rack 1: Remove LC with all core facing link and LAG member (i.e., LOS): 216 msec / 17 msec / 5 sec/7 sec
Core isolation recovery (repair core links to force all traffic off IRLs on rack 0 or rack 1):
7. Rack 0: Insert LC with all core facing link and LAG member: 0.6 msec / 0.36 msec / 0/0
8. Rack 1: Insert LC with all core facing LAG member: 30 msec / 66 msec / 0/0
IRL link failure with traffic:
9. Remove IRLs 1 by 1 manually: 0 / 0 / N/A
10. Remove LC with all IRLs: 236 msec / 163 msec / N/A
IRL link addition with traffic:
11. Add IRLs 1 by 1 manually: 0 / 0 / N/A
12. Add LC with all IRLs manually: 0.5 msec / 42 msec / N/A
Figure 4-8	VPLS-BGP LSM Convergence Results Part 2

[Figure: ASR 9006 cluster PE with UNI to a CE switch and NNI through an ASR 9006 P/bud node to an ASR 9010 PE; failure points 4-6 marked on the EOBC links and DSC RPs; testers attached to the ACs.]

| Failure | Test # | Trigger | Upstream Convergence | Downstream Convergence | MC-LAG Convergence |
|---|---|---|---|---|---|
| EOBC redundancy link down | 13 | Pull out EOBC link on Primary-DSC | 0 | 0 | N/A |
| | 14 | Pull out EOBC link on Backup-DSC | 0 | 0 | N/A |
| EOBC redundancy link up | 15 | Insert EOBC link on Primary-DSC | 0 | 0 | N/A |
| | 16 | Insert EOBC link on Backup-DSC | 0 | 0 | N/A |
| DSC redundancy switchover | 17 | Rack 0: Primary DSC's RP failover | 0 | 0 | 0 / 0 |
| | 18 | Rack 1: Primary DSC's RP failover | 0 | 0 | 0 / 0 |
| DSC RP reload | 19 | Rack 0: Reload RP of Primary DSC | 30 msec | 90 msec | 5 sec / 4 sec |
| | 20 | Rack 1: Reload RP of Primary DSC | 27 msec | 46 msec | 3 sec / 8.5 sec |
| Power Off: Primary DSC | 21 | Rack 0 = Primary DSC, Rack 1 = Backup DSC | 45 sec (DDTS filed); 200 msec (with fix) | 60 msec; 120 msec | 6 sec / 3 sec |
| | 22 | Rack 1 = Primary DSC, Rack 0 = Backup DSC | 43 sec (DDTS filed); 204 msec (with fix) | 80 msec; 170 msec | 7 sec / 4.5 sec |
| Power On: Primary DSC | 23 | Rack 0 = Primary DSC, Rack 1 = Backup DSC | 40 msec | 60 msec | 0 / 0 |
| | 24 | Rack 1 = Primary DSC, Rack 0 = Backup DSC | 45 msec | 10 msec | 0 / 0 |
MPLS Access Using Pseudo-wire Head-end (PWHE)
In MPLS access, enterprise access devices are connected to the ASR 9000 provider edge devices through an MPLS-enabled network. The branch or campus router is connected to the access device via an Ethernet 802.1Q-tagged interface. The access device is configured with a pseudo-wire terminating on the provider edge device on a pseudo-wire head-end interface.
The pseudo-wire head-end (PWHE) is a technology that allows termination of access pseudo-wires into an L3 (VRF or global) domain, eliminating the requirement of keeping separate interfaces for terminating pseudo-wire and L3VPN services. PWHE introduces the construct of a "pseudo-wire-ether" interface on the provider edge device. This virtual pseudo-wire-ether interface terminates the pseudo-wires carrying traffic from the CPE device and maps directly to an MPLS VPN VRF on the provider edge device. Any QoS policies and ACLs are applied to the pseudo-wire-ether interface.
All traffic between the customer edge router and provider edge router is tunneled in this pseudo-wire. The access network runs its own IGP/LDP domain along with labeled BGP, as described in Large Scale Network Design and Implementation, page 2-10, and thereby learns the provider edge loopback address required for pseudo-wire connectivity. The access device can initiate this pseudo-wire using two methods:
• Per Access Node method, in which all customer edge-facing ports share a common bridge domain and a pseudo-wire is configured using an xconnect statement under the switched virtual interface (SVI) associated to the bridge domain VLAN. The bridge domain VLAN is called the service VLAN (S-VLAN) and is pushed as a second VLAN on top of the customer VLAN (C-VLAN) received from the enterprise CPE. On the ASR 9000 provider edge device the pseudo-wire terminates on a PWHE main interface, and individual PWHE sub-interfaces terminate the combination of common S-VLAN and distinct C-VLAN.
• Per Access Port method, in which a pseudo-wire is directly configured on the interface connecting to the CPE. No VLAN manipulation is required at the customer edge interface. Similar to the Per Access Node method, on the ASR 9000 node the pseudo-wire is terminated on a PWHE main interface while a dedicated PWHE sub-interface terminates the specific VLANs.
The PWHE sub-interfaces are then mapped to the VPLS VFI or PBB-EVPN EVI associated to the
corresponding L2VPN service.
Figure 4-9 shows the PWHE configuration.
Figure 4-9	MPLS Access Using Pseudo-wire Head-End

[Figure: CPE (branch/campus router) connected on G0/2 to G0/5 of the access PE; pseudo-wire PW-Ether 777 across the MPLS access network (ASR 9000) terminating on the MPLS PE via TenG0/0/0/0 and TenG0/0/0/3.]
Table 4-11 and Table 4-12 detail the MPLS access implementation with the per access port method.

Table 4-11	Access Provider Edge Configuration

| Access Provider Edge Configuration | Description |
|---|---|
| interface GigabitEthernet0/5 | Customer-facing interface. |
| mtu 1500 | |
| no ip address | |
Table 4-11	Access Provider Edge Configuration (continued)

| Access Provider Edge Configuration | Description |
|---|---|
| service instance 555 ethernet | Xconnect with the provider edge device on the EVC. |
| encapsulation dot1q 555 | |
| xconnect 100.111.11.1 15 encapsulation mpls | |
Table 4-12	Provider Edge Configuration

| Provider Edge Configuration | Description |
|---|---|
| interface PW-Ether777 | Configures PWHE main interface. |
| attach generic-interface-list pwhe_mux | Attaches interface list to the PWHE interface. |
| ! | |
| generic-interface-list pwhe_mux | Creates generic interface list. |
| interface TenGigE0/0/0/0 | Assigns interfaces to the list. |
| interface TenGigE0/0/0/3 | |
| ! | |
| interface PW-Ether777.555 l2transport | PWHE L2 sub-interface. |
| encapsulation dot1q 555 | Matches the customer VLAN (C-VLAN). |
| rewrite ingress tag pop 1 symmetric | Symmetric pop operation before associating with the VFI. |
| ! | |
| l2vpn | Enters L2VPN configuration mode. |
| xconnect group pwhe_mux | Enters the name of the cross-connect group. |
| p2p pwhe_mux | Enters a name for the point-to-point cross-connect. |
| interface PW-Ether777 | Specifies the attachment circuit. |
| neighbor ipv4 100.111.13.9 pw-id 15 | Pseudo-wire to access node. |
| ! | |
| bridge group pwhemux | Configures bridge group named pwhemux. |
| bridge-domain pwhemux | Configures bridge-domain named pwhemux. |
| interface PW-Ether777.555 | Enables PWHE sub-interface connected towards CPE. |
| ! | |
| vfi pwhemux | Creates VFI instance with VPLS neighbors. |
| neighbor 100.111.3.2 pw-id 777 | |
| ! | |
| neighbor 100.111.5.5 pw-id 777 | |
Table 4-13 and Table 4-14 detail the MPLS access implementation with the per access node method.
Table 4-13	Access Provider Edge Configuration

| Access Provider Edge Configuration | Description |
|---|---|
| interface GigabitEthernet0/15 | Customer-connecting interface. |
| switchport trunk allowed vlan none | |
| switchport mode trunk | |
| service instance 555 ethernet | |
| encapsulation dot1q 555 | Matches customer VLAN (C-VLAN) 555. |
| rewrite ingress tag push dot1q 15 symmetric | Pushes service VLAN (S-VLAN) 15. |
| bridge-domain 15 | Associates to the common bridge-domain 15. |
| ! | |
| interface VLAN15 | Configures the VLAN interface (SVI) associated to bridge-domain 15. |
| no ip address | |
| xconnect 100.111.5.5 15 encapsulation mpls | SVI-based xconnect to the SE node. |
Table 4-14	Provider Edge Configuration

| Provider Edge Configuration | Description |
|---|---|
| interface PW-Ether777 | Configures PWHE main interface. |
| attach generic-interface-list pwhe_mux | Attaches interface list to the PWHE interface. |
| ! | |
| generic-interface-list pwhe_mux | Creates generic interface list. |
| interface TenGigE0/0/0/1 | Assigns interfaces to the list. |
| interface TenGigE0/0/0/2 | |
| ! | |
| interface PW-Ether777.555 l2transport | PWHE L2 sub-interface. |
| encapsulation dot1q 15 second-dot1q 555 | Matches the outer S-tag and inner C-tag. |
| rewrite ingress tag pop 2 symmetric | Symmetric pop operation before associating with the VFI. |
| ! | |
| l2vpn | Enters L2VPN configuration mode. |
| xconnect group pwhe_mux | Enters the name of the cross-connect group. |
| p2p pwhe_mux | Enters a name for the point-to-point cross-connect. |
| interface PW-Ether777 | Specifies the attachment circuit. |
| neighbor ipv4 100.111.7.3 pw-id 15 | Pseudo-wire to access node. |
| ! | |
| bridge group pwhemux | Configures bridge group named pwhemux. |
| bridge-domain pwhemux | Configures bridge-domain named pwhemux. |
| interface PW-Ether777.555 | Enables PWHE sub-interface connected towards CPE. |
| ! | |
Table 4-14	Provider Edge Configuration (continued)

| Provider Edge Configuration | Description |
|---|---|
| vfi pwhemux | Creates VFI instance with VPLS neighbors. |
| neighbor 100.111.3.2 pw-id 777 | |
| ! | |
| neighbor 100.111.11.1 pw-id 777 | |
Note	The model above can be implemented by configuring interface PW-Ether777.555 for point-to-point E-Line or multipoint E-LAN/E-Tree service using a VPLS or PBB-EVPN core.
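Once the per-access-port or per-access-node model is in place, pseudo-wire and PWHE state can be checked from the ASR 9000. The commands below are a hedged verification sketch; the group, interface, and sub-interface names follow the examples in Table 4-12 and may differ in an actual deployment.

```
RP/0/RSP0/CPU0:PE# show l2vpn xconnect group pwhe_mux
! Verifies the p2p cross-connect to the access node is up/up.
RP/0/RSP0/CPU0:PE# show interfaces PW-Ether777
! Confirms the PWHE main interface state and attached generic interface list.
RP/0/RSP0/CPU0:PE# show l2vpn bridge-domain group pwhemux detail
! Checks that the PWHE sub-interface and VFI pseudo-wires are forwarding.
```

The same commands apply to the per-access-node model; only the VLAN tags matched on the PW-Ether sub-interface differ.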
Chapter 5

Provider Edge User Network Interface
Virtual enterprise networks carry different traffic types, including voice, video, critical-application traffic, end-user web traffic, and so on. All these traffic types require different priorities and treatment based on their nature and how critical they are to the business. As traffic is sent and received between provider edge and customer edge, the Quality of Service (QoS) implementation on the ASR 9000 provider edge uses the Class of Service (CoS) field in the 802.1Q header to ensure that traffic is treated according to its CoS-defined priority. In nV access topologies, the ingress QoS function configured on the host for a virtual satellite access port is offloaded to the satellite, so that only committed traffic enters the nV fabric links and oversubscription of those links is avoided.
Table 5-1 shows the mapping of traffic classes to DSCP, CoS, and MPLS EXP values.

Table 5-1	Traffic Class-to-Class of Service (CoS) Mapping

| Traffic Class | DSCP | CoS | MPLS EXP |
|---|---|---|---|
| Enterprise Voice and Real-time | EF | 5 | 5 |
| Enterprise Video Distribution | AF46 | 4 | 4 |
| Enterprise Critical | AF32 | 3 | 3 |
| Enterprise Critical, In Contract | AF16 | 2 | 2 |
| Enterprise Critical, Out of Contract | AF8 | 1 | 1 |
| Enterprise Best Effort | BE | 0 | 0 |
Provider edge router QoS configuration includes class-maps for the respective traffic classes, mapped to the appropriate CoS values. In the egress policy map, the real-time traffic class is configured with priority level 1 and is also policed to ensure low-latency expedited forwarding. The remaining classes are assigned their required bandwidth. WRED is used as the congestion avoidance mechanism. Shaping is configured on the parent egress policy to ensure overall traffic does not exceed the committed rate.
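The structure just described, child classes implementing LLQ and WRED nested under a shaped parent, can be sketched in IOS XR syntax as follows. This is an illustrative skeleton only: the policy names, class names (VOICE, CRITICAL), and rates are assumptions, not taken from the validated configuration tables that follow.

```
policy-map CHILD-EGRESS
 class VOICE
  priority level 1
  police rate 100 mbps                        ! LLQ class policed for low latency
 class CRITICAL
  bandwidth remaining percent 60              ! bandwidth reservation
  random-detect discard-class 2 60 ms 70 ms   ! WRED congestion avoidance
 class class-default
 end-policy-map
!
policy-map PARENT-EGRESS
 class class-default
  service-policy CHILD-EGRESS                 ! child policy nested under the shaper
  shape average 500 mbps                      ! overall committed rate
 end-policy-map
```

The parent shaper creates backpressure so that the child policy's priority, bandwidth, and WRED actions take effect before the committed rate is exceeded.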
QoS Implementation with MPLS Access
A flat ingress QoS policy is applied in the ingress direction on the access node's customer edge-facing interface. QoS classification is based on CoS, and the corresponding EXP bits are set on the imposed MPLS headers, as shown in Table 5-2.
Table 5-2	Access Node Configuration

| Access Node Configuration | Explanation |
|---|---|
| class-map match-any CMAP-BC-COS | Configures class-map for business-critical traffic. |
| match cos 1 2 | Matches CoS 1 and 2. |
| class-map match-any CMAP-BC-Tele-COS | Configures class-map for telepresence/video traffic. |
| match cos 3 | Matches CoS 3. |
| class-map match-any CMAP-RT-COS | Configures class-map for real-time traffic. |
| match cos 5 | Matches CoS 5. |
| ! | |
| policy-map PMAP-MEF-UNI-I | Ingress policy-map. |
| class CMAP-RT-COS | Configures real-time class under the policy. |
| police cir 10000000 bc 312500 conform-action set-mpls-exp-imposition-transmit 5 exceed-action drop | Polices to 10 Mbps and sets MPLS EXP 5. |
| class CMAP-BC-COS | Configures business-critical class under the policy. |
| police 5000000 conform-action set-mpls-exp-imposition-transmit 2 exceed-action drop | Polices to 5 Mbps and sets MPLS EXP 2. |
| class CMAP-BC-Tele-COS | Configures telepresence/video class under the policy. |
| police 10000000 conform-action set-mpls-exp-imposition-transmit 3 exceed-action drop | Polices to 10 Mbps and sets MPLS EXP 3. |
| ! | |
| interface GigabitEthernet0/3 | Customer-facing interface. |
| service instance 601 ethernet EVC-601 | Customer EVC. |
| service-policy input PMAP-MEF-UNI-I | Ingress QoS policy attached to EVPL 601 service. |
The provider edge router implements QoS on the PWHE sub-interface on which the enterprise customer terminates. A flat ingress policy implementing LLQ is applied; QoS classification is based on CoS as shown in Table 5-3. H-QoS is applied in the egress direction: the class-default of the parent policy has a shaper configured, and the child policy classes implement LLQ.
Table 5-3	Provider Edge Configuration

| Provider Edge Configuration | Explanation |
|---|---|
| class-map match-any CMAP-BC-COS | Configures class-map for business-critical traffic. |
| match cos 1 2 | Matches CoS 1 and 2. |
| end-class-map | |
| class-map match-any CMAP-RT-COS | Configures class-map for real-time traffic. |
| match cos 5 | Matches CoS 5. |
| end-class-map | |
| class-map match-any CMAP-BC-Tele-COS | Configures class-map for telepresence/video traffic. |
Table 5-3	Provider Edge Configuration (continued)

| Provider Edge Configuration | Explanation |
|---|---|
| match cos 3 | Matches CoS 3. |
| end-class-map | |
| ! | |
| policy-map PMAP-pwhe-NNI-I | Upstream (ingress) policy-map. |
| class CMAP-RT-COS | Configures real-time class under the policy. |
| priority level 1 | Sets priority level 1. |
| police rate 100 mbps conform-action set mpls experimental topmost 5 exceed-action drop | Polices to 100 Mbps and sets MPLS EXP 5. |
| ! | |
| class CMAP-BC-COS | Configures business-critical class under the policy. |
| police rate 50 mbps conform-action set mpls experimental topmost 2 exceed-action drop | Configures 50 Mbps policer and sets MPLS EXP 2. |
| ! | |
| class CMAP-BC-Tele-COS | Configures telepresence/video class under the policy. |
| police rate 100 mbps conform-action set mpls experimental topmost 3 exceed-action drop | Configures 100 Mbps policer and sets MPLS EXP 3. |
| ! | |
| class class-default | |
| end-policy-map | |
| ! | |
| policy-map PMAP-PWHEMUX-NNI-C-E | Downstream (egress) child policy-map. |
| class CMAP-RT-COS | Configures real-time class under the policy. |
| priority level 1 | Assigns priority level 1 to the class. |
| police rate 50 mbps | Configures 50 Mbps policer in the class. |
| ! | |
| class CMAP-BUS-Tele-COS | Configures telepresence/video class under the policy. |
| priority level 2 | Sets priority level 2 for the class. |
| police rate 100 mbps | Configures 100 Mbps policer in the class. |
| random-detect discard-class 3 80 ms 100 ms | Configures WRED congestion avoidance for discard-class 3. |
| ! | |
Table 5-3	Provider Edge Configuration (continued)

| Provider Edge Configuration | Explanation |
|---|---|
| class CMAP-BC-COS | Configures business-critical class under the policy. |
| bandwidth remaining percent 60 | Assigns 60 percent of remaining bandwidth to the class. |
| random-detect discard-class 2 60 ms 70 ms | Configures WRED congestion avoidance for discard-class 2. |
| random-detect discard-class 1 40 ms 50 ms | Configures WRED congestion avoidance for discard-class 1. |
| ! | |
| class class-default | Class-default for the policy. |
| end-policy-map | |
| ! | |
| policy-map PMAP-PWHEMUX-NNI-P-E | Configures egress parent policy-map. |
| class class-default | Class-default for the policy. |
| service-policy PMAP-PWHEMUX-NNI-C-E | Configures child policy-map under the parent policy. |
| shape average 500000000 bps | Configures shaping to 500 Mbps. |
| end-policy-map | |
| ! | |
| interface PW-Ether888.555 l2transport | Customer-facing PWHE sub-interface. |
| encapsulation dot1q 555 | Customer VLAN. |
| rewrite ingress tag pop 1 symmetric | |
| service-policy input PMAP-pwhe-NNI-I | Ingress policy attached to PWHE sub-interface. |
| service-policy output PMAP-PWHEMUX-NNI-P-E | Egress policy-map attached to PWHE sub-interface. |
| ! | |
QoS Implementation with Ethernet Hub and Spoke Access

A flat ingress QoS policy is applied on the enterprise-facing interface of the access node, and an H-QoS policy is applied in the egress direction. QoS classification is based on CoS as shown in Table 5-4. Priority treatment is implemented for real-time traffic, and bandwidth reservation is implemented for the remaining classes.
Table 5-4	Access Node Configuration for Flat Ingress

| Access Node Configuration for Flat Ingress | Explanation |
|---|---|
| class-map match-any CMAP-BC-COS | Configures class-map for business-critical traffic. |
| match cos 1 2 | Matches CoS 1 and 2. |
| class-map match-any CMAP-RT-COS | Configures class-map for real-time traffic. |
| match cos 5 | Matches CoS 5. |
| class-map match-any CMAP-BC-Tele-COS | Configures class-map for telepresence/video traffic. |
Table 5-4	Access Node Configuration for Flat Ingress (continued)

| Access Node Configuration for Flat Ingress | Explanation |
|---|---|
| match cos 3 | Matches CoS 3. |
| ! | |
| policy-map PMAP-NNI-INGRESS | Configures network-facing ingress policy-map. |
| class CMAP-RT-COS | Configures real-time class under the policy. |
| police cir 100000000 bc 312500 conform-action transmit exceed-action drop | Configures 100 Mbps policer in the class. |
| class CMAP-BC-COS | Configures business-critical class under the policy. |
| police 50000000 conform-action transmit exceed-action drop | Configures 50 Mbps policer in the class. |
| class CMAP-BC-Tele-COS | Configures telepresence/video class under the policy. |
| police 200000000 conform-action transmit exceed-action drop | Configures 200 Mbps policer in the class. |
| ! | |
| policy-map PMAP-NNI-EGRESS | Configures network-facing egress policy-map. |
| class CMAP-BC-COS | Configures business-critical class under the policy. |
| bandwidth percent 5 | Assigns 5 percent bandwidth to the class. |
| class CMAP-BC-Tele-COS | Configures telepresence/video class under the policy. |
| bandwidth percent 10 | Assigns 10 percent bandwidth to the class. |
| class CMAP-RT-COS | Configures real-time class under the policy. |
| police 100000000 | Configures 100 Mbps policer in the class. |
| priority | Configures LLQ for the class. |
| policy-map PMAP-ACC-UNI-I | Configures customer-facing ingress policy-map. |
| class CMAP-RT-COS | Configures real-time class under the policy. |
| police cir 10000000 bc 312500 conform-action transmit exceed-action drop | Configures 10 Mbps policer in the class. |
| class CMAP-BC-COS | Configures business-critical class under the policy. |
| police 5000000 conform-action transmit exceed-action drop | Configures 5 Mbps policer in the class. |
| class CMAP-BC-Tele-COS | Configures telepresence/video class under the policy. |
| police 20000000 conform-action transmit exceed-action drop | Configures 20 Mbps policer in the class. |
| ! | |
| policy-map PMAP-ACC-UNI-E | Configures customer-facing egress policy-map. |
| class CMAP-RT-COS | Configures real-time class under the policy. |
| police 50000000 | Configures 50 Mbps policer in the class. |
| priority | Configures LLQ for the class. |
| class CMAP-BC-COS | Configures business-critical class under the policy. |
| bandwidth percent 5 | Assigns 5 percent bandwidth to the class. |
Table 5-4	Access Node Configuration for Flat Ingress (continued)

| Access Node Configuration for Flat Ingress | Explanation |
|---|---|
| class CMAP-BC-Tele-COS | Configures telepresence/video class under the policy. |
| bandwidth percent 10 | Assigns 10 percent bandwidth to the class. |
| ! | |
| interface GigabitEthernet0/1 | Customer-facing interface. |
| service-policy input PMAP-ACC-UNI-I | Applies customer-facing ingress policy. |
| service-policy output PMAP-ACC-UNI-E | Applies customer-facing egress policy. |
| ! | |
| interface GigabitEthernet0/13 | Provider edge router-facing interface. |
| service-policy input PMAP-NNI-INGRESS | Applies network-facing ingress policy. |
| service-policy output PMAP-NNI-EGRESS | Applies network-facing egress policy. |
| ! | |
| interface GigabitEthernet0/14 | Provider edge router-facing interface. |
| service-policy input PMAP-NNI-INGRESS | Applies network-facing ingress policy. |
| service-policy output PMAP-NNI-EGRESS | Applies network-facing egress policy. |
In Table 5-5, the provider edge node implements the ingress and egress QoS policies on the sub-interface corresponding to the enterprise service.
Table 5-5	Provider Edge Device Configuration

| Provider Edge Device Configuration | Explanation |
|---|---|
| class-map match-any CMAP-BC-COS | Configures class-map for business-critical traffic. |
| match cos 1 2 | Matches CoS 1 and 2. |
| class-map match-any CMAP-BC-Tele-COS | Configures class-map for telepresence/video traffic. |
| match cos 3 | Matches CoS 3. |
| class-map match-any CMAP-RT-COS | Configures class-map for real-time traffic. |
| match cos 5 | Matches CoS 5. |
| ! | |
| policy-map PMAP-MEF-CE-Child-I | Configures ingress child policy-map. |
| class CMAP-RT-COS | Configures real-time class under the policy. |
| priority level 1 | Assigns priority level 1 to the class. |
| police rate 200 mbps | Configures 200 Mbps policer in the class. |
| ! | |
| class CMAP-BC-COS | Configures business-critical class under the policy. |
| bandwidth percent 5 | Assigns 5 percent bandwidth to the class. |
| ! | |
| class CMAP-BC-Tele-COS | Configures telepresence/video class under the policy. |
Table 5-5	Provider Edge Device Configuration (continued)

| Provider Edge Device Configuration | Explanation |
|---|---|
| bandwidth percent 10 | Assigns 10 percent bandwidth to the class. |
| ! | |
| class class-default | |
| end-policy-map | |
| ! | |
| policy-map PMAP-MEF-CE-Parent-I | Configures ingress parent policy-map. |
| class class-default | Class-default for the policy. |
| service-policy PMAP-MEF-CE-Child-I | Configures child policy-map under the parent policy. |
| shape average 500 mbps | Configures shaping to 500 Mbps. |
| bandwidth 300 mbps | Assigns 300 Mbps bandwidth to the class. |
| ! | |
| end-policy-map | |
| ! | |
| policy-map PMAP-MEF-CE-UNI-E-test | Configures egress policy-map. |
| class CMAP-RT-COS | Configures real-time class under the policy. |
| police rate 200 mbps | Configures 200 Mbps policer in the class. |
| class CMAP-BC-COS | Configures business-critical class under the policy. |
| bandwidth percent 5 | Assigns 5 percent bandwidth to the class. |
| class CMAP-BC-Tele-COS | Configures telepresence/video class under the policy. |
| bandwidth percent 10 | Assigns 10 percent bandwidth to the class. |
| ! | |
| class class-default | |
| end-policy-map | |
| ! | |
| interface Bundle-Ether1.1 | Customer-facing sub-interface. |
| service-policy input PMAP-MEF-CE-Parent-I | Applies ingress policy-map. |
| service-policy output PMAP-MEF-CE-UNI-E-test | Applies egress policy-map. |
QoS Implementation with G.8032 Access
Table 5-6 details the G.8032 access node configuration.
Cisco ASR9000 Enterprise L2VPN for Metro-Ethernet, DC-WAN, WAN-Core, and Government and Public Networks
Implementation Guide
5-7
Chapter 5
Provider Edge User Network Interface
QOS Implementation with G.8032 Access
Table 5-6	G.8032 Access Node Configuration

| G.8032 Access Node Configuration | Explanation |
|---|---|
| class-map match-any CMAP-BC-COS | Configures class-map for business-critical traffic. |
| match cos 1 2 | Matches CoS 1 and 2. |
| class-map match-any CMAP-RT-COS | Configures class-map for real-time traffic. |
| match cos 5 | Matches CoS 5. |
| class-map match-any CMAP-BC-Tele-COS | Configures class-map for telepresence/video traffic. |
| match cos 3 | Matches CoS 3. |
| ! | |
| policy-map PMAP-NNI-INGRESS | Configures network-facing ingress policy-map. |
| class CMAP-RT-COS | Configures real-time class under the policy. |
| police cir 100000000 bc 312500 conform-action transmit exceed-action drop | Configures 100 Mbps policer in the class. |
| class CMAP-BC-COS | Configures business-critical class under the policy. |
| police 50000000 conform-action transmit exceed-action drop | Configures 50 Mbps policer in the class. |
| class CMAP-BC-Tele-COS | Configures telepresence/video class under the policy. |
| police 200000000 conform-action transmit exceed-action drop | Configures 200 Mbps policer in the class. |
| ! | |
| policy-map PMAP-NNI-EGRESS | Configures network-facing egress policy-map. |
| class CMAP-BC-COS | Configures business-critical class under the policy. |
| bandwidth percent 5 | Assigns 5 percent bandwidth to the class. |
| class CMAP-BC-Tele-COS | Configures telepresence/video class under the policy. |
| bandwidth percent 10 | Assigns 10 percent bandwidth to the class. |
| class CMAP-RT-COS | Configures real-time class under the policy. |
| police 100000000 | Configures 100 Mbps policer in the class. |
| priority | Configures LLQ for the class. |
| ! | |
| policy-map PMAP-ACC-UNI-I | Configures customer-facing ingress policy-map. |
| class CMAP-RT-COS | Configures real-time class under the policy. |
| police cir 10000000 bc 312500 conform-action transmit exceed-action drop | Configures 10 Mbps policer in the class. |
| class CMAP-BC-COS | Configures business-critical class under the policy. |
| police 5000000 conform-action transmit exceed-action drop | Configures 5 Mbps policer in the class. |
| class CMAP-BC-Tele-COS | Configures telepresence/video class under the policy. |
| police 20000000 conform-action transmit exceed-action drop | Configures 20 Mbps policer in the class. |
| ! | |
| interface GigabitEthernet0/1 | Customer-facing interface. |
Table 5-6	G.8032 Access Node Configuration (continued)

| G.8032 Access Node Configuration | Explanation |
|---|---|
| service instance 106 ethernet EVC-106 | |
| service-policy input PMAP-ACC-UNI-I | Applies customer-facing ingress policy. |
| ! | |
| interface TenGigabitEthernet0/1 | G.8032 ring-facing interface towards provider edge. |
| service-policy input PMAP-NNI-INGRESS | Applies network-facing ingress policy. |
| service-policy output PMAP-NNI-EGRESS | Applies network-facing egress policy. |
| ! | |
| interface TenGigabitEthernet0/2 | G.8032 ring-facing interface towards provider edge. |
| service-policy input PMAP-NNI-INGRESS | Applies network-facing ingress policy. |
| service-policy output PMAP-NNI-EGRESS | Applies network-facing egress policy. |
Table 5-7 details the provider edge router node configuration.
Table 5-7	Provider Edge Router Node Configuration

| Provider Edge Router Node Configuration | Explanation |
|---|---|
| class-map match-any CMAP-BC-COS | Configures class-map for business-critical traffic. |
| match cos 1 2 | Matches CoS 1 and 2. |
| class-map match-any CMAP-BC-Tele-COS | Configures class-map for telepresence/video traffic. |
| match cos 3 | Matches CoS 3. |
| class-map match-any CMAP-RT-COS | Configures class-map for real-time traffic. |
| match cos 5 | Matches CoS 5. |
| ! | |
| policy-map PMAP-MEF-CE-Child-I | Configures ingress child policy-map. |
| class CMAP-RT-COS | Configures real-time class under the policy. |
| priority level 1 | Assigns priority level 1 to the class. |
| police rate 200 mbps | Configures 200 Mbps policer in the class. |
| ! | |
| class CMAP-BC-COS | Configures business-critical class under the policy. |
| bandwidth percent 5 | Assigns 5 percent bandwidth to the class. |
| ! | |
| class CMAP-BC-Tele-COS | Configures telepresence/video class under the policy. |
| bandwidth percent 10 | Assigns 10 percent bandwidth to the class. |
| ! | |
Table 5-7	Provider Edge Router Node Configuration (continued)

| Provider Edge Router Node Configuration | Explanation |
|---|---|
| class class-default | |
| end-policy-map | |
| ! | |
| policy-map PMAP-MEF-CE-Parent-I | Configures ingress parent policy-map. |
| class class-default | Class-default for the policy. |
| service-policy PMAP-MEF-CE-Child-I | Configures child policy-map under the parent policy. |
| shape average 500 mbps | Configures shaping to 500 Mbps. |
| bandwidth 300 mbps | Assigns 300 Mbps bandwidth to the class. |
| ! | |
| end-policy-map | |
| ! | |
| policy-map PMAP-MEF-CE-UNI-E-test | Configures egress policy-map. |
| class CMAP-RT-COS | Configures real-time class under the policy. |
| police rate 200 mbps | Configures 200 Mbps policer in the class. |
| ! | |
| class CMAP-BC-COS | Configures business-critical class under the policy. |
| bandwidth percent 5 | Assigns 5 percent bandwidth to the class. |
| ! | |
| class CMAP-BC-Tele-COS | Configures telepresence/video class under the policy. |
| bandwidth percent 10 | Assigns 10 percent bandwidth to the class. |
| ! | |
| class class-default | |
| end-policy-map | |
| ! | |
| interface TenGigE0/3/0/0.106 l2transport | Customer-facing sub-interface. |
| service-policy input PMAP-MEF-CE-Parent-I | Applies ingress policy-map. |
| service-policy output PMAP-MEF-CE-UNI-E-test | Applies egress policy-map. |
| ! | |
QoS Implementation with Network Virtualization Access

With nV access, the enterprise interface connected to the satellite is available on the nV host itself as a local representation of the remote enterprise attachment point. Both ingress and egress QoS policies are applied on this interface. While the host implements all the QoS functions defined by the egress policy, the implementation of ingress QoS functions is delegated to the satellite device so as to preserve network bandwidth in the satellite-to-host direction.
Table 5-8 details the provider edge configuration for QoS implementation with network virtualization
access.
Table 5-8	Provider Edge Configuration

| Provider Edge Configuration | Explanation |
|---|---|
| interface GigabitEthernet100/0/0/40 | Customer-facing interface. |
| negotiation auto | |
| load-interval 30 | |
| nv | |
| service-policy input PMAP-NV | Ingress QoS offloaded to the nV satellite. |
| ! | |
| interface GigabitEthernet100/0/0/40.100 l2transport | Customer-facing sub-interface. |
| encapsulation default | |
| service-policy output PMAP-NV | Egress QoS on the customer-facing sub-interface. |
| ! | |
| policy-map PMAP-NV | Configures the QoS policy-map. |
| class CMAP-RT-COS-NV | Configures real-time class under the policy. |
| police rate 100 mbps exceed-action drop | Configures 100 Mbps policer in the class. |
| ! | |
| class CMAP-BC-COS-NV | Configures business-critical class under the policy. |
| police rate 50 mbps exceed-action drop | Configures 50 Mbps policer in the class. |
| ! | |
| class CMAP-BC-Tele-COS-NV | Configures telepresence/video class under the policy. |
| police rate 100 mbps exceed-action drop | Configures 100 Mbps policer in the class. |
| ! | |
Table 5-8	Provider Edge Configuration (continued)

| Provider Edge Configuration | Explanation |
|---|---|
| class class-default | |
| end-policy-map | |
| ! | |
| class-map match-any CMAP-RT-COS-NV | Configures class-map for real-time traffic. |
| match cos 5 | Matches CoS 5. |
| end-class-map | |
| ! | |
| class-map match-any CMAP-BC-COS-NV | Configures class-map for business-critical traffic. |
| match cos 1 2 | Matches CoS 1 and 2. |
| end-class-map | |
| ! | |
| class-map match-any CMAP-BC-Tele-COS-NV | Configures class-map for telepresence/video traffic. |
| match cos 3 | Matches CoS 3. |
| end-class-map | |
| ! | |
CH A P T E R
6
Virtual Private LAN Service (VPLS) Label-Switched
Multicast (LSM)
Virtual Private LAN Service (VPLS) emulates LAN services across an MPLS core. A full mesh of
point-to-point (P2P) pseudo-wires is set up among all the provider edge routers participating in a VPLS
domain to provide the VPLS emulation. For broadcast, multicast, and unknown unicast traffic, the
ingress provider edge must send a separate copy of each frame over the P2P pseudo-wire to every remote
provider edge in the VPLS domain. This is bandwidth inefficient, because the same frame may traverse
the same link multiple times, and it can waste significant link bandwidth when broadcast and multicast
VPLS traffic is heavy. It is also resource intensive, as the ingress provider edge router bears the full
burden of the replication.
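The bandwidth cost of ingress replication can be made concrete with a small sketch. The topology and link names below are hypothetical, not taken from this guide.

```python
# Link-transmission count for flooding one frame from a head-end PE to a
# set of leaf PEs. With ingress replication over P2P pseudo-wires, each
# leaf's copy traverses its entire path, so shared links carry the frame
# once per leaf. With a P2MP LSP, every core link on the tree carries
# the frame exactly once. Hypothetical topology for illustration.

paths = {
    # leaf PE: path of core links from the head-end PE (PE1)
    "PE2": ["PE1-P1", "P1-PE2"],
    "PE3": ["PE1-P1", "P1-P2", "P2-PE3"],
    "PE4": ["PE1-P1", "P1-P2", "P2-PE4"],
}

# Ingress replication: every link is traversed once per leaf that uses it.
p2p_transmissions = sum(len(path) for path in paths.values())

# P2MP LSP: each distinct link in the union of paths carries one copy;
# replication happens at the branch nodes P1 and P2 inside the core.
p2mp_transmissions = len({link for path in paths.values() for link in path})

print(p2p_transmissions)   # → 8 transmissions with ingress replication
print(p2mp_transmissions)  # → 5 transmissions on the P2MP tree
```

The shared link PE1-P1 carries three copies under ingress replication but only one on the P2MP tree; the savings grow with the number of leaf provider edges behind shared links.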
VPLS Label-Switched Multicast (LSM) overcomes these drawbacks. The VPLS LSM solution employs
point-to-multipoint (P2MP) label-switched paths (LSPs) in the MPLS core to carry broadcast, multicast,
and unknown unicast traffic for a VPLS domain. A P2MP LSP allows replication at the most optimal
nodes in the MPLS core, minimizing the number of packet replications in the network.
The VPLS LSM solution sends only the VPLS traffic that requires flooding over P2MP LSPs; unicast
VPLS traffic is still sent over P2P pseudo-wires. Traffic sent over access pseudo-wires in the case of
MPLS access continues to use normal replication. P2MP pseudo-wires are unidirectional, as opposed
to P2P pseudo-wires, which are bidirectional.
The VPLS LSM solution involves creating one P2MP pseudo-wire per VPLS domain to emulate the
VPLS P2MP service for the core pseudo-wires in that domain. The P2MP pseudo-wire is carried over a
P2MP LSP called a P-tree, which is created using RSVP signaling.
BGP is used to discover the remote provider edges participating in a VPLS domain. Based on this
discovery, a provider edge router (the head provider edge) uses RSVP to signal a P2MP LSP towards the
remote provider edge devices (the leaf provider edges) in the same VPLS domain. On the leaf provider
edges, MAC learning for frames arriving on the P2MP pseudo-wire is performed as if the frames had
been received from the P2P pseudo-wire leading to the root provider edge of that P2MP pseudo-wire.
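The leaf-side MAC learning rule can be sketched as follows; the pseudo-wire identifiers are hypothetical.

```python
# Sketch of leaf-PE MAC learning for frames arriving on a P2MP
# pseudo-wire: the source MAC is installed as if the frame had arrived
# on the P2P pseudo-wire leading back to the root PE, so that unicast
# return traffic toward that MAC uses the bidirectional P2P pseudo-wire.
# Identifiers are illustrative, not real XR object names.

mac_table = {}

# Mapping from a P2MP pseudo-wire to the P2P pseudo-wire that leads to
# that tree's root PE (hypothetical identifiers).
p2mp_to_root_pw = {"p2mp-pw-vpls1000": "p2p-pw-to-PE1"}

def learn(src_mac, in_pw):
    # Frames from a P2MP PW are attributed to the root-facing P2P PW;
    # frames from any other pseudo-wire are learned on that PW directly.
    mac_table[src_mac] = p2mp_to_root_pw.get(in_pw, in_pw)

learn("00:11:22:33:44:55", "p2mp-pw-vpls1000")
print(mac_table["00:11:22:33:44:55"])   # → p2p-pw-to-PE1
```

With this attribution, the leaf never forwards known unicast traffic onto the unidirectional P2MP tree; only traffic that requires flooding uses it.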
Figure 6-1 shows VPLS LSM support details for VPLS-BGP (XR 5.1.0) and VPLS-LDP (XR 5.2.2) on
ASR 9000.
Figure 6-1
ASR 9000 VPLS-LSM Solution Support Summary
Figure 6-2 shows the RFC compliance of the ASR 9000 VPLS-LSM implementation for interoperability
with other vendors' VPLS LSM deployments. It shows that ASR 9000 VPLS LSM fully interoperates
with other vendors' equipment.
Figure 6-2
ASR 9000 VPLS LSM RFC Compliance Matrix
Table 6-1 shows a VPLS LSM sample configuration of one provider edge for deployment.
Table 6-1
Provider Edge Configuration for VPLS LSM
Provider Edge Configuration
Explanation
interface GigabitEthernet0/0/0/0.1 l2transport
Configure QinQ Attachment Circuit.
encapsulation dot1q 10 second-dot1q any
rewrite ingress tag pop 1 symmetric
!
interface GigabitEthernet0/0/0/1.1 l2transport
encapsulation dot1q 10 second-dot1q any
rewrite ingress tag pop 1 symmetric
!
igmp snooping profile vpls-lsm
default-bridge-domain all enable
Configure IGMP Snooping for all BD.
Enables IGMP Snooping for all BDs and BVIs.
Configure VPLS-BGP LSM.
l2vpn
bridge group vpls-lsm
bridge-domain vpls-lsm
igmp snooping profile vpls-lsm
Enable IGMP snooping for AC.
interface GigabitEthernet0/0/0/0.1
!
interface GigabitEthernet0/0/0/1.1
!
vfi vpls-lsm
vpn-id 1000
autodiscovery bgp
Enable BGP-AD for VPLS-BGP.
rd auto
route-target 10.10.10.10:100
signaling-protocol bgp
Enable BGP Signaling for VPLS-BGP, or LDP for VPLS-LDP.
ve-id 1
!
!
Enable p2mp-te for VPLS-LSM core tree.
multicast p2mp
signaling-protocol bgp
Enable BGP-AD to setup VPLS-LSM core tree dynamically.
!
transport rsvp-te
attribute-set p2mp-te vpls-lsm-te
Selects RSVP-TE as the core tree for VPLS-LSM.
Good practice to enable attributes (see the MPLS traffic-eng configuration section).
!
!
!
!
ipv4 unnumbered mpls traffic-eng loopback 0
Configure MPLS-TE and FRR for VPLS-LSM P2MP-TE LSPs.
mpls traffic-eng
interface TenGigE0/0/2/0
auto-tunnel backup
FRR backup tunnel for vpls-lsm on this interface.
nhop-only
FRR link protection with 50msec convergence.
attribute-set vpls-lsm-frr-lsp
Referencing backup tunnel attributes, see below.
!
!
interface TenGigE0/0/2/1
auto-tunnel backup
FRR backup tunnel for vpls-lsm on this interface.
nhop-only
FRR link protection with 50msec convergence.
attribute-set vpls-lsm-frr-lsp
Referencing backup tunnel attributes, see below.
!
!
interface TenGigE0/0/2/2
auto-tunnel backup
FRR backup tunnel for vpls-lsm on this interface.
nhop-only
FRR link protection with 50msec convergence.
attribute-set vpls-lsm-frr-lsp
Referencing backup tunnel attributes, see below.
!
!
interface TenGigE0/0/2/3
auto-tunnel backup
FRR backup tunnel for vpls-lsm on this interface.
nhop-only
FRR link protection with 50msec convergence.
attribute-set vpls-lsm-frr-lsp
Referencing backup tunnel attributes, see below.
!
!
auto-tunnel p2mp
tunnel-id min 1001 max 2000
With VPLS LSM auto-tunnels, there is no need to create P2MP-TE tunnels manually.
VPLS LSM tunnel range.
!
auto-tunnel backup
tunnel-id min 2001 max 3000
VPLS LSM auto-tunnel FRR backup LSPs.
VPLS LSM backup tunnel range.
!
reoptimize 0
attribute-set auto-backup vpls-lsm-frr-lsp
VPLS-LSM backup tunnel attributes.
logging events lsp-status reoptimize
logging events lsp-status state
!
attribute-set p2mp-te vpls-lsm-te
VPLS-LSM p2mp-te attributes configured in VFI.
logging events lsp-status reoptimize
logging events lsp-status state
logging events lsp-status reroute
Enable FRR on p2mp-te sub-lsp.
fast-reroute
record-route
Configure RSVP-TE.
rsvp
interface TenGigE0/0/2/0
!
interface TenGigE0/0/2/1
!
interface TenGigE0/0/2/2
!
interface TenGigE0/0/2/3
!
signalling graceful-restart
!
Required for VPLS-LDP LSM, not required for VPLS-BGP LSM.
mpls ldp
nsr
graceful-restart
mldp
logging notifications
address-family ipv4
!
!
router-id 1.0.0.1
interface TenGigE0/0/2/0
!
interface TenGigE0/0/2/1
!
interface TenGigE0/0/2/2
!
interface TenGigE0/0/2/3
!
!
multicast-routing
Configure Multicast.
address-family ipv4
oom-handling
rate-per-route
interface all enable
accounting per-prefix
interface Loopback0
ipv4 address 1.0.0.1 255.255.255.255
router ospf 100
Configure OSPF IGP (IS-IS is similar):
nsr
router-id 1.0.0.1
bfd minimum-interval 30
bfd fast-detect
bfd multiplier 3
mpls ldp sync-igp-shortcuts
nsf ietf
timers throttle spf 10 50 100
timers lsa group-pacing 10
timers lsa min-arrival 10
area 0
mpls ldp sync
mpls ldp auto-config
fast-reroute per-prefix
mpls traffic-eng
interface Loopback0
!
interface TenGigE0/0/2/0
network point-to-point
!
interface TenGigE0/0/2/1
network point-to-point
!
interface TenGigE0/0/2/2
network point-to-point
!
interface TenGigE0/0/2/3
network point-to-point
!
!
mpls traffic-eng router-id Loopback0
mpls traffic-eng multicast-intact
Configure BGP L2VPN address-family.
router bgp 100
bgp router-id 1.0.0.1
address-family ipv4 unicast
!
address-family l2vpn vpls-vpws
Enable l2vpn address-family for BGP-AD.
neighbor 1.0.0.2
remote-as 100
update-source Loopback0
address-family ipv4 unicast
!
address-family l2vpn vpls-vpws
Enable l2vpn address-family for BGP-AD.