Cisco ASR 9000 Enterprise L3VPN Design and Implementation Guide

Authors: Chris Lewis, Saurabh Chopra, Javed Asghar

July 2014

Building Architectures to Solve Business Problems

About Cisco Validated Design (CVD) Program

The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit http://www.cisco.com/go/designzone.

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.

Cisco ASR 9000 Enterprise L3VPN Design and Implementation Guide

© 2014 Cisco Systems, Inc. All rights reserved.


CONTENTS

CHAPTER 1  Introduction  1-1

CHAPTER 2  Overview  2-1
    Terminology  2-2

CHAPTER 3  Enterprise Network Virtualization Design  3-1
    Small Network Design and Implementation  3-1
        PE Operation and Configuration  3-2
            VRF Configuration  3-2
            PE VRF Configuration  3-3
            PE-CE Routing Protocol Configuration  3-4
            PE eBGP Routing Configuration with CPE  3-4
        Route Reflector Operation and Configuration  3-7
            Route Reflector Configuration  3-7
        PE and P Transport Configuration  3-8
            Fast Failure Detection Using Bidirectional Forwarding Detection  3-8
            Fast Convergence Using Remote Loop Free Alternate Fast Reroute  3-8
            Fast Convergence Using BGP Prefix Independent Convergence  3-9
            PE and P Transport Configuration  3-9
        QoS Operation and Implementation in the Core Network  3-14
            PE and P Core QoS Configuration  3-15
    Large Scale Network Design and Implementation  3-16
        Using Core Network Hierarchy to Improve Scale  3-17
        Large Scale Hierarchical Core and Aggregation Networks with Hierarchy  3-18
            PE Transport Configuration  3-19
            ABR Transport Configuration  3-21
            CORE RR Transport Configuration  3-25

CHAPTER 4  PE-to-CE Design Options  4-1
    Inter-Chassis Communication Protocol  4-1
        ICCP Configuration  4-2
    Ethernet Access  4-2
        Hub-and-Spoke Using MC-LAG Active/Standby  4-2
        Hub-and-Spoke with VRRP IPv4 and IPv6 Active/Active  4-4
            PE Configuration  4-5
            Access Switch Configuration  4-7
            CPE Configuration  4-8
        G.8032 Ring Access with VRRP IPv4 and IPv6  4-8
            PE Configuration  4-9
            CE Configuration  4-14
            Ethernet Access Node Configuration  4-14
    nV (Network Virtualization) Access  4-16
        nV Satellite Simple Rings  4-17
            nV L1 Fabric Configuration  4-18
        nV Satellite Layer 2 Fabric  4-20
            nV L2 Fabric Configuration  4-21
        nV Cluster  4-22
            nV Cluster Configuration  4-24
    Native IP-Connected Access  4-25
    MPLS Access using Pseudowire Headend  4-28
        Access Device Configuration  4-28
        PE Configuration  4-29
        CE Configuration  4-31

CHAPTER 5  PE UNI QoS  5-1
    PE UNI QoS Configuration  5-2
    PE UNI QoS Configuration with PWHE Access  5-4

CHAPTER 6  Performance and Scale  6-1
    Internet Peering Application  6-2
    100G Edge and Core-Facing Ports  6-5

APPENDIX A  Related Documents  A-1

CHAPTER 1

Introduction

Enterprise Layer 3 (L3) network virtualization enables one physical network to support multiple L3 virtual private networks (L3VPNs). To a group of end users, it appears as if each L3VPN is connected to a dedicated network with its own routing information, quality of service (QoS) parameters, and security and access policies.

This functionality has numerous applications, including:

• Requirements to separate departments and functions within an organization for security or compliance with statutes such as the Sarbanes-Oxley Act or the Health Insurance Portability and Accountability Act (HIPAA).

• Requirements to separate guest networks from internal corporate networks.

• Mergers and acquisitions in which consolidating disparate networks into one physical infrastructure that supports existing IP address spaces and policies provides economic benefits.

• Airports in which multiple airlines each require an independent network with unique policies, but the airport operator provides only one network infrastructure.

For each use case requiring network separation, an L3VPN infrastructure offers the following key benefits over non-virtualized infrastructures or separate physical networks:

• Reduced costs—Multiple user groups with virtual networks benefit from greater statistical multiplexing to provide bandwidth with higher utilization of expensive WAN links.

• A single network enables simpler management and operation of operations, administration, and management (OAM) protocols.

• Security between virtual networks is built in without needing complex access control lists (ACLs) to restrict access for each user group.

• Consolidating network resources into one higher-scale virtualized infrastructure enables more options for improved high availability (HA), including device clustering and multi-homing.


CHAPTER 2

Overview

End-to-end virtualization of an enterprise network infrastructure relies upon the following primary components:

• Virtual routing instances in edge routers, delivering service to each group that uses a virtualized infrastructure instance

• Route-distinguishers, added to IPv4 addresses to support overlapping address spaces in the virtual infrastructure

• Label-based forwarding in the network core so that forwarding does not rely on IP addresses in a virtual network, which can overlap with other virtual networks

Figure 2-1 summarizes the three most common options used to virtualize enterprise Layer 3 (L3) WANs.

Figure 2-1 Transport Options for L3 WAN Virtualization

(The figure depicts three options: (1) a self-deployed IP/MPLS backbone, in which the customer-managed backbone of P and PE routers connects the CE routers at the customer sites; (2) an SP-managed "Ethernet" service, in which customer-managed CE routers connect across a provider Ethernet service; and (3) an SP-managed "IP VPN" service, in which CE routers peer with provider PE routers holding VRFs across a provider MPLS VPN service. In each option the CE sites peer with the backbone using BGP, static routing, or an IGP.)



This guide focuses on Option 1 in Figure 2-1, the enterprise-owned and operated Multiprotocol Label Switching (MPLS) L3VPN model.

Terminology

The following terminology is used in the MPLS L3VPN architecture:

• Virtual routing and forwarding instance (VRF) —This entity in a physical router enables the implementation of separate routing and control planes for each client network in the physical infrastructure.

• Label Distribution Protocol (LDP) —This protocol is used on each link in the MPLS L3VPN network to distribute labels associated with prefixes; labels are locally significant to each link.

• Multiprotocol BGP (MP-BGP) —This protocol is used to append route distinguisher values to ensure unique addressing in the virtualized infrastructure, and imports and exports routes to each VRF based on route target community value.

• P (provider) router —This type of router, also called a Label Switching Router (LSR), runs an Interior Gateway Protocol (IGP) and LDP.

• PE (provider edge) router —This type of router, also called an edge router, imposes and removes MPLS labels and runs IGP, LDP, and MP-BGP.

• CE (customer edge) router —This type of router is the demarcation device in a provider-managed VPN service. It is possible to connect a LAN to the PE directly. However, if multiple networks exist at a customer location, a CE router simplifies the task of connecting the networks to an L3VPN instance.

The PE router must import all client routes served by the associated CE router into the VRF of the PE router associated with that virtual network instance. This enables the MPLS L3VPN to distribute route information to enable route connectivity among branch, data center, and campus locations.

Figure 2-2 shows how the components combine to create an MPLS L3VPN service and support multiple L3VPNs on the physical infrastructure. In the figure, a P router connects two PE routers. The packet flow is from left to right.

Figure 2-2 Major MPLS L3VPN Components and Packet Flow

(The figure shows PE routers connected through P routers. The forwarded packet carries a 4-byte IGP label, a 4-byte VPN label, and the original packet.)

The PE on the left has three groups, each using its own virtual network. Each PE has three VRFs (red, green and blue); each VRF is for the exclusive use of one group using a virtual infrastructure.


When an IP packet comes to the PE router on the left, the PE appends two labels to the packet. BGP appends the inner (VPN) label and its value is constant as the packet traverses the network. The inner label value identifies the interface on the egress PE out of which the IP packet will be sent. LDP assigns the outer (IGP) label; its value changes as the packet traverses the network to the destination PE.

For more information about MPLS VPN configuration and operation, refer to "Configuring a Basic MPLS VPN" at:

• http://www.cisco.com/c/en/us/support/docs/multiprotocol-label-switching-mpls/mpls/13733-mpls-vpn-basic.html


CHAPTER 3

Enterprise Network Virtualization Design

This Cisco Validated Design (CVD) focuses on the role of Cisco ASR 9000 Series Aggregation Services Routers (ASR 9000) as P and PE devices in the Multiprotocol Label Switching (MPLS) L3VPN architecture described in Figure 2-2 on page 2-2. Providers can use this architecture to implement network infrastructures that connect virtual networks among data centers, branch offices, and campuses using all types of WAN connectivity.

In this architecture, data centers (branch or campus) are considered customer edge (CE) devices. The design considers provider (P) and provider edge (PE) router configuration with the following connectivity control and data plane options between PE and CE routers:

• Ethernet hub-and-spoke or ring

• IP

• Network virtualization (nV)

• Pseudowire Headend (PWHE) for MPLS CE routers

Two options are considered for the MPLS L3VPN infrastructure incorporating P and PE routers:

• A flat LDP domain option, which is appropriate for smaller MPLS VPN deployments (700-1000 devices).

• A hierarchical design using RFC 3107-labeled BGP to segment P and PE domains into IGP domains to help scale the infrastructure well beyond 50,000 devices.

This chapter first examines topics common to small and large network implementations. These topics are discussed in the context of small network design. Later, it looks at additional technologies needed to enable small networks to support many more users. This chapter includes the following major topics:

• Small Network Design and Implementation, page 3-1

• Large Scale Network Design and Implementation, page 3-16

Small Network Design and Implementation

Figure 3-1 shows the small network deployment topology.


Figure 3-1 Small Deployment Topology

(The figure shows a data center connected through pre-aggregation nodes to a single core and aggregation IP/MPLS domain of core nodes, with pre-aggregation nodes on the far side connecting campus/branch sites over Ethernet and nV access.)

• Core and aggregation networks form one IGP and LDP domain.
  – Scale target for this architecture is less than 700 IGP/LDP nodes.

• All VPN configuration is on the PE nodes.

• Connectivity between the PE node and the branch/campus router includes the following options:
  – Ethernet hub-and-spoke or ring
  – IP between PE and CE
  – Network virtualization
  – PWHE to collapse CE into PE as nV alternative

The domain of P and PE routers, which comprises no more than a few hundred nodes, can be implemented using single IGP and LDP instances. On the left is the data center, with the network extending across the WAN to branch and campus locations.

PE Operation and Configuration

PE routers must perform multiple tasks, separating individual group control and data planes, and advertising routes between sites in the same VPN.

This functionality is achieved by creating VRF instances to provide separate data and control plane for the L3VPN. VRFs are configured with route distinguishers, which are unique for a particular VRF on the PE device. MP-BGP, which is configured on PEs, advertises and receives VRF prefixes appended with route distinguishers, which are also called VPNv4 prefixes.

Each VRF is also configured with a route target, which is a BGP extended community representing a VPN that is tagged to VPNv4 prefixes when a route is advertised or exported from the PE. Remote PEs selectively import into their VRF only those VPNv4 prefixes that are tagged with an RT matching the VRF's configured import RT. The PE can use static routing or run routing protocols with the CPE at branches to learn prefixes. Unless there is a compelling reason to do otherwise in the design, route targets and route distinguishers are set to the same values to simplify configuration.

VRF Configuration

VRF configuration comprises the following major steps, which are described in detail in the subsequent sections:

• Defining a unique VRF name on the PE.

• Configuring a route distinguisher value for the VRF under router BGP so that VRF prefixes can be appended with RD value to make VPNv4 prefixes.


• Importing and exporting route targets corresponding to the VPN in the VRF configuration, so that the PE can advertise routes with the assigned export route target and download prefixes tagged with the configured import route target into the VRF table.

• Applying the VRF on the corresponding interface connected to the CPE.

PE VRF Configuration

Step 1 Configure a VRF named BUS-VPN2.

vrf BUS-VPN2

Step 2 Enter IPv4 address-family configuration mode for the VRF.

address-family ipv4 unicast

Step 3 Configure the import route target to selectively import IPv4 routes into the VRF matching the route target.

import route-target
8000:8002

Step 4 Configure the export route target to tag IPv4 routes having this route target while advertising to remote PE routers.

export route-target
8000:8002

Step 5 Enter IPv6 address-family configuration mode for the VRF.

address-family ipv6 unicast

Step 6 Configure the import route target to selectively import IPv6 routes into the VRF matching the route target.

import route-target
8000:8002
!

Step 7 Configure the export route target to tag IPv6 routes having this route target while advertising to remote PE routers.

export route-target
8000:8002
!
!

Step 8 Enter router BGP configuration mode.

router bgp 101

Step 9 Enter VRF BGP configuration mode.

vrf BUS-VPN2

Step 10 Define the route distinguisher value for the VRF. The route distinguisher is unique for each VRF in each PE router.

rd 8000:8002

Step 11 Enter VRF IPv4 address-family configuration mode.

address-family ipv4 unicast


Step 12 Redistribute directly-connected IPv4 prefixes.

redistribute connected

Step 13 Enter VRF IPv6 address-family configuration mode.

address-family ipv6 unicast

Step 14 Redistribute directly-connected IPv6 prefixes.

redistribute connected

Step 15 Enter CPE-facing interface configuration mode.

interface GigabitEthernet0/0/1/7

Step 16 Configure VRF on the interface.

vrf BUS-VPN2
ipv4 address 100.192.30.1 255.255.255.0
ipv6 address 2001:100:192:30::1/64

!

At this stage, the L3 VRF is configured, along with the route distinguisher that is appended to routes coming into the VRF. The route distinguisher enables multiple VPN clients to use overlapping IP address spaces. The L3VPN core can differentiate overlapping addresses because each IP address is appended with a route distinguisher and therefore is globally unique. Combined client IP addresses and route distinguishers are referred to as VPNv4 addresses.
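Putting the steps together, the complete VRF configuration on the PE looks as follows (a consolidated view of the example values used in the steps above):

vrf BUS-VPN2
 address-family ipv4 unicast
  import route-target
   8000:8002
  !
  export route-target
   8000:8002
  !
 !
 address-family ipv6 unicast
  import route-target
   8000:8002
  !
  export route-target
   8000:8002
  !
 !
!
router bgp 101
 vrf BUS-VPN2
  rd 8000:8002
  address-family ipv4 unicast
   redistribute connected
  !
  address-family ipv6 unicast
   redistribute connected
  !
 !
!
interface GigabitEthernet0/0/1/7
 vrf BUS-VPN2
 ipv4 address 100.192.30.1 255.255.255.0
 ipv6 address 2001:100:192:30::1/64
!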

To get routes from a client site at the CE (branch or campus router) into the VRF, either static routing or a routing protocol is used. Examples of the most common static routing and eBGP scenarios follow.

PE-CE Routing Protocol Configuration

This section describes how to configure PE-CE routing protocols.

PE eBGP Routing Configuration with CPE

The PE is configured with an Exterior Border Gateway Protocol (eBGP) session with the CPE in the VRF under address-family IPv4 to exchange IPv4 prefixes with the CPE. Routes learned from the CPE are advertised to remote PEs using MP-BGP.

The following procedure illustrates the configuration.

Step 1 Enter router BGP configuration mode.

router bgp 101

Step 2 Enter VRF BGP configuration mode.

vrf BUS-VPN2

Step 3 Configure the CPE IP address as a BGP peer and its autonomous system (AS) as remote-as.

neighbor 100.192.30.3 remote-as 65002

Step 4 Enter VRF IPv4 address-family configuration mode for BGP.


address-family ipv4 unicast

!
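Assembled from the steps above, the PE-CE eBGP configuration takes roughly the following shape when written in nested IOS XR style (the neighbor address and AS number are the example values from the steps; note that IOS XR typically also requires an inbound and outbound route-policy on an eBGP neighbor before prefixes are exchanged, which is omitted here):

router bgp 101
 vrf BUS-VPN2
  neighbor 100.192.30.3
   remote-as 65002
   address-family ipv4 unicast
    ! An ingress/egress route-policy is normally applied here on IOS XR eBGP peers.
   !
  !
 !
!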

PE Static Routing Configuration with CPE

The PE is configured with static routes in the VRF, with the next hop set to the CPE address. The configuration uses the IPv4 address-family to configure IPv4 static routes. The static routes are then advertised to remote PEs by redistributing them under BGP.

The following procedure illustrates the configuration.

Step 1 Enter router static configuration mode for the VRF.

router static
 vrf BUS-VPN2

Step 2 Enter VRF IPv4 address-family configuration mode for static routing.

  address-family ipv4 unicast

Step 3 Configure the static route 100.192.194.0/24 with next hop 100.192.40.3.

   100.192.194.0/24 100.192.40.3

Step 4 Redistribute static prefixes under the BGP VRF IPv4 address-family so that they are advertised to remote PEs.

router bgp 101
<snip>
 vrf BUS-VPN2
  rd 8000:8002
  address-family ipv4 unicast
   redistribute static

After routes from the branch or campus router are in the client VRF, the routes must be advertised to other sites in the L3VPN to enable reachability. Reachability is delivered using MP-BGP to advertise VPNv4 addresses, associated with the VRF at the branch location, to members of the same VPN.

PE MP-BGP Configuration

MP-BGP configuration comprises BGP peering with route reflector for VPNv4 and VPNv6 address families to advertise and receive VPNv4 and VPNv6 prefixes. MP-BGP uses session-group to configure address-family independent (global) parameters; peers requiring the same parameters can inherit its configuration.

Session-group includes update-source, which specifies the interface whose address is used for BGP communication, and remote-as, which specifies the AS number of the remote peer (here the route reflector, which is in the same AS).

Neighbor-group is configured to import session-group for address-family independent parameters, and to configure address-family dependent parameters, such as next-hop-self, in the corresponding address-family.

The following procedure illustrates MP-BGP configuration on PE.

Step 1 Enter router BGP configuration mode.

router bgp 101


Step 2 Configure the BGP router ID.

bgp router-id 100.111.11.2

Step 3 Configure the VPNv4 unicast address-family to exchange VPNv4 prefixes.

address-family vpnv4 unicast
!

Step 4 Configure the VPNv6 unicast address-family to exchange VPNv6 prefixes.

address-family vpnv6 unicast
!

Step 5 Configure a session-group to define address-family independent parameters.

session-group ibgp

Step 6 Specify remote-as as the route reflector AS number.

remote-as 101

Step 7 Specify update-source as Loopback0 for BGP communication.

update-source Loopback0
!

Step 8 Enter neighbor-group configuration mode.

neighbor-group rr

Step 9 Import the session-group address-family independent parameters.

use session-group ibgp

Step 10 Enable the VPNv4 address-family for the neighbor group and configure address-family dependent parameters under the VPNv4 address-family.

address-family vpnv4 unicast
!

Step 11 Enable the VPNv6 address-family for the neighbor group and configure address-family dependent parameters under the VPNv6 address-family.

address-family vpnv6 unicast
!

Step 12 Use the neighbor-group rr to define the route-reflector address as a VPNv4 and VPNv6 peer.

neighbor 100.111.4.3
use neighbor-group rr

!
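The complete MP-BGP configuration on the PE, consolidated from the steps above:

router bgp 101
 bgp router-id 100.111.11.2
 address-family vpnv4 unicast
 !
 address-family vpnv6 unicast
 !
 session-group ibgp
  remote-as 101
  update-source Loopback0
 !
 neighbor-group rr
  use session-group ibgp
  address-family vpnv4 unicast
  !
  address-family vpnv6 unicast
  !
 !
 neighbor 100.111.4.3
  use neighbor-group rr
 !
!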

The preceding sections described how to configure virtual networks on a PE router. The network can have hundreds of PE routers connecting to campus/branch routers and data centers. A PE router in one location learns VRF prefixes of remote locations using Multiprotocol iBGP. PEs cannot advertise a VPNv4 prefix received from one iBGP peer to another because of the iBGP split-horizon rule, so iBGP requires a full mesh between all iBGP-speaking PEs. This can cause scalability and overhead issues, because each PE router must maintain an iBGP session with all remote PEs and send updates to all iBGP peers, which causes duplication. To address this issue, route reflectors can be deployed, as explained below.


Route Reflector Operation and Configuration

Route reflectors (RRs) address the scalability and overhead issues of requiring a full mesh of iBGP sessions because of the iBGP split-horizon rule. When a device is assigned as an RR and PE devices are assigned as its clients, the split-horizon rule is relaxed on the RR, enabling the RR to reflect prefixes received from one client PE to another client PE. PEs must maintain iBGP sessions only with the RR to send and receive updates. The RR reflects updates received from one PE to the other PEs in the network, eliminating the requirement for an iBGP full mesh.

By default, an RR does not change the next-hop or any other prefix attributes. Prefixes received by PEs still have remote PEs as the next-hop, not the RR, so PEs can send traffic directly to remote PEs. This eliminates the requirement to have the RR in the data path, so the RR can be dedicated to the route-reflection function.

Route Reflector Configuration

This section describes the ASR 1000 RR configuration, which includes configuring a peer-group under router BGP. PEs having the same update policies (such as update-group and remote-as) can be grouped into the same peer-group, which simplifies peer configuration and enables more efficient updating. The peer-group is made an RR client so that the RR can reflect routes received from a client PE to the other client PEs.

Step 1 Configure a loopback interface for the IBGP session.

interface loopback0
 ip address 100.111.4.3 255.255.255.255

Step 2 Enter router BGP configuration mode.

router bgp 101
 bgp router-id 100.111.4.3

Step 3 Define the peer-group rr-client.

 neighbor rr-client peer-group

Step 4 Specify update-source as Loopback0 for BGP communication.

 neighbor rr-client update-source Loopback0

Step 5 Specify remote-as as the AS number of the PE.

 neighbor rr-client remote-as 101

Step 6 Configure the PE router as a peer-group member.

 neighbor 100.111.11.2 peer-group rr-client

Step 7 Enter VPNv4 address-family mode.

 address-family vpnv4

Step 8 Make the peer-group members RR clients.

  neighbor rr-client route-reflector-client

Step 9 Configure the RR to send both standard and extended communities (RT) to the peer-group members.

  neighbor rr-client send-community both

Step 10 Activate the PE as peer for VPNv4 peering under VPNv4 address-family.


neighbor 100.111.11.2 activate
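Consolidated, the route reflector configuration on the ASR 1000 from the steps above is:

interface loopback0
 ip address 100.111.4.3 255.255.255.255
!
router bgp 101
 bgp router-id 100.111.4.3
 neighbor rr-client peer-group
 neighbor rr-client remote-as 101
 neighbor rr-client update-source Loopback0
 neighbor 100.111.11.2 peer-group rr-client
 !
 address-family vpnv4
  neighbor rr-client route-reflector-client
  neighbor rr-client send-community both
  neighbor 100.111.11.2 activate
 !
!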

After configuring PE with the required virtual network configuration described above, transport must be set up to carry virtual network traffic from one location to another. The next section describes how we can implement transport and optimize it with fast detection and convergence for seamless service delivery.

PE and P Transport Configuration

Transport networks, comprising PE and P routers, transport traffic from multiple L3VPNs from one location to another. To achieve seamless communication across virtual networks, transport networks require reachability and label-based forwarding across the transport domain, along with fast failure detection and convergence. Bidirectional Forwarding Detection (BFD) is used for fast failure detection.

Fast convergence uses Remote Loop Free Alternate Fast Reroute (rLFA FRR) and BGP Prefix Independent Convergence (PIC). These methods are described in subsequent sections.

Transport implementation requires PE, P, and RR devices configured using IGP for reachability. These devices also use LDP to exchange labels for prefixes advertised and learned from IGP. The devices maintain a Label Forwarding Information Base (LFIB) to make forwarding decisions.

When sending VRF traffic from a branch or campus router to a remote location, PE encapsulates traffic in MPLS headers, using a label corresponding to the BGP next-hop (remote PE) for the traffic.

Intermediate devices, such as P devices, examine the top label on the MPLS header, perform label swapping, and use the LFIB to forward traffic toward the remote PE. P devices can ignore the VRF traffic and forward packets using only labels. This enables the establishment and use of label-switched paths (LSPs) when a PE device forwards VPN traffic to another location.

Fast Failure Detection Using Bidirectional Forwarding Detection

Link failure detection in the core normally occurs through loss of signal on the interface. This is not sufficient for BGP, however, because BGP neighbors are typically not on the same segment. A link failure (signal loss) at a BGP peer can remain undetected by another BGP peer. Absent some other failure detection method, reconvergence occurs only when BGP timers expire, which is too slow. BFD is a lightweight, fast hello protocol that speeds remote link failure detection.

PE and P devices use BFD as a failure detection mechanism on the core interfaces that informs the IGP about a link or node failure within milliseconds (ms). BFD peers send BFD control packets to each other on the BFD-enabled interfaces at negotiated intervals. If a BFD peer does not receive a control packet and the configured dead timer (in ms) expires, the BFD session is torn down and the IGP is rapidly informed about the failure. The IGP immediately tears down the adjacency with the neighbor and switches traffic to an alternate path. This enables failure detection within milliseconds.
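As a minimal sketch, the BFD settings used on a core-facing interface in this design are applied under the IS-IS interface (the same values appear in the PE and P transport configurations later in this chapter):

router isis core
 interface TenGigE0/0/0/0
  bfd minimum-interval 15
  bfd multiplier 3
  bfd fast-detect ipv4

With a 15 ms control packet interval and a multiplier of 3, a neighbor is declared down roughly 45 ms after its control packets stop arriving.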

Fast Convergence Using Remote Loop Free Alternate Fast Reroute

After BFD detects a failure, the next step is to "fast converge" the network to an alternate path. For IGP prefixes, LFAs enable fast convergence. The type of LFA depends on the network topology. The first type, called simply LFA, is suitable for hub-and-spoke topologies. The second type is called remote LFA (rLFA) and is suitable for ring topologies.


• LFA FRR calculates the backup path for each prefix in the IGP routing table; if a failure is detected, the router immediately switches to the appropriate backup path in about 50 ms. Only loop-free paths are candidates for backup paths.

• rLFA FRR works differently because it is designed for cases with a physical path, but no loop-free alternate paths. In the rLFA case, automatic LDP tunnels are set up to provide LFAs for all network nodes.

Without LFA or rLFA FRR, a router calculates the alternate path after a failure is detected, which results in delayed convergence. However, LFA FRR calculates the alternate paths in advance to enable faster convergence. P and PE devices have alternate paths calculated for all prefixes in the IGP table, and use rLFA FRR to fast reroute in case of failure in a primary path.
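A minimal sketch of how rLFA FRR is enabled in this design, per interface under the IS-IS IPv4 address-family (the full context appears in the PE and P transport configurations that follow):

router isis core
 interface TenGigE0/0/0/0
  address-family ipv4 unicast
   fast-reroute per-prefix level 2
   fast-reroute per-prefix remote-lfa tunnel mpls-ldp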

Fast Convergence Using BGP Prefix Independent Convergence

For BGP prefixes, fast convergence is achieved using BGP PIC, in which BGP calculates an alternate best path and primary best path and installs both paths in the routing table as primary and backup paths. This functionality is similar to rLFA FRR, which is described in the preceding section. If the BGP next-hop remote PE becomes unreachable, BGP immediately switches to the alternate path using BGP PIC instead of recalculating the path after the failure. If the BGP next-hop remote PE is alive but there is a path failure, IGP rLFA FRR handles fast reconvergence to the alternate path and BGP updates the IGP next-hop for the remote PE.
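A minimal sketch of the BGP PIC configuration used in this design (the same commands appear in the PE transport configuration that follows): additional-paths computation is enabled under the VPNv4 address-family, and a route-policy installs one backup path.

router bgp 101
 address-family vpnv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
 !
!
route-policy add-path-to-ibgp
 set path-selection backup 1 install
end-policy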

PE and P Transport Configuration

This section describes how to configure PE and P transport to support fast failure detection and fast convergence.

PE Transport Configuration

PE configuration includes enabling IGP (IS-IS or OSPF can be used) to exchange core and aggregation reachability, and enabling LDP to exchange labels on core-facing interfaces. A loopback interface is also advertised in IGP, as the BGP VPNv4 session is created using update-source Loopback0 as mentioned in PE Operation and Configuration, page 3-2. Using the loopback address to source updates and target updates to remote peers improves reliability; the loopback interface is always up when the router is up, unlike physical interfaces that can have link failures.

BFD is configured on core-facing interfaces using a 15 ms hello interval and multiplier 3 to enable fast failure detection in the transport network. rLFA FRR is used under IS-IS level 2 for fast convergence if a transport network failure occurs. BGP PIC is configured under VPNv4 address-family for fast convergence of VPNv4 Prefixes if a remote PE becomes unreachable.

The following procedure describes PE transport configuration.

Step 1 Configure the loopback interface for the BGP VPNv4 neighborship.

interface Loopback0
 ipv4 address 100.111.11.1 255.255.255.255
 ipv6 address 2001:100:111:11::1/128
!

Step 2 Configure the core interface.

interface TenGigE0/0/0/0
 ipv4 address 10.11.1.0 255.255.255.254
!


Step 3 Enter router IS-IS configuration mode.

router isis core

Step 4 Assign a NET address to the IS-IS process.

net 49.0100.1001.1101.1001.00

Step 5 Enter the IPv4 address-family for IS-IS.

address-family ipv4 unicast

Step 6 Metric-style wide generates new-style TLVs with wider metric fields for IPv4.

metric-style wide
!

Step 7 Enter the IPv6 address-family for IS-IS.

address-family ipv6 unicast

Step 8 Metric-style wide generates new-style TLVs with wider metric fields for IPv6.

metric-style wide
!

Step 9 Configure IS-IS for the Loopback interface.

interface Loopback0

Step 10 Make loopback passive to avoid sending unnecessary hellos on it.

passive

Step 11 Enter the IPv4 address-family for the Loopback interface.

address-family ipv4 unicast
!

Step 12 Enter the IPv6 address-family for the Loopback interface.

address-family ipv6 unicast
!
!

Step 13 Configure IS-IS for the TenGigE0/0/0/0 interface.

interface TenGigE0/0/0/0

Step 14 Configure IS-IS Circuit-Type on the interface.

circuit-type level-2-only

Step 15 Configure Minimum Interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 15

Step 16 Configure BFD multiplier.

bfd multiplier 3

Step 17 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect ipv4

Step 18 Enter the IPv4 Address-family for TenGig interface.

address-family ipv4 unicast

Step 19 Configure IS-IS metric for Interface.

metric 10


Step 20 Enable per prefix FRR for Level 2 prefixes.

fast-reroute per-prefix level 2

Step 21 Configure an FRR path that redirects traffic to a remote LFA tunnel.

fast-reroute per-prefix remote-lfa tunnel mpls-ldp

Step 22 Enable MPLS LDP sync to ensure LDP comes up on the link before the link is used for forwarding, to avoid packet loss.

mpls ldp sync
!
!

Step 23 Enter MPLS LDP configuration mode.

mpls ldp
 log
  graceful-restart
 !

Step 24 Configure the router-id for LDP.

 router-id 100.111.11.1
!

Step 25 Enable LDP on TenGigE0/0/0/0.

 interface TenGigE0/0/0/0
  address-family ipv4
 !

Step 26 Enter BGP configuration mode.

router bgp 101

Step 27 Enter VPNv4 address-family mode.

address-family vpnv4 unicast

Step 28 Configure receive capability of multiple paths for a prefix to the capable peers.

additional-paths receive

Step 29 Configure send capability of multiple paths for a prefix to the capable peers.

additional-paths send

Step 30 Enable BGP PIC functionality with the appropriate route-policy to calculate backup paths.

additional-paths selection route-policy add-path-to-ibgp
!

Step 31 Configure the route-policy used in BGP PIC.

route-policy add-path-to-ibgp

Step 32 Configure the policy to install one backup path.

set path-selection backup 1 install
end-policy
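For reference, the PE transport configuration assembled from Steps 1 through 32 above:

interface Loopback0
 ipv4 address 100.111.11.1 255.255.255.255
 ipv6 address 2001:100:111:11::1/128
!
interface TenGigE0/0/0/0
 ipv4 address 10.11.1.0 255.255.255.254
!
router isis core
 net 49.0100.1001.1101.1001.00
 address-family ipv4 unicast
  metric-style wide
 !
 address-family ipv6 unicast
  metric-style wide
 !
 interface Loopback0
  passive
  address-family ipv4 unicast
  !
  address-family ipv6 unicast
  !
 !
 interface TenGigE0/0/0/0
  circuit-type level-2-only
  bfd minimum-interval 15
  bfd multiplier 3
  bfd fast-detect ipv4
  address-family ipv4 unicast
   metric 10
   fast-reroute per-prefix level 2
   fast-reroute per-prefix remote-lfa tunnel mpls-ldp
   mpls ldp sync
  !
 !
!
mpls ldp
 log
  graceful-restart
 !
 router-id 100.111.11.1
 interface TenGigE0/0/0/0
  address-family ipv4
  !
 !
!
router bgp 101
 address-family vpnv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
 !
!
route-policy add-path-to-ibgp
 set path-selection backup 1 install
end-policy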


P Transport Configuration

P transport configuration includes enabling IGP (IS-IS or OSPF) to exchange core and aggregation reachability, and enabling LDP to exchange labels on core-facing interfaces. BGP is not required on P routers because no VRF is configured on them, so they do not need VPNv4 and VPNv6 prefixes. P routers know only core and aggregation prefixes in the transport network and do not need to know prefixes belonging to VPNs. P routers swap labels based on the top label of packets destined to remote PEs and use the LFIB to build the PE-to-PE LSP. rLFA FRR is used under IS-IS Level 2 for fast convergence if a transport network failure occurs.

Step 1 Configure the core interface connecting to the PE.

interface TenGigE0/0/0/0
 ipv4 address 10.11.1.1 255.255.255.254
!

Step 2 Configure the core interface connecting to the core MPLS network.

interface TenGigE0/0/0/1
 ipv4 address 10.2.1.4 255.255.255.254
!

Step 3 Enter router IS-IS configuration mode.

router isis core

Step 4 Assign a NET address to the IS-IS process.

net 49.0100.1001.1100.2001.00

Step 5 Enter the IPv4 address-family for IS-IS.

address-family ipv4 unicast

Step 6 Metric-style wide generates new-style TLVs with wider metric fields for IPv4.

metric-style wide
!

Step 7 Configure IS-IS for the Loopback interface.

interface Loopback0

Step 8 Make the loopback passive to avoid sending unnecessary hellos on it.

passive

Step 9 Enter the IPv4 address-family for the Loopback interface.

address-family ipv4 unicast
!
!

Step 10 Configure IS-IS for the TenGigE0/0/0/0 interface.

interface TenGigE0/0/0/0

Step 11 Configure IS-IS Circuit-Type on the interface.

circuit-type level-2-only

Step 12 Configure Minimum Interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 15

Step 13 Configure BFD multiplier.

bfd multiplier 3


Step 14 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect ipv4

Step 15 Enter the IPv4 Address-family for TenGig interface.

address-family ipv4 unicast

Step 16 Configure IS-IS metric for Interface.

metric 10

Step 17 Enable MPLS LDP sync to ensure LDP comes up on the link before the link is used for forwarding, to avoid packet loss.

mpls ldp sync
!
!

Step 18 Configure IS-IS for the TenGigE0/0/0/1 interface.

interface TenGigE0/0/0/1

Step 19 Configure IS-IS Circuit-Type on the interface.

circuit-type level-2-only

Step 20 Configure Minimum Interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 15

Step 21 Configure BFD multiplier.

bfd multiplier 3

Step 22 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect ipv4

Step 23 Enter the IPv4 Address-family for TenGig interface.

address-family ipv4 unicast

Step 24 Configure IS-IS metric for Interface.

metric 10

Step 25 Enable per prefix FRR for Level 2 prefixes.

fast-reroute per-prefix level 2

Step 26 Configure an FRR path that redirects traffic to a remote LFA tunnel.

fast-reroute per-prefix remote-lfa tunnel mpls-ldp

Step 27 Enable MPLS LDP sync to ensure LDP comes up on the link before the link is used for forwarding, to avoid packet loss.

mpls ldp sync
!
!

Step 28 Enter MPLS LDP configuration mode.

mpls ldp
 log
  neighbor
  graceful-restart


Step 29 Configure router-id for LDP.

router-id 100.111.2.1

Step 30

Step 31

Enable LDP on TenGig0/0/0/0.

interface TenGigE0/0/0/0

!

Enable LDP on TenGig0/0/0/1.

interface TenGigE0/0/0/1

!

QoS Operation and Implementation in the Core Network

Enterprise virtual networks carry traffic types that include voice, video, critical application traffic, and end-user web traffic. These traffic types require different priorities and treatment based upon their characteristics and their criticality to the business. In the MPLS core network, QoS ensures proper treatment of the virtual networks' traffic being transported. This is achieved as described in this section.

As discussed in previous sections, an MPLS header is imposed on Enterprise virtual network traffic ingressing the MPLS network on the PEs. When this labeled traffic is transported in the core network, the QoS implementation uses the 3-bit MPLS EXP field (values 0-7) present in the MPLS header for proper QoS treatment. The DiffServ PHB, which defines packet-forwarding properties associated with different traffic classes, is divided into the following:

• Expedited Forwarding (EF) —Used for traffic requiring low loss, low latency, low jitter, and assured bandwidth.

• Assured Forwarding (AF) —Allows four classes, each with a certain buffer and bandwidth.

• Best Effort (BE) —Best effort forwarding.

This guide focuses on the MPLS Uniform QoS model, in which the DSCP marking of traffic received on the PE from the branch or campus router is mapped to the corresponding MPLS EXP bits. The mapping shown in Table 3-1 is used to map the different traffic classes to DSCP and MPLS EXP values.

Table 3-1 Traffic Class Mapping

Traffic Class                           PHB   DSCP   MPLS EXP
Network Management                      AF    56     7
Network Control Protocols               AF    48     6
Enterprise Voice and Real-time          EF    46     5
Enterprise Video Distribution           AF    32     4
Enterprise Telepresence                 AF    24     3
Enterprise Critical: In Contract        AF    16     2
Enterprise Critical: Out of Contract    AF    8      1
Enterprise Best Effort                  BE    0      0

The QoS configuration includes configuring class-maps for the different traffic classes mentioned above, each assigned the corresponding MPLS EXP value. While configuring policy-maps, the real-time traffic class CMAP-RT-EXP is configured with the highest priority (level 1); it is also policed to ensure low-latency expedited forwarding (EF). The remaining classes are assigned their respective required bandwidth. WRED is used as the congestion avoidance mechanism for EXP 1 and 2 traffic in the Enterprise critical class CMAP-EC-EXP. The policy-map is applied to the PE and P core interfaces in the egress direction across the MPLS network.

PE and P Core QoS Configuration

Step 1 Configure the class-map for the Enterprise critical traffic.

class-map match-any CMAP-EC-EXP

Step 2 Match MPLS experimental 1 or 2 from the topmost MPLS header of the traffic.

 match mpls experimental topmost 1 2
end-class-map
!

Step 3 Configure the class-map for Enterprise Telepresence traffic.

class-map match-any CMAP-ENT-Tele-EXP

Step 4 Match MPLS experimental 3 from the topmost MPLS header of the traffic.

 match mpls experimental topmost 3
end-class-map
!

Step 5 Configure the class-map for video traffic.

class-map match-any CMAP-Video-EXP

Step 6 Match MPLS experimental 4 from the topmost MPLS header of the traffic.

 match mpls experimental topmost 4
end-class-map
!

Step 7 Configure the class-map for real-time traffic.

class-map match-any CMAP-RT-EXP

Step 8 Match MPLS experimental 5 from the topmost MPLS header of the traffic.

 match mpls experimental topmost 5
end-class-map
!

Step 9 Configure the class-map for control traffic.

class-map match-any CMAP-CTRL-EXP

Step 10 Match MPLS experimental 6 from the topmost MPLS header of the traffic.

 match mpls experimental topmost 6
end-class-map
!

Step 11 Configure the class-map for Network Management traffic.

class-map match-any CMAP-NMgmt-EXP

Step 12 Match MPLS experimental 7 from the topmost MPLS header of the traffic.

 match mpls experimental topmost 7
end-class-map
!
!

Step 13 Configure the policy-map for the 10-Gigabit link.


policy-map PMAP-NNI-E

Step 14 Match the RT class.

class CMAP-RT-EXP

Step 15 Define top priority 1 for the class for low-latency queuing.

priority level 1

Step 16 Police the priority class.

police rate 1 gbps

!

!

class CMAP-CTRL-EXP

Step 17 Assign the desired bandwidth to the class.

 bandwidth 200 mbps
!
class CMAP-NMgmt-EXP
 bandwidth 500 mbps
!
class CMAP-Video-EXP
 bandwidth 2 gbps
!
class CMAP-EC-EXP
 bandwidth 1 gbps

Step 18 Use WRED for the Enterprise critical class for both EXP 1 and EXP 2 for congestion avoidance. Experimental 1 will be dropped early.

 random-detect exp 2 80 ms 100 ms
 random-detect exp 1 40 ms 50 ms
!
class CMAP-ENT-Tele-EXP
 bandwidth 2 gbps
!
class class-default
!
end-policy-map
!

Step 19 Configure the core interface on the P or PE router.

interface TenGigE0/0/0/0

Step 20 Egress service policy on the interface.

service-policy output PMAP-NNI-E
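The complete core QoS configuration from the steps above, consolidated (class-maps, the egress policy-map, and its attachment to a core-facing interface):

class-map match-any CMAP-EC-EXP
 match mpls experimental topmost 1 2
end-class-map
!
class-map match-any CMAP-ENT-Tele-EXP
 match mpls experimental topmost 3
end-class-map
!
class-map match-any CMAP-Video-EXP
 match mpls experimental topmost 4
end-class-map
!
class-map match-any CMAP-RT-EXP
 match mpls experimental topmost 5
end-class-map
!
class-map match-any CMAP-CTRL-EXP
 match mpls experimental topmost 6
end-class-map
!
class-map match-any CMAP-NMgmt-EXP
 match mpls experimental topmost 7
end-class-map
!
policy-map PMAP-NNI-E
 class CMAP-RT-EXP
  priority level 1
  police rate 1 gbps
  !
 !
 class CMAP-CTRL-EXP
  bandwidth 200 mbps
 !
 class CMAP-NMgmt-EXP
  bandwidth 500 mbps
 !
 class CMAP-Video-EXP
  bandwidth 2 gbps
 !
 class CMAP-EC-EXP
  bandwidth 1 gbps
  random-detect exp 2 80 ms 100 ms
  random-detect exp 1 40 ms 50 ms
 !
 class CMAP-ENT-Tele-EXP
  bandwidth 2 gbps
 !
 class class-default
 !
 end-policy-map
!
interface TenGigE0/0/0/0
 service-policy output PMAP-NNI-E
!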

Large Scale Network Design and Implementation

When an MPLS network comprises more than 1000 devices, implementing a hierarchical network design is recommended. In this guide, the hierarchical network design uses labeled BGP, as defined in RFC 3107.

Figure 3-2 shows a network with hierarchy.


Figure 3-2 Large Network, Ethernet/SDH/nV Branch Connectivity

(The figure shows a data center attached to an aggregation network IP/MPLS domain, a core network IP/MPLS domain, and a second aggregation network IP/MPLS domain attaching campus/branch sites over Ethernet and nV access. Each domain builds its own LDP LSPs, and an iBGP (RFC 3107) hierarchical LSP spans the domains end to end.)

• The core and aggregation networks add hierarchy with 3107 ABR at the border of core and aggregation.

• The core and aggregation networks are organized as independent IGP/LDP domains.

• The network domains are interconnected with hierarchical LSPs based on RFC 3107, BGP IPv4+labels. Intra-domain connectivity is based on LDP LSPs.

• Topologies between the PE node and branch router can be Ethernet hub-and-spoke, IP, Ethernet ring, or nV.

Using Core Network Hierarchy to Improve Scale

The main challenges of large network implementation result from network size, such as the size of routing and forwarding tables in individual P and PE devices caused by the large number of network nodes, and from trying to run all nodes in one IGP/LDP domain. In an MPLS environment, unlike in an all-IP environment, all service nodes need a /32 network address as a node identifier. /32 addresses, however, cannot be summarized, so link state databases grow in a linear fashion as devices are added to the MPLS network.

The labeled BGP mechanism, defined in RFC 3107, can be used so that link state databases in core network devices do not have to learn the /32 addresses of all MPLS routers in the access and aggregation domains. The mechanism effectively moves prefixes from the IGP link state database into the BGP table.

Labeled BGP, implemented in the MPLS transport network, introduces hierarchy in the network to provide better scalability and convergence. Labeled BGP ensures all devices only receive needed information to provide end-to-end transport.

Large-scale MPLS transport networks used to transport virtual network traffic can be divided into two IGP areas: the core network is configured as the Open Shortest Path First (OSPF) backbone area or Intermediate System-to-Intermediate System (IS-IS) Level 2, and the aggregation network is configured as an OSPF non-backbone area or IS-IS Level 1. Another option is to run different IGP processes in the core and aggregation networks. No redistribution occurs between core and aggregation IGP levels/areas/processes, which helps to reduce the size of the routing and forwarding tables of the routers in each domain and provides better scalability and faster convergence. Running IGP in the area enables intra-area reachability, and LDP is used to build intra-area LSPs.

Because route information is not redistributed between different IGP levels/areas, PE devices need a mechanism to reach PE device loopbacks in other areas/levels and send VPN traffic. Labeled BGP enables inter-area reachability and accomplishes the end-to-end LSP between PEs. Devices that are connected to both aggregation and core domains are called Area Border Routers (ABRs). ABRs run labeled Interior BGP (iBGP) sessions with PEs in their local aggregation domain and serve as route reflectors for the PEs. PEs advertise their loopback addresses (used for VPNv4 peering) and their corresponding labels to local route reflector ABRs using labeled iBGP. ABRs run labeled iBGP sessions with an RR device in the core domain, which reflects PE loopback addresses and labels learned from one ABR client to other ABR clients without changing the next-hop or other attributes. ABRs learn PE loopback addresses and labels from other aggregation domains and advertise them to PEs in their local aggregation domain. ABRs use next-hop-self while advertising routes to PEs in the local aggregation domain and to RRs in the core domain.

This makes PEs learn remote PE loopback addresses and labels with the local ABR as the BGP next-hop, and ABRs learn remote PE loopback addresses with the remote ABR as the BGP next-hop. PEs use two transport labels when sending labeled VPN traffic into the MPLS cloud: one label for the remote PE and another label for its BGP next-hop (the local ABR). The top label, for the BGP next-hop local ABR, is learned from local IGP/LDP. The label below that, for the remote PE, is learned through labeled iBGP with the local ABR. Intermediate devices across different domains perform label swapping based on the top label in received MPLS packets. This achieves an end-to-end hierarchical LSP without running the entire network in a single IGP/LDP domain. Devices learn only necessary information, such as prefixes in local domains and remote PE loopback addresses, which makes labeled BGP scalable for large networks.
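As a minimal sketch (not a verbatim excerpt from the validated configuration), the ABR behavior described above maps to a labeled BGP configuration of roughly this shape, with route reflection toward the PE clients and next-hop-self applied under the IPv4 labeled-unicast address-family toward both the PEs and the core RR. The neighbor-group name CORE-RR and the allocate-label command are assumptions for illustration:

router bgp 101
 address-family ipv4 unicast
  ! Assumption: allocate labels for the IPv4 prefixes advertised via labeled BGP.
  allocate-label all
 !
 neighbor-group PE
  remote-as 101
  update-source Loopback0
  address-family ipv4 labeled-unicast
   route-reflector-client
   next-hop-self
  !
 !
 neighbor-group CORE-RR
  remote-as 101
  update-source Loopback0
  address-family ipv4 labeled-unicast
   next-hop-self
  !
 !
!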

Figure 3-3 Large Network Control and Data Plane

(The figure shows the aggregation networks running IS-IS Level 1 or an OSPF non-backbone area and the core network running IS-IS Level 2 or the OSPF backbone area. RR ABRs sit at the border of each aggregation domain and the core, applying next-hop-self, with a core RR in the core domain. BGP IPv4+label sessions run between the PEs and their local RR ABR and between the RR ABRs and the core RR. The packet sent by the PE carries a VPN label, a remote PE label, and a local RR ABR label; toward the destination aggregation domain only the VPN label and remote PE label remain. Intra-domain transport uses LDP LSPs, and the end-to-end path is an iBGP hierarchical LSP.)

• Aggregation domains run ISIS level-1/OSPF non-backbone area and core domain runs ISIS level-2/backbone area.

• ABR connects to both aggregation and core domains.

• ABR runs labeled iBGP with PEs in the local aggregation domain and with the core RR in the core domain.

• ABR uses next-hop-self while advertising routes to PEs and the core RR.

Large Scale Hierarchical Core and Aggregation Networks with Hierarchy

To implement the ABR, PE, and core RR transport configuration for large-scale MPLS VPNs, PE routers are configured in IS-IS Level 1 (OSPF non-backbone area). ABR aggregation-domain-facing interfaces are configured in IS-IS Level 1 (OSPF non-backbone area) and core-domain-facing interfaces are configured in IS-IS Level 2 (OSPF backbone area). Core RR interfaces remain in IS-IS Level 2 (or the OSPF backbone area). The PE and local ABR are configured with a labeled iBGP session, with the ABR acting as RR. The core RR is configured with labeled BGP peering with all ABRs. LDP is configured in a similar way to the smaller network. The ABR is configured with next-hop-self for both PE and core labeled BGP peers to achieve a hierarchical LSP. BFD is used on all interfaces as a fast failure detection mechanism. BGP PIC is configured for fast convergence of IPv4 prefixes learned through labeled iBGP. rLFA FRR is configured under IS-IS to provide fast convergence of IGP-learned prefixes.


ABR loopbacks are required in both the aggregation and core domains, since the loopbacks are used for labeled BGP peering with PEs in the local aggregation domain as well as with the RR in the core domain. To achieve this, ABR loopbacks are kept in IS-IS Level 1-2 or the OSPF backbone area.

PE Transport Configuration

Step 1 Enter router IS-IS configuration mode on the PE.

router isis agg-acc

Step 2 Define the NET address.

net 49.0100.1001.1100.7008.00

Step 3 Define the is-type as Level 1 for the PE in the aggregation domain.

is-type level-1

Step 4 Enter the IPv4 address-family for IS-IS.

address-family ipv4 unicast

Step 5 Metric-style wide generates new-style TLVs with wider metric fields for IPv4.

metric-style wide
!

Step 6 Configure IS-IS for the Loopback interface.

interface Loopback0

Step 7 Make the loopback passive to avoid sending unnecessary hellos on it.

passive
point-to-point

Step 8 Enter the IPv4 address-family for the Loopback interface.

address-family ipv4 unicast
!

Step 9 Configure IS-IS for the TenGigE0/2/0/0 interface.

interface TenGigE0/2/0/0

Step 10 Configure minimum interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 15

Step 11 Configure BFD multiplier.

bfd multiplier 3

Step 12 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect ipv4

Step 13 Configure point-to-point IS-IS interface.

point-to-point

Step 14 Enter the IPv4 Address-family for TenGig interface.

address-family ipv4 unicast

Step 15 Enable per prefix FRR for Level 2 prefixes.


fast-reroute per-prefix level 2

Step 16 Configure an FRR path that redirects traffic to a remote LFA tunnel.

fast-reroute per-prefix remote-lfa tunnel mpls-ldp

Step 17 Configure IS-IS metric for Interface.

metric 10

Step 18 Enable MPLS LDP sync to ensure LDP comes up on the link before the link is used for forwarding, to avoid packet loss.

mpls ldp sync
!
!

Step 19 Enter router BGP configuration mode.

router bgp 101
!

Step 20 Enter the IPv4 address-family.

address-family ipv4 unicast

Step 21 Configure receive capability of multiple paths for a prefix to the capable peers.

additional-paths receive

Step 22 Configure send capability of multiple paths for a prefix to the capable peers.

additional-paths send

Step 23 Enable BGP PIC functionality with the appropriate route-policy to calculate backup paths.

additional-paths selection route-policy add-path-to-ibgp
!

Step 24 Configure a session-group to define parameters that are address-family independent.

session-group intra-as

Step 25 Specify remote-as as AS number of RR.

remote-as 101

Step 26 Specify update-source as Loopback0 for BGP communication.

update-source Loopback0
!

Step 27 Enter neighbor-group configuration mode.

neighbor-group ABR

Step 28 Import Session-group AF-independent parameters.

use session-group intra-as

Step 29 Enable the labeled BGP address-family for the neighbor group.

address-family ipv4 labeled-unicast
!

Step 30 Configure the ABR loopback as a neighbor.

neighbor 100.111.3.1

Step 31 Inherit neighbor-group ABR parameters.

use neighbor-group ABR

!


!

Step 32 Configure the route-policy used in BGP PIC.

route-policy add-path-to-ibgp

Step 33 Configure the policy to install one backup path.

set path-selection backup 1 install
end-policy

Step 34 Enter MPLS LDP configuration mode.

mpls ldp
 log
  neighbor
  graceful-restart

Step 35 Configure router-id for LDP.

!

router-id 100.111.7.8

Step 36 Enable LDP on TenGig0/2/0/0.

interface TenGigE0/2/0/0

ABR Transport Configuration

Step 1 Enter router IS-IS configuration mode on the ABR.

router isis agg-acc

Step 2 Define the NET address.

net 49.0100.1001.1100.3001.00

Step 3 Enter the IPv4 address-family for IS-IS.

address-family ipv4 unicast

Step 4 Metric-style wide generates new-style TLVs with wider metric fields for IPv4.

metric-style wide
!

Step 5 Configure IS-IS for the Loopback interface.

interface Loopback0

Step 6 Make the loopback passive to avoid sending unnecessary hellos on it.

passive
point-to-point

Step 7 Enter the IPv4 address-family for the Loopback interface.

address-family ipv4 unicast
!

Step 8 Configure IS-IS for the TenGigE0/2/0/0 interface.

interface TenGigE0/2/0/0

Step 9 Configure the aggregation-facing interface as an IS-IS Level 1 interface.


circuit-type level-1

Step 10 Configure the minimum interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 15

Step 11 Configure BFD multiplier

bfd multiplier 3

Step 12 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect ipv4

Step 13 Configure point-to-point IS-IS interface.

point-to-point

address-family ipv4 unicast

Step 14 Enable per prefix FRR for Level 2 prefixes.

fast-reroute per-prefix level 2

Step 15 Configure an FRR path that redirects traffic to a remote LFA tunnel.

fast-reroute per-prefix remote-lfa tunnel mpls-ldp

Step 16 Configure IS-IS metric for Interface.

metric 10

Step 17 Enable MPLS LDP sync to ensure LDP comes up on the link before the link is used for forwarding, avoiding packet loss.

mpls ldp sync

!

!

Step 18 Configure IS-IS for the TenGigE0/2/0/1 interface.

interface TenGigE0/2/0/1

Step 19 Configure core-facing interface as IS-IS level-2 interface.

circuit-type level-2-only

Step 20 Configure minimum interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 15

Step 21 Configure BFD multiplier.

bfd multiplier 3

Step 22 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect ipv4

Step 23 Configure point-to-point IS-IS interface.

point-to-point

address-family ipv4 unicast

Step 24 Enable per prefix FRR for Level 2 prefixes.

fast-reroute per-prefix level 2

Step 25 Configure an FRR path that redirects traffic to a remote LFA tunnel.


fast-reroute per-prefix remote-lfa tunnel mpls-ldp

Step 26 Configure IS-IS metric for Interface.

metric 10

Step 27 Enable MPLS LDP sync to ensure LDP comes up on the link before the link is used for forwarding, avoiding packet loss.

mpls ldp sync

!

!

Step 28 Enter router BGP configuration mode.

router bgp 101

!

Step 29 Enter the IPv4 address-family.

address-family ipv4 unicast

Step 30 Configure receive capability of multiple paths for a prefix to the capable peers.

additional-paths receive

Step 31 Configure send capability of multiple paths for a prefix to the capable peers.

additional-paths send

Step 32 Enable BGP PIC functionality with the appropriate route-policy to calculate backup paths.

additional-paths selection route-policy add-path-to-ibgp

!

Step 33 Configure a session-group to define parameters that are address-family independent.

session-group intra-as

Step 34 Specify remote-as as AS number of RR.

remote-as 101

Step 35 Specify update-source as Loopback0 for BGP communication.

update-source Loopback0

!

Step 36 Enter neighbor-group PE configuration mode.

neighbor-group PE

Step 37 Import session-group AF-independent parameters.

use session-group intra-as

Step 38 Enable labeled BGP address-family for neighbor group.

address-family ipv4 labeled-unicast

Step 39 Configure peer-group for PE as RR client.

route-reflector-client

Step 40 Set next-hop-self for prefixes advertised to the PE.

next-hop-self

!

Step 41 Enter neighbor-group CORE configuration mode.

neighbor-group CORE


Step 42 Import session-group AF-independent parameters.

use session-group intra-as

Step 43 Enable Labeled BGP address-family for neighbor-group.

address-family ipv4 labeled-unicast

Step 44 Set next-hop-self for prefixes advertised to the core RR.

next-hop-self

!

Step 45 Configure the PE loopback as a neighbor.

neighbor 100.111.7.8

Step 46 Inherit neighbor-group PE parameters.

use neighbor-group PE

!

Step 47 Configure the core RR loopback as a neighbor.

neighbor 100.111.11.3

Step 48 Inherit neighbor-group core parameters.

use neighbor-group CORE

!

!

Step 49 Configure route-policy used in BGP PIC.

route-policy add-path-to-ibgp

Step 50 Configure the policy to install one backup path.

set path-selection backup 1 install

end-policy

Step 51 Enter MPLS LDP configuration mode.

mpls ldp

log

neighbor

graceful-restart

!

Step 52 Configure the router-id for LDP.

router-id 100.111.3.1

Step 53 Enable LDP on TenGigE0/2/0/0.

interface TenGigE0/2/0/0

!

Step 54 Enable LDP on TenGigE0/2/0/1.

interface TenGigE0/2/0/1

!

!
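Assembled from the steps above, the ABR's labeled-BGP configuration is, in outline, as follows; only the transport-related address family is shown.

router bgp 101
 address-family ipv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
 !
 session-group intra-as
  remote-as 101
  update-source Loopback0
 !
 neighbor-group PE
  use session-group intra-as
  address-family ipv4 labeled-unicast
   route-reflector-client
   next-hop-self
 !
 neighbor-group CORE
  use session-group intra-as
  address-family ipv4 labeled-unicast
   next-hop-self
 !
 neighbor 100.111.7.8
  use neighbor-group PE
 !
 neighbor 100.111.11.3
  use neighbor-group CORE
!
route-policy add-path-to-ibgp
 set path-selection backup 1 install
end-policy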


CORE RR Transport Configuration

Step 1 Enter router IS-IS configuration mode.

router isis agg-acc

Step 2 Define the NET address.

net 49.0100.1001.1100.1103.00

Step 3 Enter the IPv4 address-family for IS-IS.

address-family ipv4 unicast

Step 4 Metric-style wide generates new-style TLVs with wider metric fields for IPv4.

metric-style wide

!

Step 5 Configure IS-IS for the Loopback interface.

interface Loopback0

Step 6 Make the loopback passive to avoid sending unnecessary hellos on it.

passive

point-to-point

Step 7 Enter the IPv4 address-family for the Loopback.

address-family ipv4 unicast

!

Step 8 Configure IS-IS for the TenGigE0/2/0/0 interface.

interface TenGigE0/2/0/0

Step 9 Configure the core interface as an IS-IS level-2 interface.

circuit-type level-2-only

Step 10 Configure minimum interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 15

Step 11 Configure BFD multiplier.

bfd multiplier 3

Step 12 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect ipv4

Step 13 Configure point-to-point IS-IS interface.

point-to-point

address-family ipv4 unicast

Step 14 Enable per-prefix FRR for Level 2 prefixes.

fast-reroute per-prefix level 2

Step 15 Configure an FRR path that redirects traffic to a remote LFA tunnel.

fast-reroute per-prefix remote-lfa tunnel mpls-ldp

Step 16 Configure IS-IS metric for interface.


metric 10

Step 17 Enable MPLS LDP sync to ensure LDP comes up on the link before the link is used for forwarding, avoiding packet loss.

mpls ldp sync

!

!

Step 18 Enter router BGP configuration mode.

router bgp 101

!

Step 19 Enter the IPv4 address-family.

address-family ipv4 unicast

Step 20 Configure receive capability of multiple paths for a prefix to the capable peers.

additional-paths receive

Step 21 Configure send capability of multiple paths for a prefix to the capable peers.

additional-paths send

Step 22 Enable BGP PIC functionality with the appropriate route-policy to calculate backup paths.

additional-paths selection route-policy add-path-to-ibgp

!

Step 23 Configure a session-group to define parameters that are address-family independent.

session-group intra-as

Step 24 Specify remote-as as AS number of RR.

remote-as 101

Step 25 Specify update-source as Loopback0 for BGP communication.

update-source Loopback0

!

!

Step 26 Enter neighbor-group ABR configuration mode.

neighbor-group ABR

Step 27 Import session-group AF-independent parameters.

use session-group intra-as

Step 28 Enable labeled BGP address-family for neighbor group.

address-family ipv4 labeled-unicast

Step 29 Configure the peer group for the ABR as an RR client.

route-reflector-client

!

Step 30 Configure the ABR loopback as a neighbor.

neighbor 100.111.11.3

Step 31 Inherit neighbor-group ABR parameters.

use neighbor-group ABR

!

!

Step 32 Enter MPLS LDP configuration mode.


mpls ldp

log

neighbor

graceful-restart

Step 33 Configure the router-id for LDP.

router-id 100.111.2.1

!

Step 34 Enable LDP on TenGigE0/2/0/0.

interface TenGigE0/2/0/0

!
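Assembled from the steps above, the core RR's BGP configuration reduces to the following outline.

router bgp 101
 address-family ipv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
 !
 session-group intra-as
  remote-as 101
  update-source Loopback0
 !
 neighbor-group ABR
  use session-group intra-as
  address-family ipv4 labeled-unicast
   route-reflector-client
 !
 neighbor 100.111.11.3
  use neighbor-group ABR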

This section described how to implement a hierarchical transport network using labeled BGP as a scalable solution for a large-scale network, with fast failure detection and fast convergence mechanisms. This solution helps avoid unnecessary resource usage, simplifies network implementation, and achieves faster convergence for large networks.

Virtual network implementation on the PE, including VRF creation, MP-BGP, BGP PIC, rLFA, VPNv4 RR, transport QoS, and P configuration, remains the same in concept and configuration as described in Small Network Design and Implementation, page 3-1.


C H A P T E R 4

PE-to-CE Design Options

While the domain creating the MPLS L3 service consisting of P and PE routers remains the same regardless of access technologies, the technologies and designs used to connect the PE to CE device varies considerably based on technology preference, installed base, and operational expertise.

Common characteristics, however, exist for each of the options. Each design needs to consider the following:

• The topology implemented, either hub-and-spoke or rings

• How redundancy is configured

• The type of QoS implementation

Network availability is critical for enterprises because network outages often lead to loss of revenue. In order to improve network reliability, branch/Campus routers and data centers are multihomed on PE devices using one of the various access topologies to achieve PE node redundancy. Each topology should, however, be reliable and resilient to provide seamless connectivity. This is achieved as described in this chapter, which includes the following major topics:

Inter-Chassis Communication Protocol, page 4-1

Ethernet Access, page 4-2

nV (Network Virtualization) Access, page 4-16

Native IP-Connected Access, page 4-25

MPLS Access using Pseudowire Headend, page 4-28

Inter-Chassis Communication Protocol

PE nodes connecting to dual-homed CE work in active/standby model with active PE taking care of forwarding and standby PE monitoring the active PE status to take over forwarding in case of active PE failure. The nodes require a mechanism to communicate local connectivity failure to the CE and to detect peer node failure condition so that traffic can be moved to the standby PE. Inter-Chassis Communication

Protocol (ICCP) provides the control channel to communicate this information.

ICCP allows active and standby PEs, connecting to dual-homed CPE, to exchange information regarding local link failure to CPE and detect peer node failure or its Core Isolation. This critical information helps to move forwarding from active to standby PE within milliseconds. PEs can be co-located or geo-redundant. ICCP communication between PEs occurs either using dedicated link between PEs or using the core network. ICCP configuration includes configuring redundancy group (RG) on both PEs with each other's address for ICCP communication. Using this information, PEs set up ICCP control


connection, and different applications like Multichassis Link Aggregation Group (MC-LAG) and Network Virtualization (nV), described in the next sections, use this control connection to share state information. ICCP is configured as described below.

ICCP Configuration

Step 1 Add an ICCP redundancy group with the specified group-id.

redundancy

iccp

group group-id

Step 2 Configure the ICCP peer for this redundancy group. Only one neighbor can be configured per redundancy group. The IP address is the LDP router-ID of the neighbor. This configuration is required for ICCP to function.

member

neighbor neighbor-ip-address

!

Step 3 Configure ICCP backbone interfaces to detect isolation from the network core and trigger switchover to the peer PE in case core isolation occurs on the active PE. Multiple backbone interfaces can be configured for each redundancy group. When all the backbone interfaces are not UP, this is an indication of core isolation.

backbone

backbone interface interface-type-id

!
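As a worked example, using the group number, neighbor, and backbone interfaces from the MC-LAG configuration later in Table 4-1 (the values are illustrative), the ICCP portion of a PE configuration would look like:

redundancy
 iccp
  group 222
   member
    neighbor 100.111.11.2
   !
   backbone
    interface TenGigE0/0/0/0
    interface TenGigE0/0/0/2
   !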

We discussed how ICCP provides a control channel between PEs to communicate state information, providing a resilient access infrastructure that can be used by different topologies. The next section discusses various access topologies that can be implemented among branch, campus, or data center devices and the Enterprise L3VPN network. Each topology ensures redundancy and fast failure detection and convergence mechanisms to provide seamless last-mile connectivity.

Ethernet Access

Ethernet access can be implemented as hub-and-spoke or ring access, as described below.

Hub-and-Spoke Using MC-LAG Active/Standby

In hub-and-spoke access topology, CE device is dual homed to PE devices in the MPLS VPN network.

The MC-LAG feature provides an end-to-end interchassis redundancy solution for Enterprise. MC-LAG involves PE devices collaborating through ICCP connection to act as a single Link Aggregation Group

(LAG) from the perspective of CE device, thus providing device-level and link-level redundancy. To achieve this, PE devices use ICCP connection to coordinate with each other to present a single LACP bundle (spanning the two devices) to the CE device. Only one of the PE devices forwards traffic at any one time, eliminating the risk of forwarding loops. L3VPN service is configured on this bundle interface or subinterface on PE. PE devices coordinate through the ICCP connection to perform a switchover while presenting an unchanged bundle interface to the CE for the following failure events:


Link failure —A port or link between the CE and one of the PEs fails.

Device failure —Meltdown or reload of one of the PEs, with total loss of connectivity to the CE, the core and the other PE.

Core isolation —A PE loses its connectivity to the core network and therefore is of no value, being unable to forward traffic to or from the CE.

Figure 4-1 Hub-and-Spoke Access with MLACP

A loss of connectivity between the PEs may lead both devices to assume that the other has experienced device failure; this causes them to attempt to take on the active role, which causes a loop. CE can mitigate this situation by limiting the number of links so that only links connected to one PE are active at a time.

Hub-and-spoke access configuration is described in Table 4-1.

Table 4-1 Hub-and-Spoke Access Configuration

PE1 Configuration:

redundancy
 iccp
  group 222
   mlacp node 1
   mlacp system mac 0000.000e.1100
   mlacp system priority 1
   member neighbor 100.111.11.2
   backbone
    interface TenGigE0/0/0/0
    interface TenGigE0/0/0/2
!
interface Bundle-Ether222
!
interface GigE0/1/0/0
 bundle id 222 mode active
!

PE2 Configuration:

redundancy
 iccp
  group 222
   mlacp node 2
   mlacp system mac 0000.000e.1100
   mlacp system priority 1
   member neighbor 100.111.11.1
   backbone
    interface TenGigE0/0/0/0
    interface TenGigE0/0/0/2
!
interface Bundle-Ether222
!
interface GigE0/1/0/0
 bundle id 222 mode active
!

Explanation:

group 222: Adds redundancy configuration mode for ICCP group 222.

mlacp node: Node identifier used in this ICCP group. Should be unique for each PE.

mlacp system mac: Configures the LACP system ID to be used in this ICCP group. Should be the same on both PEs.

mlacp system priority: Sets the LACP system priority to be used in this ICCP group. Recommended to configure higher priority (lower value) on PEs.

member neighbor: Configures the neighbor PE for the redundancy group.

backbone interface: Configures ICCP backbone interfaces. When all backbone interfaces are not UP, this is an indication of core isolation. When one or more backbone interfaces are UP, the POA is not isolated from the network core.

interface Bundle-Ether222 and bundle id 222 mode active: Configures the bundle interface toward the CE.

Table 4-2 describes CE configuration.

Table 4-2 CE Configuration

CE Configuration:

interface gig 0/10
 channel-group 1 mode active
!
interface gig 0/11
 channel-group 1 mode active
!
interface port-channel 1
 lacp max-bundle 1
!

Explanation: Configures the CE interfaces towards the PEs in a port-channel. lacp max-bundle 1 defines the maximum number of active bundled LACP ports allowed in the port channel. In our case, both PEs have one link each to the CPE and only one link remains active.
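L3VPN service is then enabled on the MC-LAG bundle interface or a subinterface of it, exactly as on any other PE attachment circuit. The following sketch is illustrative only: the subinterface number, VLAN, and addresses are assumptions, while the VRF name follows the examples used elsewhere in this guide.

interface Bundle-Ether222.100
 vrf BUS-VPN2
 ipv4 address 100.64.100.1 255.255.255.0
 ipv6 address 2001:100:64:100::1/64
 encapsulation dot1q 100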

MC-LAG provides interchassis redundancy based on the active/standby PE model. In order to achieve the active/active PE model for both load balancing and redundancy, we can use VRRP as described below.

Hub-and-Spoke with VRRP IPv4 and IPv6 Active/Active

In hub-and-spoke access topology, the CE device is dual homed to PE devices in the MPLS VPN network. VRRP is used to provide VLAN-based redundancy and load balancing between PEs by configuring VRRP groups for multiple data VLANs on PEs. Each PE acts as a VRRP master for a set of

VLANs. CE uses VRRP address as the default gateway. Half of the VLAN's traffic uses one VRRP master PE and the other half uses the other VRRP master PE. If any link or node fails on a PE, all traffic is switched to the other PE and it takes over the role of VRRP master for all the VLANs. This way both load balancing and redundancy between PEs is achieved using VRRP. BFD can be used to fast detect the

VRRP peer failure. In order to detect core isolation, VRRP can be configured with backbone interface tracking so that if the backbone interface goes down, PE will decrease its VRRP priority and the peer PE will take master ownership for all the VLANs and switchover the traffic.

The branch /campus router CE is configured so that each of its uplinks to PEs is configured to forward all local VLANs. The data-path forwarding scheme causes the CE to automatically learn which PE or interface is active for a given VLAN. This learning occurs at an individual destination MAC address level.


Hub-and-spoke with VRRP configuration includes configuring bundle interface on both PE devices on the links connecting to the CE. In this case, although bundle interfaces are used, in contrast to MC-LAG, they are not aggregated across the two PEs. On PE ASR9000s, bundle subinterfaces are configured to match data VLANs, and VRF are configured on them for L3VPN service. VRRP is configured on these

L3 interfaces. For achieving ECMP, one PE is configured with a higher priority for one VLAN VRRP group and the other PE for another VLAN VRRP group. VRRP hello timers can be changed and set to a minimum available value of 100msec. BFD is configured for VRRP for fast failover and recovery. For core isolation tracking, VRRP is configured with backbone interface tracking for each group so that if all backbone interfaces go down, the overall VRRP priority will be lowered below peer PE VRRP priority and the peer PE can take the master ownership.

Figure 4-2 Hub-and-Spoke Access with VRRP

PE Configuration

Step 1 Enter VRRP configuration mode.

router vrrp

Step 2 Enter bundle subinterface VRRP configuration mode.

interface Bundle-Ether1.12

Step 3 Enter the VRRP IPv4 address family for the bundle subinterface.

address-family ipv4

Step 4 Configure VRRP group 112.

vrrp 112

Step 5 Set the priority for VRRP group 112 to 254 so that this PE becomes VRRP active for the group.

priority 254

Step 6 Allow preemption to be delayed for a configurable time period, allowing the router to populate its routing table before becoming the active router.

preempt delay 15

Step 7 Configure the VRRP address for the VRRP group.


address 112.1.1.1

Configure millisecond timers for advertisement with force keyword to force the timers.

timer msec 100 force

Step 8 BFD is enabled between PEs to detect fast failures.

bfd fast-detect peer ipv4 112.1.1.3

Step 9 Enable backbone tracking so that if one interface goes down, the VRRP priority is lowered by 100, and if two interfaces go down (core isolation), the priority is lowered by 200, which is lower than the peer default priority, and switchover takes place.

track interface TenGigE0/0/0/0 100

track interface TenGigE0/0/0/2 100

!

!

Step 10 Enter VRRP IPv6 address family for bundle subinterface.

address-family ipv6

Step 11 Configure VRRP group 112.

vrrp 112

Step 12 Make high priority for VRRP group 112 to 254 so that PE becomes VRRP active for this group.

priority 254

Step 13 Allow preemption to be delayed for a configurable time period, allowing the router to populate its routing table before becoming the active router.

preempt delay 15

Step 14 Configure VRRP address for the VRRP group.

address global 2001:112:1:1::1

Step 15 Configure millisecond timers for advertisement with force keyword to force the timers.

timer msec 100 force

address linklocal autoconfig

Step 16 Enter Bundle subinterface VRRP Configuration Mode.

interface Bundle-Ether1.13

Step 17 Enter VRRP IPv4 address family for bundle subinterface.

address-family ipv4

Step 18 Configure VRRP group 113. Default priority for VRRP group 113 so that other PE with 254 priority becomes VRRP active for this group.

vrrp 113

Step 19 Configure VRRP address for the VRRP group.

address 113.1.1.1

Step 20 Configure millisecond timers for advertisement with force keyword to force the timers.

timer msec 100 force

Step 21 BFD is enabled between PEs to detect fast failures.


bfd fast-detect peer ipv4 113.1.1.3

!

!

Step 22 Enter VRRP IPv6 address family for bundle subinterface.

address-family ipv6

Step 23 Configure VRRP group 113. Default priority for VRRP group 113 so that other PE becomes VRRP active for this group.

vrrp 113

Step 24 Configure the VRRP address for the VRRP group.

address global 2001:113:1:1::1

address linklocal autoconfig

Step 25 Configure millisecond timers for advertisement with the force keyword to force the timers.

timer msec 100 force

Step 26 BFD is enabled between PEs to detect fast failures.

Step 27 Configure the physical interface into the bundle.

interface GigabitEthernet0/3/1/12

bundle id 1 mode on

!

interface Bundle-Ether1.12

Step 28 Configure VRF under interface for L3VPN service.

vrf BUS-VPN2

ipv4 address 112.1.1.2 255.255.255.0

ipv6 address 2001:112:1:1::2/64

encapsulation dot1q 112

!

interface Bundle-Ether1.13

Step 29 Configure VRF under interface for L3VPN service.

vrf BUS-VPN2

ipv4 address 113.1.1.2 255.255.255.0

ipv6 address 2001:113:1:1::2/64

encapsulation dot1q 113
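Assembled from the steps above, the first PE's VRRP configuration for the two groups is, in outline, as follows (the second PE mirrors it with priority 254 on group 113 instead of 112; the second tracked backbone interface shown here is an assumption, track the two core-facing links of the PE):

router vrrp
 interface Bundle-Ether1.12
  address-family ipv4
   vrrp 112
    priority 254
    preempt delay 15
    address 112.1.1.1
    timer msec 100 force
    bfd fast-detect peer ipv4 112.1.1.3
    track interface TenGigE0/0/0/0 100
    track interface TenGigE0/0/0/2 100
 !
 interface Bundle-Ether1.13
  address-family ipv4
   vrrp 113
    address 113.1.1.1
    timer msec 100 force
    bfd fast-detect peer ipv4 113.1.1.3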

Access switch is configured with data VLANs allowed on PE and CE-connecting interfaces. Spanning tree is disabled as Pseudo MLACP takes care of the loop prevention.

Access Switch Configuration

Step 1 Disable spanning tree for the data VLANs used in Pseudo MLACP.

no spanning-tree vlan 112-113

Step 2 The trunks connecting to the CE and PE have the same configuration, allowing the data VLANs on the trunks.

interface GigabitEthernet0/1

switchport trunk allowed vlan 100-103,112,113


switchport mode trunk

!

interface GigabitEthernet0/13

switchport trunk allowed vlan 100-103,112,113

switchport mode trunk

!

interface GigabitEthernet0/14

switchport trunk allowed vlan 100-103,112,113

switchport mode trunk

CPE Configuration

Step 1

Step 2

Step 3

SVI configuration.

interface Vlan112

ip address 112.1.1.251 255.255.255.0

ipv6 address 2001:112:1:1::251/64

!

SVI configuration.

interface Vlan113 ip address 113.1.1.251 255.255.255.0 ipv6 address 2001:113:1:1::251/64

!

IPv4 and IPv6 static routes configured with next hop as VRRP address. One PE is master for one VRRP address and the other PE is master for other VRRP address.

ip route 112.2.1.0 255.255.255.0 112.1.1.1 ip route 113.2.1.0 255.255.255.0 113.1.1.1 ipv6 route 2001:112:2:1::/64 2001:112:1:1::1 ipv6 route 2001:113:2:1::/64 2001:113:1:1::1

G.8032 Ring Access with VRRP IPv4 and IPv6

In this access topology, PEs are connected to a G.8032 Ethernet ring formed by connecting Ethernet access nodes to each other in a ring form. The G.8032 Ethernet ring protection switching protocol elects a specific link to protect the entire ring from loops. Such a link, which is called the Ring Protection Link

(RPL), is typically maintained in disabled state by the protocol to prevent loops. The device connecting to the RPL link is called the RPL owner responsible for blocking RPL link. Upon a node or a link failure in the ring, the RPL link is activated allowing forwarding to resume over the ring. G.8032 uses Ring

Automatic Protection Switching (R-APS) messages to coordinate the activities of switching the RPL on and off using a specified VLAN for the APS channel.

The G.8032 protocol also allows superimposing multiple logical rings over the same physical topology by using different instances. Each instance contains an inclusion list of VLAN IDs and defines different

RPL links. In this guide, we are using two G.8032 instances with odd-numbered and even-numbered

VLANs. ASR9000's PEs also participate in the ring and act as the RPL owner. One PE acts as RPL owner for RPL for even-numbered VLAN's instance and the other PE as RPL owner for RPL for odd-numbered

VLAN's instance so one PE remains in blocking state for one instance and other PE for other instance.

Hence, load balancing and redundancy are achieved by making use of two RPLs, each RPL serving one instance.


In the G.8032 configuration, PE devices, which are configured as RPL owner nodes for one of the two instances, are specified with the interface connected to the ring. Two instances are configured for odd and even VLANs. PEs are configured as RPL owner for one of the instances each to achieve load balancing and redundancy. Both instances are configured with dot1q subinterface for the respective APS channel communication.

PEs are configured with BVI interfaces for VLANs in both instances and VRF is configured on BVI interfaces for L3VPN service. CE interface connecting to G.8032 ring is configured with trunk allowing all VLANs on it and SVIs configured on CE for L3 communication. BVIs are configured with First Hop

Redundancy Protocol (FHRP) and CE uses FHRP address as default gateway. In our example, we are using VRRP on PEs as FHRP although we can use any available FHRP protocol. PEs are configured with high VRRP priority for VLANs in the case for which they are not RPL owner. CE uses VRRP address as default gateway. Since VRRP communication between PEs will be blocked along the ring due to

G.8032 loop prevention mechanism, a pseudowire configured between PEs exists that enables VRRP communication. In normal condition, CE sends traffic directly along the ring to VRRP active PE gateway. Two failure conditions exist:

• In the case of link failure in ring, both PEs will open their RPL links for both instances and retain their VRRP states as VRRP communication between them is still up using pseudowire. Due to the broken ring, CE will have direct connectivity to only one PE along the ring, depending on which section (right or left) of G8032 ring has failed. In that case, CE connectivity to other PE will use the path to reachable PE along the ring and then use pseudowire between PEs.

• In the case of PE node failure, pseudowire connectivity between PEs will go down, causing VRRP communication to also go down. The PE that is still up becomes VRRP active for all VLANs, and all traffic from the CE is sent to that PE.

Figure 4-3 Ethernet Access with G.8032 Ring

PE's dot-1q subinterface for data VLAN communication with CE, pseudowire connecting both PEs and

BVI interface are configured in the same bridge domain, which allows both PEs and CE in same broadcast domain for that data VLAN. So if the link fails, the CE can still communicate to both PEs along the available path and pseudowire.

PE Configuration

Step 1 Configure the interface connecting to the G.8032 ring.

interface TenGigE0/3/0/0

!

Step 2 Configure the subinterface for data VLAN 118.

interface TenGigE0/3/0/0.118 l2transport

encapsulation dot1q 118

rewrite ingress tag pop 1 symmetric

!

Step 3 Configure the subinterface for data VLAN 119.

interface TenGigE0/3/0/0.119 l2transport

encapsulation dot1q 119

Step 4 Symmetrically pop 1 tag when receiving packets and push 1 tag when sending traffic from the interface.

rewrite ingress tag pop 1 symmetric

!

Step 5 Enter interface BVI118 configuration mode.

interface BVI118

Step 6 Configure the VRF under the interface.

vrf BUS-VPN2

ipv4 address 118.1.1.2 255.255.255.0

ipv6 address 2001:118:1:1::2/64

!

Step 7 Enter interface BVI119 configuration mode.

interface BVI119

Step 8 Configure the VRF under the interface.

vrf CE-VPN-RING-2

ipv4 address 119.1.1.2 255.255.255.0

ipv6 address 2001:119:1:1::2/64

!

!

Step 9 Enter VRRP Configuration Mode.

router vrrp

Step 10 Enter Bundle subinterface VRRP Configuration mode.

interface BVI118

Step 11 Enter VRRP IPv4 address family for bundle subinterface.

address-family ipv4

Step 12 Configure VRRP group 118.

vrrp 118

Step 13 Make high priority for VRRP group 118 to 254 so that PE becomes VRRP active for this group.

priority 254

Step 14 Allow preemption to be delayed for a configurable time period, allowing the router to populate its routing table before becoming the active router.


preempt delay 15

Step 15 Configure VRRP address for the VRRP group.

address 118.1.1.1

Step 16 Configure millisecond timers for advertisement with force keyword to force the timers.

timer msec 100 force

Step 17 BFD enabled between PEs to detect fast failures.

bfd fast-detect peer ipv4 118.1.1.3

Step 18 Enter VRRP IPv6 address family for bundle subinterface.

address-family ipv6

Step 19 Configure VRRP group 118.

vrrp 118

Step 20 Make high priority for VRRP group 118 to 254 so that PE becomes VRRP active for this group.

priority 254

Step 21 Allow preemption to be delayed for a configurable time period, allowing the router to populate its routing table before becoming the active router.

preempt delay 15

Step 22 Configure VRRP address for the VRRP group.

address global 2001:118:1:1::1

address linklocal autoconfig

Step 23 Configure millisecond timers for advertisement with force keyword to force the timers.

timer msec 100 force

Step 24 Enter Bundle subinterface VRRP Configuration mode.

interface BVI119

Step 25 Enter VRRP IPv4 address family for bundle subinterface.

address-family ipv4

Step 26 Configure VRRP group 119. Default priority for VRRP group 119 such that other PE with 254 priority becomes VRRP active for this group.

vrrp 119

Step 27 Configure VRRP address for the VRRP group.

address 119.1.1.1

Step 28 Configure millisecond timers for advertisement with force keyword to force the timers.

timer msec 100 force

Step 29 BFD enabled between PEs to detect fast failures.

bfd fast-detect peer ipv4 119.1.1.3

Step 30 Enter VRRP IPv6 address family for bundle subinterface.

address-family ipv6


Step 31 Configure VRRP group 119. The default priority is kept so that the other PE becomes VRRP active for this group.

vrrp 119

Step 32 Configure VRRP address for the VRRP group.

address global 2001:119:1:1::1

address linklocal autoconfig

Step 33 Configure millisecond timers for advertisement with force keyword to force the timers

timer msec 100 force

!

!

Step 34 Enter L2VPN Configuration mode.

l2vpn

Step 35 Configure bridge group named L2VPN.

bridge group L2VPN

Step 36 Configure Bridge-domain named CE-L3VPN-118.

bridge-domain CE-L3VPN-118

Step 37 Enable subinterface connected to ring towards CE under bridge domain CE-L3VPN-118.

interface TenGigE0/3/0/0.118

Step 38 Configure pseudowire to neighbor PE in the same bridge domain.

neighbor 100.111.3.2 pw-id 118

Step 39 Configure L3 interface BVI in the same bridge domain CE-L3VPN-118.

routed interface BVI118

Step 40 Configure another bridge domain CE-L3VPN-119.

bridge-domain CE-L3VPN-119

Step 41 Enable subinterface connected to ring towards CE under same bridge domain CE-L3VPN-119.

interface TenGigE0/3/0/0.119

Step 42 Configure pseudowire to neighbor PE in the same bridge domain CE-L3VPN-119.

neighbor 100.111.3.2 pw-id 119

Step 43 Configure the L3 interface BVI in the same bridge domain CE-L3VPN-119.

routed interface BVI119

!

Step 44 Configure G.8032 ring named ring_test.

ethernet ring g8032 ring_test

Step 45 Configure port0 for g.8032 ring.

port0 interface TenGigE0/3/0/0

!

Step 46 Specify port1 as none and configure the G.8032 ring as an open ring.


port1 none

open-ring

Step 47 Enter instance 1 configuration.

instance 1

Step 48 Configure VLANs in the inclusion list of instance 1.

inclusion-list vlan-ids 99,106,108,118,500,64,604,1001-2000

Step 49 Enter APS channel configuration mode.

aps-channel

Step 50 Configure subinterface used for APS channel communication.

port0 interface TenGigE0/3/0/0.99

port1 none

!

!

Step 51 Enter instance 2 configuration.

instance 2

Step 52 Configure instance with ring profile.

profile ring_profile

Step 53 Configure PE as RPL owner on port0 for instance 2.

rpl port0 owner

Step 54 Configure VLANs in the inclusion list of instance 2.

inclusion-list vlan-ids 199,107,109,119,501,2001-3000

Step 55 Enter APS channel configuration mode.

aps-channel

Step 56 Configure subinterface used for APS channel communication.

port0 interface TenGigE0/3/0/0.199

port1 none

Step 57 Configure Ethernet Ring profile.

ethernet ring g8032 profile ring_profile

Step 58 Configure G.8032 WTR timer.

timer wtr 10

Step 59 Configure Guard timer.

timer guard 100

Step 60 Configure hold-off timer.

timer hold-off 0

!
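For reference, the G.8032-related pieces on this PE, assembled from the steps above, fit together as in the following outline; only the VLAN 118 bridge domain and ring instance 2 are shown.

l2vpn
 bridge group L2VPN
  bridge-domain CE-L3VPN-118
   interface TenGigE0/3/0/0.118
   neighbor 100.111.3.2 pw-id 118
   routed interface BVI118
!
ethernet ring g8032 ring_test
 port0 interface TenGigE0/3/0/0
 port1 none
 open-ring
 instance 2
  profile ring_profile
  rpl port0 owner
  inclusion-list vlan-ids 199,107,109,119,501,2001-3000
  aps-channel
   port0 interface TenGigE0/3/0/0.199
   port1 none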


CE Configuration

Step 1 Enable VLANs 118 and 119.

vlan 118,119

!

Step 2 Configure the data SVI for VLAN 118 on the CE.

interface Vlan118

ip address 118.1.1.251 255.255.255.0

ipv6 address 2001:118:1:1::251/64

!

Step 3 Configure the data SVI for VLAN 119 on the CE.

interface Vlan119

ip address 119.1.1.251 255.255.255.0

ipv6 address 2001:119:1:1::251/64

!

Step 4 Enable the G.8032-ring-facing trunk to allow the data VLANs.

interface GigabitEthernet0/15

switchport trunk allowed vlan 106-109,118,119

switchport mode trunk

!

Step 5 Configure the IPv4 static route towards the VRRP address for VLAN 118.

ip route 118.2.1.0 255.255.255.0 118.1.1.1

Step 6 Configure the IPv4 static route towards the VRRP address for VLAN 119.

ip route 119.2.1.0 255.255.255.0 119.1.1.1

Step 7 Configure the IPv6 static route towards the VRRP address for VLAN 118.

ipv6 route 2001:118:2:1::/64 2001:118:1:1::1

Step 8 Configure the IPv6 static route towards the VRRP address for VLAN 119.

ipv6 route 2001:119:2:1::/64 2001:119:1:1::1

Ethernet Access Node Configuration

Step 1 Configure the Ethernet ring profile.

ethernet ring g8032 profile ring_profile

Step 2 Configure the G.8032 WTR timer.

timer wtr 10

Step 3 Configure the guard timer.

timer guard 100

!

Step 4 Configure the G.8032 ring named ring_test.


ethernet ring g8032 ring_test

Step 5 Configure the ring as a G.8032 open ring.

open-ring

Step 6 Exclude VLAN 1000 from the ring instances.

exclusion-list vlan-ids 1000

Step 7 Specify port0 as TenGigabitEthernet0/0/0 for the ring.

port0 interface TenGigabitEthernet0/0/0

Step 8 Specify port1 as TenGigabitEthernet0/1/0 for the ring.

port1 interface TenGigabitEthernet0/1/0

Step 9 Configure instance 1.

instance 1

Step 10 Configure instance with ring profile.

profile ring_profile

Step 11 Configure VLANs included in Instance 1.

inclusion-list vlan-ids 99,106,108,118,301-302,310-311,1001-2000

Step 12 Configure APS channel.

aps-channel

Step 13 Assign the service instance for APS messages on port0 and port1.

port0 service instance 99

port1 service instance 99

!

!

Step 14 Configure instance 2.

instance 2

Step 15 Configure instance with ring profile.

profile ring_profile

Step 16 Configure device interface as next neighbor to RPL link owner.

rpl port1 next-neighbor

Step 17 Configure VLANs included in Instance 2.

inclusion-list vlan-ids 107,109,119,199,351,2001-3000

Step 18 Configure APS channel.

aps-channel

Step 19 Assign service instance for APS messages on port0 and Port 1.

port0 service instance 199

port1 service instance 199

!

!

!


Step 20 Configure interface connected to ring.

interface TenGigabitEthernet0/0/0

!

Step 21 Configure service instance used for APS messages on G.8032 ring for both instances.

service instance 99 ethernet

encapsulation dot1q 99

rewrite ingress tag pop 1 symmetric

bridge-domain 99

!

service instance 199 ethernet

encapsulation dot1q 199

rewrite ingress tag pop 1 symmetric

bridge-domain 199

!

Step 22 Configure interface connected to ring.

interface TenGigabitEthernet0/1/0

Step 23 Configure service instance used for APS messages on G.8032 ring for both instances.

service instance 99 ethernet

encapsulation dot1q 99

rewrite ingress tag pop 1 symmetric

bridge-domain 99

!

service instance 199 ethernet

encapsulation dot1q 199

rewrite ingress tag pop 1 symmetric

bridge-domain 199

!

!

nV (Network Virtualization) Access

nV Satellite enables a system-wide solution in which one or more remotely-located devices or

"satellites" complement a pair of host PE devices to collectively realize a single virtual switching entity in which the satellites act under the management and control of the host PE devices. Satellites and Hosts

PEs communicate using a Cisco proprietary protocol that offers discovery and remote management functions, thus turning the satellites from standalone devices into distributed logical line cards of the host.

The technology, therefore, allows Enterprises to virtualize the access devices on which branch or campus routers terminate, converting them into nV Satellite devices, and to manage them through PE nodes that operate as nV hosts. By doing so, the access devices transform from standalone devices with separate management and control planes into low-profile devices that simply move user traffic from a port connecting the branch or campus router towards a virtual counterpart at the host, where all network control plane protocols and advanced features are applied. The satellite only provides simple functions such as local connectivity and limited (and optional) local intelligence that includes ingress QoS, OAM, performance measurements, and timing synchronization.

The satellites and the hosts exchange data and control traffic over point-to-point virtual connections known as Fabric Links. Branch or Campus Ethernet traffic carried over the fabric links is specially encapsulated using 802.1ah. A per-Satellite-Access-Port derived ISID value is used to map a given


satellite node physical port to its virtual counterpart at the host for traffic flowing in the upstream and downstream direction. Satellite access ports are mapped as local ports at the host using the following naming convention:

<port type><Satellite-ID>/<satellite-slot>/<satellite-bay>/<satellite-port> where:

• <port type> is GigabitEthernet for all existing satellite models

• <Satellite-ID> is the satellite number as defined at the host

• <satellite-slot>/<satellite-bay>/<satellite-port> is the access port information as known at the satellite node

These satellite virtual interfaces on the Host PEs are configured with VRF to enable L3VPN service.

The satellite architecture encompasses multiple connectivity models between the host and the satellite nodes. The guide will discuss release support for:

• nV Satellite Simple Rings

• nV Satellite Layer 2 Fabric

In all nV access topologies, host nodes load share traffic on a per-satellite basis. The active/standby role of a host node for a specific satellite is determined by a locally-defined priority and negotiated between the hosts via ICCP.

ASR9000v and ASR901 are implemented as satellite devices:

• ASR9000v has four 10 GbE ports that can be used as ICL.

• ASR901 has two GbE ports that can be used as ICL, and ASR903 can have up to two 10 GbE ports that can be used as ICL.

nV Satellite Simple Rings

In this topology, satellite access nodes connecting the branch or campus are connected in an open ring topology terminating at the PE host devices, as shown in Figure 4-4.

Figure 4-4 nV with L1 Fabric Access


The PE device advertises multicast discovery messages periodically over a dedicated VLAN over fabric links. Each satellite access device in the ring listens for discovery messages on all its ports and dynamically detects the Fabric link port toward the host.

The satellite uses this auto-discovered port for the establishment of a management session and for the exchange of all the upstream and the downstream traffic with each of the hosts (data and control). At the host, incoming and outgoing traffic is associated to the corresponding satellite node using the satellite mac address, which was also dynamically learned during the discovery process. Discovery messages are propagated from one satellite node to another and from either side of the ring so that all nodes can establish a management session with both hosts. nV L1 fabric access configuration is described below.

nV L1 Fabric Configuration

Step 1 Configure the interface acting as the fabric link connecting to the nV ring.

interface TenGigE0/2/0/3

ipv4 point-to-point

ipv4 unnumbered Loopback10

Step 2 Enter nV configuration mode under the interface.

nv

Step 3 Define fabric link connectivity to the simple ring using the keyword "network".

satellite-fabric-link network

Step 4 Enter redundancy configuration mode for ICCP group 210.

redundancy

iccp-group 210

!

Step 5 Define the access ports of satellite ID 100.

satellite 100

remote-ports GigabitEthernet 0/0/0-30,31-43

!

Step 6 Define the access ports of satellite ID 101.

satellite 101

remote-ports GigabitEthernet 0/0/0-43

!

Step 7 Define the access ports of satellite ID 102.

satellite 102

remote-ports GigabitEthernet 0/0/0-43

!

!

Step 8 Configure the virtual interface corresponding to satellite 100. The interface is configured with the VRF for L3VPN service.

interface GigabitEthernet100/0/0/40

negotiation auto

load-interval 30

!

interface GigabitEthernet100/0/0/40.502 l2transport


vrf BUS-VPN2

ipv4 address 51.1.1.1 255.255.255.252

encapsulation dot1q 49

!

!

Step 9 Configure ICCP redundancy group 210 and define the peer PE address in the redundancy group.

redundancy

iccp

group 210

member

neighbor 100.111.11.2

!

Step 10 Configure system mac for nV communication.

nv satellite

system-mac cccc.cccc.cccc

!

!

!

!

Step 11 Enter nV configuration mode to define satellites.

nv

Step 12 Define the Satellite ID.

satellite 100

Step 13 Define ASR9000v device as satellite device.

type asr9000v

Step 14 Configure satellite address used for Communication.

ipv4 address 100.100.1.10

redundancy

Step 15 Define the priority for the host PE.

host-priority 20

!

Step 16 Satellite chassis serial number to identify satellite.

serial-number CAT1729U3BF

!

!

Step 17 Define the Satellite ID.

satellite 101

Step 18 Define ASR9000v device as satellite device.

type asr9000v

Step 19 Configure satellite address used for Communication.

ipv4 address 100.100.1.3

redundancy

Step 20 Define the priority for the Host PE


host-priority 20

!

Step 21 Satellite chassis serial number to identify the satellite.

serial-number CAT1729U3BB

!

Step 22 Define the satellite ID.

satellite 102

Step 23 Define ASR9000v device as satellite device.

type asr9000v

Step 24 Configure satellite address used for Communication.

ipv4 address 100.100.1.20

redundancy

Step 25 Define the priority for the host PE.

host-priority 20

!

Step 26 Satellite chassis serial number to identify satellite.

serial-number CAT1729U3AU

!
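Assembled from the steps above, the host-side configuration for one satellite on the simple ring is, in outline, as follows.

nv
 satellite 100
  type asr9000v
  ipv4 address 100.100.1.10
  redundancy
   host-priority 20
  !
  serial-number CAT1729U3BF
!
interface TenGigE0/2/0/3
 ipv4 point-to-point
 ipv4 unnumbered Loopback10
 nv
  satellite-fabric-link network
   redundancy
    iccp-group 210
   !
   satellite 100
    remote-ports GigabitEthernet 0/0/0-30,31-43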

nV Satellite Layer 2 Fabric

In this model, satellite nodes connecting to the branch or campus are connected to the host(s) over any Layer 2 Ethernet network. Such a network can be implemented as a native or as an overlay Ethernet transport to fit Enterprise access network designs.

Figure 4-5 nV with L2 Fabric Access using Native or Overlay Transport


In the case of L2 Fabric, a unique VLAN is allocated for the point-to-point emulated connection between the Host and each Satellite device. The host uses such VLAN for the advertisement of multicast discovery messages.

Satellite devices listen for discovery messages on all the ports and dynamically create a subinterface based on the port and VLAN pair on which the discovery messages were received. VLAN configuration at the satellite is not required.

The satellite uses this auto-discovered subinterface for the establishment of a management session and for the exchange of all upstream and downstream traffic with each of the hosts (data and control). At the host, incoming and outgoing traffic is associated to the corresponding satellite node based on VLAN assignment. nV L2 fabric access configuration is described below.

nV L2 Fabric Configuration

Step 1 Configure the physical interface acting as the fabric link.

interface TenGigE0/1/1/3

load-interval 30

transceiver permit pid all

!

Step 2 Configure the subinterface acting as the fabric link toward the satellite.

interface TenGigE0/1/1/3.210

ipv4 point-to-point

ipv4 unnumbered Loopback200

encapsulation dot1q 210

Step 3 Enter nV configuration mode under the interface.

nv

Step 4 Define fabric link connectivity to satellite 210.

satellite-fabric-link satellite 210

Step 5 Configure Ethernet CFM to detect connectivity failure on the fabric link.

ethernet cfm

continuity-check interval 10ms

!

Step 6 Enter redundancy configuration mode for ICCP group 210.

redundancy

iccp-group 210

!

Step 7 Define the access ports of the satellite.

remote-ports GigabitEthernet 0/0/0-9

!

!

!

Step 8 Configure the virtual interface corresponding to satellite 210. The interface is configured with the VRF for L3VPN service.

interface GigabitEthernet210/0/0/0

negotiation auto

load-interval 30

!


interface GigabitEthernet210/0/0/0.49

vrf BUS-VPN2

ipv4 address 51.1.1.1 255.255.255.252

encapsulation dot1q 49

!

Step 9 Configure ICCP redundancy group 210 and define the peer PE address in the redundancy group.

redundancy

iccp

group 210

member

neighbor 100.111.11.2

!

Step 10 Configure system mac for nV communication.

!

!

nv satellite

system-mac cccc.cccc.cccc

!

!

Step 11 Enter nV configuration mode to define satellites.

nv

Step 12 Define the Satellite ID 210 and type of platform ASR 901.

satellite 210

type asr901

ipv4 address 27.27.27.40

redundancy

Step 13 Define the priority for the Host PE.

host-priority 17

!

Step 14 Satellite chassis serial number to identify satellite.

serial-number CAT1650U00D

!

!
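Assembled from the steps above, the L2-fabric host configuration for satellite 210 is, in outline, as follows.

nv
 satellite 210
  type asr901
  ipv4 address 27.27.27.40
  redundancy
   host-priority 17
  !
  serial-number CAT1650U00D
!
interface TenGigE0/1/1/3.210
 ipv4 point-to-point
 ipv4 unnumbered Loopback200
 encapsulation dot1q 210
 nv
  satellite-fabric-link satellite 210
   ethernet cfm
    continuity-check interval 10ms
   !
   redundancy
    iccp-group 210
   !
   remote-ports GigabitEthernet 0/0/0-9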

nV Cluster

The ASR 9000 nV Cluster system is designed to simplify L3VPN, L2VPN, and Multicast dual-homing topologies and resiliency designs by making two ASR9k systems operate as one logical system. An nV cluster system has these properties and covers some of the use cases (partial list) described in Figure 4-6.

Without an ASR9k cluster, a typical MPLS-VPN dual-homing scenario has a CE dual-homed to two PEs, where each PE has its own BGP router ID, PE-CE peering, security policy, routing policy maps, QoS, and redundancy design, all of which can be quite complex from a design perspective.

With an ASR9k cluster system, both PEs share a single control plane, a single management plane, and a fully distributed data plane across two physical chassis, and support one universal solution for any service including L3VPN, L2VPN, MVPN, Multicast, etc. The two clustered PEs can be geographically redundant by connecting the cluster ports on the RSP440 faceplate, which


will extend the EOBC channel between rack 0 and rack 1 and operate as a single IOS XR ASR9k router. For L3VPN, the same L3VPN instance is configured on both rack 0 and rack 1, with one BGP router ID and one set of peerings with CEs and remote PEs.

Figure 4-6 ASR 9000 nV Cluster Use Cases for Universal Resiliency Scheme

Always-on virtual chassis: single control plane, single management plane, fully distributed data plane across two physical chassis, and one universal solution for any service.

In the topology depicted and described in Figure 4-7, we tested and measured L3VPN convergence time using a clustered system and compared it against VRRP/HSRP. We tested both cases with identical scale and configuration, as shown in the table in Figure 4-7. We also measured access-to-core and core-to-access traffic convergence time separately for better convergence visibility.

Figure 4-7 L3VPN Cluster Convergence Test Topology

Scale (L3VPN): 3k IPv4 eBGP sessions, 500 IPv6 eBGP sessions, 3k VRF bundle sub-interfaces, 1M advertised prefixes, Multicast (S,G) N/A.

The convergence results of the L3VPN cluster system versus VRRP/HSRP are summarized in Figure 4-8. We covered the five types of failure tests listed below.

Note We repeated each test three times and reported the worst-case numbers of three trials.

• IRL link failure

• EOBC link failure

• Power off / Primary DSC failover

• DSC RP redundancy switchover

• Process restart

Figure 4-8 L3VPN Cluster Convergence Results versus VRRP/HSRP

nV Cluster PE with L3VPN service can be implemented on ASR 9000 Rack 0 and Rack 1 as described below.

nV Cluster Configuration

Step 1 Configure Rack ID 1 for rack 1 in ROMmon mode.

CLUSTER_RACK_ID = 1

Step 2 Configure Rack ID 0 for rack 0 in ROMmon mode.

CLUSTER_RACK_ID = 0

Step 3 Configure nV Edge in admin mode. Required only on Rack 0.

nv

Step 4 Enter nV Edge control configuration. Required only on Rack 0.

edge

control

Step 5 Configure the serial number of Rack 0.

serial FOX1435G0JR rack 0

Step 6 Configure the serial number of Rack 1.

serial FOX1436H557 rack 1


!

data

minimum 0

Step 7 Configure the Inter-Rack Links (L1 links), used for forwarding packets whose ingress and egress interfaces are on separate racks.

interface TenGigE0/3/0/1

Step 8 Configure the interface as an nV Edge interface.

nv

edge

interface

!

Step 9 Configure the mandatory LACP configuration for bundle interfaces.

lacp system mac f866.f217.5d24

!

Step 10 Configure Bundle interface.

interface Bundle-Ether1

Step 11 Configure VRF service.

vrf BUS-VPN2

ipv4 address 40.1.1.1 255.255.255.0

Step 12 nV Edge requires a manual configuration of mac-address under the Bundle interface.

mac-address f866.f217.5d23
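Pulling the steps together, the cluster-specific configuration is, in outline: ROMmon variables set per rack, admin-plane nV Edge configuration on Rack 0, and the inter-rack link plus service interface in the IOS XR configuration.

! ROMmon, set per rack
CLUSTER_RACK_ID = 0
CLUSTER_RACK_ID = 1
! Admin configuration (Rack 0)
nv
 edge
  control
   serial FOX1435G0JR rack 0
   serial FOX1436H557 rack 1
  !
  data
   minimum 0
! IOS XR configuration
interface TenGigE0/3/0/1
 nv
  edge
   interface
!
lacp system mac f866.f217.5d24
!
interface Bundle-Ether1
 vrf BUS-VPN2
 ipv4 address 40.1.1.1 255.255.255.0
 mac-address f866.f217.5d23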

Native IP-Connected Access

In the native IP-connected access topology, the branch or campus router is dual-homed to PEs, with redundancy and load balancing taken care of by the routing protocol configuration. VRF service is configured on both PEs' interfaces connecting to the CPE. The CPE can be connected to the PEs using direct links or through a normal Ethernet access network. The configuration on the CPE decides which PE will be used as the primary to send traffic.

• If BGP is the routing protocol between PE and CE, a higher local preference is configured on the CE for the primary PE so that the best path selects the primary PE.

• In the case of static routing, floating static routes are configured on the CPE such that the static route with the lower administrative distance (AD) points to the primary PE and the one with the higher AD points to the backup PE. BFD is used for fast failure detection of the BGP peer or static route; a sketch of a CPE-side floating-static configuration follows this list.
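As an illustration of the floating-static option, a minimal CPE-side sketch is shown below. The interface and PE addresses follow Table 4-3; the 0.0.0.0/0 prefix, the backup administrative distance of 250, and the use of the IOS static-route BFD binding are illustrative assumptions rather than part of the validated configuration.

! UNI interface towards the PEs, with the BFD timers (50 ms x3) used elsewhere in this guide
interface GigabitEthernet0/1
 ip address 100.192.30.3 255.255.255.0
 bfd interval 50 min_rx 50 multiplier 3
!
! Primary static route to PE1 (default AD 1); floating backup to PE2 (AD 250, assumed value)
ip route 0.0.0.0 0.0.0.0 GigabitEthernet0/1 100.192.30.1
ip route 0.0.0.0 0.0.0.0 GigabitEthernet0/1 100.192.30.2 250
! Bind the primary next hop to a BFD session so the route is withdrawn on fast failure
ip route static bfd GigabitEthernet0/1 100.192.30.1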


Figure 4-9    Native IP-Connected Access

(The figure shows the CPE branch/campus router, G0/1, dual-homed over an Ethernet network to two ASR 9000 PEs, each on G0/0/1/7, which connect to the MPLS network; BGP or static routing with BFD runs between the CPE and the PEs.)


Native IP-connected configuration is shown in Table 4-3.

Table 4-3    Native IP-connected Configuration

PE1 Config

interface GigabitEthernet0/0/1/7
 vrf BUS-VPN2
 ipv4 address 100.192.30.1 255.255.255.0
 ipv6 address 2001:100:192:30::1/64
!
!***Configure eBGP peering with BFD***
router bgp 101
<snip>
 vrf BUS-VPN2
  !***Setup eBGP peering to CE***
  neighbor 100.192.30.3
   remote-as 65002
   !***Enables BFD for BGP to neighbor for VRF***
   bfd fast-detect
   bfd multiplier 3
   bfd minimum-interval 50
   address-family ipv4 unicast
!
bfd
 interface GigabitEthernet0/0/1/7
  !***Disables BFD echo mode on interface***
  echo disable

PE2 Config

interface GigabitEthernet0/0/1/7
 vrf BUS-VPN2
 ipv4 address 100.192.30.2 255.255.255.0
 ipv6 address 2001:100:192:30::2/64
!
!***Configure eBGP peering with BFD***
router bgp 101
<snip>
 vrf BUS-VPN2
  !***Setup eBGP peering to CE***
  neighbor 100.192.30.3
   remote-as 65002
   !***Enables BFD for BGP to neighbor for VRF***
   bfd fast-detect
   bfd multiplier 3
   bfd minimum-interval 50
   address-family ipv4 unicast
!
bfd
 interface GigabitEthernet0/0/1/7
  !***Disables BFD echo mode on interface***
  echo disable

CPE Config

!***UNI interface towards PE***
interface GigabitEthernet0/1
 ip address 100.192.30.3 255.255.255.0
 duplex auto
 speed auto
 !***Enable BFD on interface***
 bfd interval 50 min_rx 50 multiplier 3
 no bfd echo
!
!***eBGP peering with BFD***
router bgp 65002
 bgp router-id 100.111.10.11
 bgp log-neighbor-changes
 !***eBGP peering towards Primary PE***
 neighbor 100.192.30.1 remote-as 101
 !***Enable BFD to this BGP Peer***
 neighbor 100.192.30.1 fall-over bfd
 !***eBGP peering towards Backup PE***
 neighbor 100.192.30.2 remote-as 101
 !***Enable BFD to this BGP Peer***
 neighbor 100.192.30.2 fall-over bfd
 !
 address-family ipv4
  no synchronization
  redistribute connected
  !***Advertise prefix facing the LAN side of the CE router***
  network 100.192.193.0 mask 255.255.255.0
  neighbor 100.192.30.1 activate
  !***Prefer this neighbor PE1 as the primary PE***
  neighbor 100.192.30.1 weight 100
  neighbor 100.192.30.2 activate
  no auto-summary
 exit-address-family
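A few show commands can be used to confirm that the eBGP sessions and BFD come up as expected. This is a minimal sketch; the commands shown are standard IOS XR (PE) and classic IOS (CPE) commands, and the exact output varies by release.

! On the PE (IOS XR)
show bgp vrf BUS-VPN2 summary
show bfd session

! On the CPE (IOS)
show ip bgp summary
show bfd neighbors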


MPLS Access using Pseudowire Headend

In MPLS access, enterprise access devices connect to the ASR 9000 PE devices over an MPLS-enabled network between the access devices and the PEs. The branch or campus router is connected to the access device via an Ethernet 802.1Q-tagged interface. The access device is configured with a pseudowire that terminates on the PE device on a Pseudowire Headend interface.

Pseudowire Headend (PWHE) is a technology that allows termination of access PWs into an L3 (VRF or global) domain, eliminating the need for separate interfaces to terminate the pseudowire and the L3VPN service. PWHE introduces the construct of a "pw-ether" interface on the PE device. This virtual pw-ether interface terminates the PWs carrying traffic from the CPE device and maps directly to an MPLS VPN VRF on the provider edge device. Any QoS policies and ACLs are applied to the pw-ether interface.
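Conceptually, the PW-Ether interface is treated like any other L3 interface on the PE. The following is a minimal sketch only: the VRF, addresses, and generic interface list are taken from the steps later in this section, the QoS policy name is borrowed from Chapter 5, and the ACL name is purely hypothetical.

interface PW-Ether100
 vrf BUS-VPN2
 ipv4 address 100.13.9.1 255.255.255.252
 ! QoS and ACLs are applied directly to the pw-ether interface
 service-policy input PMAP-PWHE-NNI-P-I
 ipv4 access-group ACL-PWHE-IN ingress
 attach generic-interface-list BUS_PWHE
!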

All traffic between CE and PE is tunneled in this pseudowire. The access network runs its own LDP/IGP domain along with labeled BGP, as described in Large Scale Network Design and Implementation, page 3-16, and learns the PE loopback address accordingly for PW connectivity. The access device can initiate this pseudowire using two methods:

• Per Access Port method, in which the pseudowire is configured directly on the interface connecting to the CPE, or

• Per Access Node method, in which the pseudowire is configured on the corresponding SVI, thereby carrying traffic from multiple ports in a single pseudowire.

This guide focuses on the Per Access Port topology.

The access device is configured with an xconnect on the interface connecting to the branch/campus router, with the PE loopback address as the xconnect peer. On the PE, a PW-Ether interface is created on which the xconnect terminates. The same PW-Ether interface is also configured with the VRF, and the L3VPN service is configured on it. The PE and CE can use any routing protocol to exchange route information over the PW-Ether interface. BFD is used between PE and CE for fast failure detection. The PWHE configuration is described below.

Figure 4-10 depicts MPLS Access using PWHE.

Figure 4-10    MPLS Access using Pseudowire Headend

(The figure shows the CPE branch/campus router, G0/2, connected to the access device, G0/4; a PWE3 pseudowire runs from the access device across the MPLS access network to the PE, an ASR 9000 with PW-Ether 100 and member links TenG0/0/0/0 and TenG0/0/0/3, which connects to the MPLS network. BGP or static routing with BFD runs between the CPE and the PE.)

Access Device Configuration

Step 1 Configure the PW class on the access device.

pseudowire-class BUS_PWHE
 encapsulation mpls
 control-word
!


Step 2 Enter interface configuration for the CE-connecting interface.

interface GigabitEthernet0/4
 mtu 1500

Step 3 Configure the xconnect on the access device towards the PE, with encapsulation MPLS and pw-class BUS_PWHE to inherit its parameters.

 xconnect 100.111.11.1 130901100 encapsulation mpls pw-class BUS_PWHE
!
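Putting Steps 1 through 3 together, the access-device side of the per-access-port model looks as follows. This is a consolidated sketch of the configuration above; the placement of mtu 1500 under the CE-facing interface follows the step-by-step text.

pseudowire-class BUS_PWHE
 encapsulation mpls
 control-word
!
interface GigabitEthernet0/4
 mtu 1500
 xconnect 100.111.11.1 130901100 encapsulation mpls pw-class BUS_PWHE
!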

PE Configuration

Step 1 Configure the PWHE interface.

interface PW-Ether100

Step 2 Configure the VRF and addresses under the PWHE interface.

 vrf BUS-VPN2
 ipv4 address 100.13.9.1 255.255.255.252
 ipv6 address 2001:13:9:1::1/64
 ipv6 enable
!

Step 3 Attach the generic interface list to the PWHE interface.

 attach generic-interface-list BUS_PWHE
!

Step 4 Create the generic interface list.

generic-interface-list BUS_PWHE

Step 5 Assign interfaces to the list.

 interface TenGigE0/0/0/0
 interface TenGigE0/0/0/3
!

Step 6 Configure BGP in AS 101.

router bgp 101

Step 7 Enter VRF configuration under BGP.

 vrf BUS-VPN2
  rd 8000:8002

Step 8 Configure the neighbor address (the CE address).

  neighbor 100.13.9.10

Step 9 Configure the remote AS as the CE AS.

   remote-as 105

Step 10 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect


Step 11 Configure BFD multiplier.

bfd multiplier 3

Step 12 Configure Minimum Interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 50

Step 13 Enter the IPv4 address family.

address-family ipv4 unicast

Step 14 Configure route-filter to permit all incoming routes.

route-policy pass-all in

Step 15 Configure the route-filter to permit all outgoing routes.

route-policy pass-all out

Similarly, configure the IPv6 CE neighbor under the VRF:

neighbor 2001:13:9:9::2
 remote-as 105
 bfd fast-detect
 bfd multiplier 3
 bfd minimum-interval 50
 address-family ipv6 unicast
  route-policy pass-all in
  route-policy pass-all out
 !
!

Step 16 Enter L2VPN configuration mode.

l2vpn

Step 17 Configure pw-class.

pw-class BUS_PWHE

encapsulation mpls

control-word

Step 18 Configure the xconnect on PWHE interface PW-Ether100, with the access device as the neighbor.

 xconnect group BUS_PWHE100
  p2p PWHE-K1309-Static
   interface PW-Ether100
   neighbor 100.111.13.9
 !
!

Step 19 Configure route-policy.

route-policy pass-all

Step 20 Pass all routes.

 pass
end-policy
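For reference, the PE-side pieces from Steps 1 through 20 can be consolidated as follows. This is a sketch only: the pw-id on the xconnect neighbor is assumed to match the VC ID 130901100 used on the access device, and the "neighbor ipv4" form of the command is an assumption; the rest is taken directly from the steps above.

generic-interface-list BUS_PWHE
 interface TenGigE0/0/0/0
 interface TenGigE0/0/0/3
!
interface PW-Ether100
 vrf BUS-VPN2
 ipv4 address 100.13.9.1 255.255.255.252
 ipv6 address 2001:13:9:1::1/64
 attach generic-interface-list BUS_PWHE
!
l2vpn
 pw-class BUS_PWHE
  encapsulation mpls
   control-word
 !
 xconnect group BUS_PWHE100
  p2p PWHE-K1309-Static
   interface PW-Ether100
   neighbor ipv4 100.111.13.9 pw-id 130901100
!
route-policy pass-all
 pass
end-policy
!
router bgp 101
 vrf BUS-VPN2
  rd 8000:8002
  neighbor 100.13.9.10
   remote-as 105
   bfd fast-detect
   address-family ipv4 unicast
    route-policy pass-all in
    route-policy pass-all out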


CE Configuration

Step 1 Configure the interface connecting to the access device.

interface GigabitEthernet0/2.110
 encapsulation dot1Q 110
 ip address 100.13.9.10 255.255.255.252
 ipv6 address 2001:13:9:9::2/64
 ipv6 enable

Step 2 Configure BFD for fast failure detection.

 bfd interval 50 min_rx 50 multiplier 3
 no bfd echo
!

Step 3 Configure router BGP.

router bgp 105
 bgp router-id 100.13.9.10
 bgp log-neighbor-changes

Step 4 Configure the IPv6 PE neighbor with remote AS 101.

 neighbor 2001:13:9:1::1 remote-as 101
 neighbor 2001:13:9:1::1 fall-over bfd

Step 5 Configure the IPv4 PE neighbor with remote AS 101.

 neighbor 100.13.9.1 remote-as 101
 neighbor 100.13.9.1 fall-over bfd
 address-family ipv4
  no synchronization
  network 218.10.4.0 mask 255.255.255.252
  redistribute connected
  neighbor 100.13.9.1 activate
  no auto-summary
 exit-address-family
 address-family ipv6
  redistribute connected
  no synchronization
  network 2001:10:4:1::/64
  neighbor 2001:13:9:1::1 activate
 exit-address-family
!
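To verify the PWHE access, the following show commands can be used. This is a minimal sketch using standard IOS XR (PE) and IOS (CE) commands; exact output varies by release.

! On the PE (IOS XR)
show l2vpn xconnect group BUS_PWHE100
show bgp vrf BUS-VPN2 summary
show bfd session

! On the CE (IOS)
show bfd neighbors
show ip bgp summary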

To achieve PE-level redundancy, a second link can be used between the CPE and the access node; on that link, the access node can be configured with another pseudowire terminating on a second PE.


C H A P T E R

5

PE UNI QoS

This chapter includes the following major topics:

PE UNI QoS Configuration, page 5-2

PE UNI QoS Configuration with PWHE Access, page 5-4

Enterprise virtual networks carry traffic types that include voice, video, critical application traffic, and end-user web traffic. These traffic types require different priorities and treatment based on their nature and how critical they are to the business. For traffic sent and received between PE and CE, the QoS implementation on the ASR 9000 PE uses the DSCP field in the IP header to ensure that traffic is treated according to the priority defined by its DSCP. Two-level H-QoS is configured on the PE for both ingress and egress policies. In nV access topologies, the ingress QoS function, configured on the host for the virtual satellite access port, is offloaded to the satellite so that only committed traffic enters the nV network and fabric link oversubscription is avoided.

The mapping shown in Table 5-1 is used to map the different traffic classes to DSCP.

Table 5-1    Mapping for Different Traffic Classes to DSCP

Traffic Class                             PHB    DSCP
Enterprise Voice and Real-time            EF     46
Enterprise Video Distribution             AF     32
Enterprise Critical: In Contract          AF     16
Enterprise Critical: Out of Contract      AF     8
Enterprise Best Effort                    BE     0

The PE QoS configuration includes class-maps for the respective traffic classes, matched on the appropriate DSCP values. Two-level ingress QoS polices traffic in the individual classes of the child policy. The parent policy is configured with the keyword "child-conform-aware" to prevent the parent policer from dropping ingress traffic that conforms to the maximum rate specified in the child policer. In the egress policy-map, the real-time traffic class CMAP-RT-dscp is configured with the highest priority (level 1) and is policed to ensure low-latency expedited forwarding. The remaining classes are assigned their required bandwidth. WRED is used as the congestion-avoidance mechanism for EXP 1 and 2 traffic in the enterprise critical class CMAP-EC-EXP. Shaping is configured on the parent egress policy to ensure that overall traffic does not exceed the committed bit rate (CBR). The ingress and egress policy-maps are applied to the PE interface connecting to the CE in the respective directions.


PE UNI QoS Configuration

Step 1 Configure the class-map for business-critical traffic.

class-map match-any CMAP-BC-dscp

Step 2 Match DSCP 8 and 16.

 match dscp 8 16

Step 3 Configure the class-map for video traffic.

class-map match-any CMAP-BC-video-dscp

Step 4 Match DSCP 32.

 match dscp 32

Step 5 Configure the class-map for real-time traffic.

class-map match-any CMAP-RT-dscp

Step 6 Match DSCP expedited forwarding.

 match dscp ef

Step 7 Configure the child egress policy-map.

policy-map PMAP-BUS-CE-Child-E

Step 8 Configure the RT class under the policy-map.

 class CMAP-RT-dscp

Step 9 Configure priority level 1 for the RT class.

  priority level 1

Step 10 Police traffic in the RT class.

  police rate 200 mbps

!

Step 11 Configure business-critical class under policy.

class CMAP-BC-dscp

Step 12 Assign Bandwidth to the class.

bandwidth percent 5

Step 13 Configure Video class under policy.

class CMAP-BC-video-dscp

Step 14 Assign Bandwidth to the class.

bandwidth percent 10

Step 15 Configure class-default for rest of the traffic.

!

!

class class-default
end-policy-map


Step 16 Configure parent egress policy-map.

policy-map PMAP-BUS-CE-Parent-E

Step 17 Configure class-default for the policy-map.

class class-default

Step 18 Configure child policy under class-default.

service-policy PMAP-BUS-CE-Child-E

Step 19 Configure shaping to ensure egress traffic does not exceed CBR.

shape average 500 mbps

Step 20 Configure bandwidth for the class.

bandwidth 300 mbps

end-policy-map
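Steps 16 through 20 together give the following parent egress policy. Applying it in the output direction on the PE UNI interface completes the egress H-QoS chain; GigabitEthernet0/0/1/7 is used here purely as an example, borrowed from the Native IP-Connected Access section.

policy-map PMAP-BUS-CE-Parent-E
 class class-default
  service-policy PMAP-BUS-CE-Child-E
  shape average 500 mbps
  bandwidth 300 mbps
 !
end-policy-map
!
interface GigabitEthernet0/0/1/7
 service-policy output PMAP-BUS-CE-Parent-E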

Step 21 Configure ingress child policy-map.

policy-map PMAP-BUS-CE-Child-I

Step 22 Configure the real-time class-map under the policy-map.

class CMAP-RT-dscp

Step 23 Configure priority level 1 for real-time class.

priority level 1

Step 24 Police traffic in real-time class.

police rate 50 mbps

!

Step 25 Configure video class-map under policy-map.

class CMAP-BC-video-dscp

Step 26 Configure priority level 2 for video class.

priority level 2

Step 27 Police traffic in video class.

police rate 100 mbps

!

Step 28 Configure business-critical class-map under policy-map.

class CMAP-BC-dscp

Step 29 Police traffic in business-critical class.

police rate 100 mbps peak-rate 200 mbps

exceed-action transmit

violate-action drop

!

Step 30 Configure the class-default class-map under the policy-map.

class class-default

Step 31 Police traffic in the default class.

  police rate 50 mbps
   exceed-action transmit
 !
!
end-policy-map

Step 32 Configure the parent ingress policy-map.

policy-map PMAP-BUS-CE-Parent-I

Step 33 Configure class-default for the policy-map.

class class-default

Step 34 Configure the child policy under class-default.

service-policy PMAP-BUS-CE-Child-I

Step 35 Configure policing to ensure ingress traffic does not exceed CBR.

police rate 500 mbps

Step 36 Configure child-conform-aware under class.

child-conform-aware
end-policy-map
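Similarly, Steps 32 through 36 give the parent ingress policy, which polices the aggregate to the CBR while honoring child-conforming traffic. The consolidated sketch below also shows the interface application; the interface is again only an example, and child-conform-aware is shown here under the parent policer, which is where it is entered in IOS XR.

policy-map PMAP-BUS-CE-Parent-I
 class class-default
  service-policy PMAP-BUS-CE-Child-I
  police rate 500 mbps
   child-conform-aware
  !
 !
end-policy-map
!
interface GigabitEthernet0/0/1/7
 service-policy input PMAP-BUS-CE-Parent-I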

In the case of PWHE access, QoS is implemented on the PE based on the MPLS EXP bits, because the received traffic is labeled.

PE UNI QoS Configuration with PWHE Access

Step 1 Configure the business-critical class.

class-map match-any CMAP-BC-EXP

Step 2 Match MPLS EXP of the topmost label as 1 and 2.

 match mpls experimental topmost 1 2
end-class-map

Step 3 Configure the real-time class.

class-map match-any CMAP-RT-EXP

Step 4 Match MPLS EXP of the topmost label as 5.

 match mpls experimental topmost 5
end-class-map

Step 5 Configure the video class.

class-map match-any CMAP-BUS-video-EXP

Step 6 Match MPLS EXP of the topmost label as 3.

 match mpls experimental topmost 3
end-class-map

Step 7 Configure the ingress child policy-map.

policy-map PMAP-PWHE-NNI-C-I

Step 8 Configure the real-time class under the policy-map.

 class CMAP-RT-EXP

Step 9 Configure priority level 1 for real-time class.

priority level 1

Step 10 Police traffic in real-time class.

police rate 50 mbps

!

Step 11 Configure video class-map under policy-map.

class CMAP-BUS-video-EXP

Step 12 Configure priority level 2 for video class.

priority level 2

Step 13 Police traffic in video class.

police rate 100 mbps

!

Step 14 Configure business-critical class-map under policy-map.

class CMAP-BC-EXP

Step 15 Police traffic in the business-critical class.

 police rate 100 mbps peak-rate 200 mbps

Step 16 Configure the exceed and violate actions for the business-critical policer.

  exceed-action transmit
  violate-action drop

Step 17 Configure class-default class-map under policy-map.

class class-default

!

Step 18 Police traffic in the default class.

 police rate 50 mbps
  exceed-action transmit
 !
!
end-policy-map

Step 19 Configure the parent ingress policy-map.

policy-map PMAP-PWHE-NNI-P-I

Step 20 Configure class-default for the policy-map.

class class-default

Step 21 Configure child policy under class-default.

service-policy PMAP-PWHE-NNI-C-I

Step 22 Configure policing to ensure ingress traffic does not exceed CBR.

police rate 500 mbps


Step 23 Configure child-conform-aware under class.

child-conform-aware
end-policy-map

Step 24 Configure child egress policy-map.

policy-map PMAP-PWHE-NNI-C-E

Step 25 Configure real-time class-map under policy-map.

class CMAP-RT-EXP

Step 26 Configure priority level 1 for real-time class.

priority level 1

Step 27 Police traffic in real-time class.

police rate 50 mbps

!

Step 28 Configure the video class-map under the policy-map.

class CMAP-BUS-video-EXP

Step 29 Configure priority level 2 for video class.

priority level 2

Step 30 Police traffic in video class.

police rate 100 mbps

Step 31 Configure WRED for congestion avoidance.

random-detect discard-class 3 80 ms 100 ms

!

Step 32 Configure the business-critical class-map under the policy-map.

 class CMAP-BC-EXP

Step 33 Configure bandwidth for business-critical class.

bandwidth remaining percent 60

Step 34 Configure WRED for congestion avoidance for discard-class 2.

random-detect discard-class 2 60 ms 70 ms

Step 35 Configure WRED for congestion avoidance for discard-class 1.

random-detect discard-class 1 40 ms 50 ms

!

class class-default
end-policy-map

!

Step 36 Configure parent egress policy-map.

policy-map PMAP-PWHE-NNI-P-E

Step 37 Configure class-default for the policy-map.

class class-default


Step 38 Configure child policy under class-default.

service-policy PMAP-PWHE-NNI-C-E

Step 39 Configure shaping to ensure egress traffic does not exceed CBR.

shape average 500000000 bps
end-policy-map

Step 40 Apply the service policies under the PW-Ether interface.

interface PW-Ether100
 service-policy input PMAP-PWHE-NNI-P-I
 service-policy output PMAP-PWHE-NNI-P-E
 vrf BUS-VPN2
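Once applied, the policies can be checked per direction with show policy-map interface. A minimal sketch is shown below; counters depend on traffic and release.

show policy-map interface PW-Ether100 input
show policy-map interface PW-Ether100 output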


C H A P T E R

6

Performance and Scale

This chapter includes the following major topics:

Internet Peering Application, page 6-2

100G Edge and Core-Facing Ports, page 6-5

Two types of scalability numbers exist for L3VPN: 1-Dimensional (1D) and Multi-Dimensional (MD).

The 1D scale numbers only show the scale of L3VPN as a single service running on the ASR9000, which is not realistic from a deployment standpoint because an L3VPN PE in an enterprise or service provider network usually runs mixed services and features. We therefore tested and certified the MD scale profile for the L3VPN PE.

Table 6-1 captures the MD scale numbers of the L3VPN PE profile with all services and features enabled simultaneously on a PE in a realistic deployment environment.

Table 6-1    ASR9k L3VPN PE Profile Multi-Dimensional Scale Numbers

Feature         Parameters                                           Scale
L3 Interfaces   Dot1q, QinQ, Ethernet                                4k
                ATM, POS, FR, CE, TDM, HDLC, etc.                    6k
MPLS VPNv4      IPv4 VRF Sessions (2 to 3 interfaces per VRF)        4k
                VPNv4 Prefixes                                       2M
                PE-CE Routing: eBGP with NSR, MD5, and lower KA-HT   4k
                OSPF with MD5 and sham links                         1k
                Staticv4                                             4750
                EIGRPv4                                              250
MPLS VPNv6      IPv6 VRF Sessions (2 interfaces per VRF)             4k
                VPNv6 Prefixes                                       500k
                PE-CE Routing: eBGP with NSR, MD5, and lower KA-HT   4k
                OSPF with MD5 and sham links                         1k
                Staticv6                                             4750
                EIGRPv6                                              250


Table 6-1    ASR9k L3VPN PE Profile Multi-Dimensional Scale Numbers (continued)

Feature          Parameters                                   Scale
MVPN             MVPN IPv4/IPv6                               500
                 IPv4 Mroutes, IPv6 Mroutes                   32k, 16k
P2MP-TE          Headend LSP                                  1k
uRPF             IPv4, IPv6                                   10k, 10k
IGMP Snooping    BDs, Snooping Entries                        1k, 32k
MLD Snooping     BDs, Snooping Entries                        1k, 32k
L2 Interfaces    Ethernet (Phy, Bundle-Ether, BVI, PW-HE); POS and Serial
L2VPN            AToM VPWS; FRoMPLS; FR to Eth IWoMPLS; VPWS PWs; VPWS ACs (1000 each on Eth, BE, PW-HE); VPLS PWs (w/ 5 neighboring PEs); VPLS ACs (1000 each on 10GigE, BE, PW-HE); VPLS PWs to Simulated PEs; VPLS ACs for Simulated PEs (GigE, 10GigE); MAC address
QoS              Interfaces w/ Ingress Policy; Interfaces w/ Egress Policy
ACLs             IPv4 ACLs on interface; IPv6 ACLs on interface
MPLS TE          Headend LSP with FRR; Midpoint LSP
BFD              IPv4 echo; IPv6 Async

Scale values for the remaining rows, in source order: 2M, 10k, 10k, 10k, 15k, 3k, 34k, 2k, 1k, 1k, 1k, 15k, 3k, 10k, 3k, 50k, 10k, 10k.

Internet Peering Application

ASR9K is used extensively in Internet peering, interconnect, and RR applications because of its rich BGP features, the stability of IOS XR software, and its high scale. We have designed and tested the following profiles:

• ASR 9001 as RR

• ASR9k as peering and Enterprise, DC or SP inter-connect platform


ASR 9001 RR-tested scalability numbers are summarized in Table 6-2.

Table 6-2    ASR9k Route Reflector Scale Numbers

Feature                               Scale
eBGP sessions with 3 BGP instances    5k
eBGP routes with 3 BGP instances      Total route scale = 14M routes (IPv4 = 6M, VPNv4 = 5M, IPv6 = 1.5M, VPNv6 = 1.5M)
iBGP sessions with 2 BGP instances    5k
iBGP routes with 2 BGP instances      Total route scale = 10M (IPv4 = 402k, VPNv4 = 7.6M, VPNv6 = 2M)

In the Internet Peering and Inter-Connect profile, we used the topology described in Figure 6-1 to test Enterprise, Data Center, and SP peering and inter-connect use cases at scale. The following key features were tested in this profile:

• Inter-AS option B and C unicast routing

• BGP Flowspec

• NetFlow 1:10k sampling for IPv4, IPv6, and MPLS

• VXLAN L3VPN/L2VPN gateway handoff between Inter-AS core

• RFC 3107 PIC, BGP PIC edge for VPNv4, 6VPE, 6PE, etc.

• LFA, rLFA

• Inter-AS option C L2VPN VPWS/VPLS with BGP AD, Inter-AS MS-PW, FAT-PW

• Inter-Area/Inter-AS MPLS TE, P2MP TE

• Inter-AS native IPv4/v6 multicast, Rosen-mGRE-MVPNv4/v6, mLDP-MVPNv4/v6

• Native IPv4/v6, VPNv4/v6, VPWS/VPLS, native IPv4/v6 multicast, mGRE-MVPNv4/v6, PBB-EVPN over CsC

• Next-generation routing: LISP, LISP-MPLS gateway

• Next-generation MVPN: LSM with BGP C-mcast, dynamic P2MP-TE MVPN, BGP SAFI 2, 129, 5

• Next-generation L2VPN: PBB-EVPN

• Next-generation L2 multicast: VPLS LSM

• TI-MoFRR, MPLS-TP, bidirectional TE LSPs (aka Flex-LSP)



Feature

Global FIB v4

Global FIB v6

VRF (v4+v6)

VRF FIB v4

VRF FIB v6

LFIB

L3 interfaces

ARP Adjacencies

BGP session V4

BGP session V6

Labeled-BGP routes

OSPFv2 adjacency

OSPFv3 adjacency

OSPFv2 routes

OSPFv3 routes

ISISv4 adjacency

ISISv6 adjacency

ISISv4 routes

ISISv6 routes

IGP LFA

VRRP/HSRP

ECMP

MPLS label

Figure 6-1    ASR9k Internet Peering and Inter-Connect Profile Topology

(The figure shows CE1-CE3 and PE1-PE8, including an ASR 9922, together with RR1/RR2, ASBR1-ASBR4, and a CSC-CE spanning AS100 and AS200, each attached to IXIA test ports. Inter-AS eBGP carries VPNv4, 6PE/6VPE, VPWS/VPLS AD, MDT, MVPN BGP-AD, MVPN BGP C-Multicast, and PBB-EVPN; IP/LDP LFA + 3107 PIC and eBGP + 3107 PIC are used across the domains.)

The ASR9k scalability test results of Internet Peering and Inter-Connect Profile are shown in Table 6-3 .

Table 6-3    ASR9k Internet Peering and Inter-Connect Profile Scale Numbers

PE1

512k

18k

4k

2M

256k

8k

32k

3k

256k

10k

32k

32k

5k

10k

32k

32k

5k

10k

10k

400k

8k

512k

PE2

512k

128k

4k

2M

256k

8k

32k

3k

256k

10k

32k

32k

5k

10k

32k

32k

5k

10k

10k

400k

8k

512k

PE3

512k

128k

4k

2M

256k

8k

32k

3k

256k

10k

32k

32k

5k

10k

32k

32k

5k

10k

10k

400k

8k

512k

ASBR1

512k

128k

32k

3k

256k

10k

32k

32k

5k

10k

32k

32k

5k

10k

10k

400k

8k

512k

ASBR2

512k

128k

512k

32k

3k

256k

10k

32k

32k

5k

10k

32k

32k

5k

10k

10k

400k

8k

512k

ASBR3

512k

128k

512k

32k

3k

256k

10k

32k

32k

5k

10k

32k

32k

5k

10k

10k

400k

8k

512k

32k

32k

5k

10k

32k

3k

256k

10k

PE8

512k

128k

4k

2M

256k

512k

8k

32k

32k

5k

10k

10k

400k

8k

512k


Table 6-3 ASR9k Internet Peering and Inter-Connect Profile Scale Numbers (continued)

Feature

Intra-Area MPLS TE

Inter-Area MPLS TE

Intra-AS MPLS TE

ACL

L2 interfaces (physical)

L2 interfaces (bundle)

PW

MS-PW

BD/VFI

MAC

CFM MEP

CFM MIP

MPLS-TP

Policy-map

Class-map

Policers

Ingress Queue

Egress Queue

16k

32k

4k

4k

512k

4k

4k

1k

1k

1k

32k

64k

64k

PE1

1k

1k

1k

10k

16k

PE2

1k

1k

1k

10k

16k

16k

32k

4k

4k

512k

4k

4k

1k

1k

1k

32k

64k

64k

PE3

1k

1k

1k

10k

16k

16k

32k

4k

4k

512k

4k

4k

1k

1k

1k

32k

64k

64k

ASBR1

1k

1k

1k

10k

32k

4k

4k

512k

4k

4k

1k

1k

1k

32k

64k

64k

ASBR2

1k

1k

1k

10k

32k

4k

4k

512k

4k

4k

1k

1k

1k

32k

64k

64k

32k

4k

4k

512k

4k

4k

1k

1k

1k

32k

64k

64k

ASBR3

1k

1k

1k

10k

100G Edge and Core-Facing Ports

The ASR9k is positioned as the de facto 100G routing platform in the Enterprise, SP, Data Center, and Public Sector segments for UNI or edge services and NNI or core-facing connectivity. Table 6-4 describes 100G density and performance testing results based on UNI and NNI testing configurations of the ASR9k.

Table 6-4    Summary of 100G Support for UNI and NNI on ASR9K

Parameter                                          Typhoon
No. of 100G ports per slot                         2x100G line rate
SW support                                         XR 4.2.1
No. of 100G ports per slice                        1x100G
Bi-directional bandwidth                           200Gbps (100Gbps per NPU)
Bi-directional PPS                                 90Mpps/direction
UNI or Edge-facing service termination on 100G     Yes
NNI or Core-facing for 100G transport              Yes
nV cluster                                         Yes
nV satellite                                       Yes

16k

32k

4k

4k

512k

4k

4k

1k

1k

1k

32k

64k

64k

PE8

1k

1k

1k

10k

16k


Table 6-4    Summary of 100G Support for UNI and NNI on ASR9K (continued)

Parameter                      Typhoon
MACSEC Suite B+                No
MACSEC over Cloud              No
100G Pro-active Protection     Yes
CPAK Optics                    No
L2FIB MAC address              2M
L3FIB IPv4/IPv6 address        4M/2M
Bridge domain                  64k

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

SW Ver

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

We have validated the 100G line card throughput and latency of the ASR9k Typhoon line cards in the following two roles and summarized the performance in Table 6-5.

• UNI or edge-facing L2/L3/Multicast VPN services with features

• NNI or core-facing transport with features

The 100G deployment profiles we covered included MPLS, IPv4, and IPv6 in these applications: Internet Peering, DCI PE, SP Edge PE, Metro-Ethernet PE and P, WAN-Core PE and P router, and general-purpose Core P router.

Table 6-5    Typhoon 100G Forwarding Chain Performance

Feature

MPLS

MPLS

MPLS

IPv4

IPv4

IPv4

IPv4

IPv6

IPv6

IPv6

IPv6

NNI/Core

NNI/Core

NNI/Core

NNI/Core

IPv6

L3VPN

NNI/Core

NNI/Edge

IPv4 ACL UNI/Edge

IPv4 ACL NNI/Core

IPv4 ACL NNI/Core

IPv4 QoS NNI/Core

UNI/Edge or

NNI/Core

Facing Role

NNI/Core

NNI/Core

NNI/Core

NNI/Core

NNI/Core

NNI/Core

NNI/Core

Sub-Feature mpls_swap mpls_depo mpls_impo

IPv4 10K BGP route

IPv4 500K BGP+uRPF

IPv4 non recursive

IPv4 500K BGP route

IPv6_50K BGP route + QoS

IPv6_nonrcur udp NH

IPv6_50K BGP route

IPv6_10K BGP route + QoS

IPv6_50K BGP route + QoS

L3VPN_30vrf output_acl input_acl in+out_acl in+out_policy

Linecard

Linerate

Packet

Size

(bytes)

A9K-2x100GE-SE 130

A9K-2x100GE-SE 176

A9K-2x100GE-SE 175

A9K-2x100GE-SE 136

A9K-2x100GE-SE 212

A9K-2x100GE-SE 114

A9K-2x100GE-SE 160

A9K-2x100GE-SE 384

A9K-2x100GE-SE 196

A9K-2x100GE-SE 361

A9K-2x100GE-SE 359

A9K-2x100GE-SE 384

A9K-2x100GE-SE 232

A9K-2x100GE-SE 140

A9K-2x100GE-SE 199

A9K-2x100GE-SE 333

A9K-2x100GE-SE 230

18

15

15

15

18

14

17

17

16

16

14

14

14

15

14

16

Min

Latency

(us)

15


5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

5.1.0

Table 6-5

SW Ver

5.1.0

5.1.0

5.1.0

Typhoon 100G Forwarding Chain Performance (continued)

Feature

IPv4 QoS

UNI/Edge or

NNI/Core

Facing Role

IPv4 QoS NNI/Core

IPv4 QoS NNI/Core

NNI/Core

IPv4 QoS NNI/Core

IPv4 QoS NNI/Core

L2

L2

BVI mVPN

L2VPN

L2VPN

L2VPN

L2VPN

UNI/Edge

UNI/Edge

Multicast UNI/Edge

Multicast UNI/Edge

UNI/Edge

UNI/Edge

UNI/Edge

UNI/Edge

UNI/Edge

UNI/Edge

Sub-Feature out shaper inpol+outshap

IPv4 500K BGP route_inpol+outshap input_policy output_policy

Linecard

A9K-2x100GE-SE

A9K-2x100GE-SE

A9K-2x100GE-SE

Linerate

Packet

Size

(bytes)

168

218

264

Bridge xconnect mcast_IPv4 mcast_IPv6

L2 EFP BVI L3_2K BVI mVPN 12vrf_100mroute

VPLS+qos

VPWS 3ac+3pw

A9K-2x100GE-SE 223

A9K-2x100GE-SE 209

A9K-2x100GE-SE

A9K-2x100GE-SE

A9K-2x100GE-SE

A9K-2x100GE-SE

A9K-2x100GE-SE

A9K-2x100GE-SE

A9K-2x100GE-SE

A9K-2x100GE-SE

129

113

277

516

592

507

596

319

VPLS_9BD+9ac+27pw A9K-2x100GE-SE 374

VPWS_3ac+3pw+inpol+outshap A9K-2x100GE-SE 326

17

15

17

15

16

15

14

13

15

14

16

15

Min

Latency

(us)

15

16

16


A P P E N D I X

A

Related Documents

The Cisco Enterprise L3 Virtualization Design and Implementation Guide is part of a set of resources that comprise the Cisco EPN System documentation suite. The resources include:

• EPN 3.0 System Concept Guide: Provides general information about Cisco's EPN 3.0 System architecture, its components, service models, and the functional considerations, with specific focus on the benefits it provides to operators.

• EPN 3.0 System Brochure: At-a-glance brochure of the Cisco Evolved Programmable Network (EPN).

• EPN 3.0 MEF Services Design and Implementation Guide: Design and implementation guide with configurations for deploying the Metro Ethernet Forum service transport models and use cases supported by the Cisco EPN System concept.

• EPN 3.0 Transport Infrastructure Design and Implementation Guide: Design and implementation guide with configurations for the transport models and cross-service functional components supported by the Cisco EPN System concept.

• EPN 3.0 Mobile Transport Services Design and Implementation Guide: Design and implementation guide with configurations for deploying the mobile backhaul service transport models and use cases supported by the Cisco EPN System concept.

• EPN 3.0 Residential Services Design and Implementation Guide: Design and implementation guide with configurations for deploying the consumer service models and the unified experience use cases supported by the Cisco EPN System concept.

• EPN 3.0 Enterprise Services Design and Implementation Guide: Design and implementation guide with configurations for deploying the enterprise L3VPN service models over any access and the personalized use cases supported by the Cisco EPN System concept.

Note    All of the documents listed above, with the exception of the System Concept Guide and System Brochure, are considered Cisco Confidential documents. Copies of these documents may be obtained under a current Non-Disclosure Agreement with Cisco. Please contact a Cisco Sales account team representative for more information about acquiring copies of these documents.
