Traffic Manager
Vahid Tabatabaee
Fall 2007
ENTS689L: Packet Processing and Switching

References
- Panos C. Lekkas, "Network Processors: Architectures, Protocols, and Platforms," McGraw-Hill.
- Tam-Anh Chu, "WAN Multiprotocol Traffic Management: Theory & Practice," Communications Design Conference, San Jose, September 23-26, 2002.

Why do we need Traffic Management?
- Until recently, traffic was treated under a best-effort paradigm.
- The Internet Protocol has ended up as the common network protocol for multi-service networks and applications.
- With the emergence of new applications, especially those provided by the new generation of wireless protocols, the situation will become even more complex in the future.
- These applications have different performance requirements.
- We have to use network resources efficiently to run a profitable network.

Traffic Management for Best Effort Traffic
- Best effort should not be interpreted as no effort!
- In reality, edge routers are frequently over-subscribed: the connection to the core network usually has less capacity than the sum of the access ports.
- In the best-effort paradigm we want to treat all USERS equally, but users are distributed non-uniformly across the access ports.
- A simple round-robin scheduler treats all PORTS equally, not all users (see the small sketch below for the resulting imbalance).
- The situation gets more complex when:
  - the number of active users changes dynamically;
  - each user has different applications, requirements, and service level agreements.
[Figure: access network with DSL modems feeding DSLAM 1 through DSLAM K and Ethernet Switch 1 through Ethernet Switch 30, each attached to an access port of the edge router, which connects to the core network over an OC-48 link.]

Traffic Management Objective
- To share the network resources (bandwidth and memory) unequally between users and applications, according to their requirements.
- Traffic flows must be identified and classified into multiple queues so that QoS can be controlled.
- Network protocols and architectures such as IntServ, DiffServ, and MPLS help us provide QoS in the network.
- QoS seeks to specify and control five fundamental network variables:
  - bandwidth (throughput)
  - latency
  - jitter
  - packet loss
  - link availability

Traffic Management vs. Traffic Engineering
- Traffic management is performed on the data plane, over the packets:
  - resource allocation: scheduling, shaping, congestion control, packet discard
- Traffic engineering is performed on the control plane, to set up routes and paths:
  - load balancing
  - failure recovery
  - link utilization control

Traffic Management Obstacles
- We have enough knowledge about the algorithms and their properties:
  - bounds on delay and memory requirements
- Unresolved challenges:
  - Is there a systematic way to set the parameters? To some extent the answer is yes, but the theoretical bounds are very loose.
  - What if we set the parameters wrong? Is there a systematic way to pinpoint the problem? As far as I know, the answer is no.
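To make the best-effort fairness problem above concrete, here is a minimal sketch (not from the slides) comparing per-port round robin with per-user fair sharing on an over-subscribed uplink. All rates and user counts are made-up numbers, and the function names are illustrative.

```python
# Illustrative sketch: why per-PORT round robin is not per-USER fair when
# users are distributed non-uniformly across access ports.
UPLINK_CAPACITY_MBPS = 100.0          # hypothetical uplink rate
users_per_port = [1, 4, 25]           # hypothetical user distribution

def per_user_rates_port_rr(capacity, users_per_port):
    """Round robin over PORTS: every port gets an equal share of the uplink,
    so a user's rate depends on how many neighbours share its port."""
    port_share = capacity / len(users_per_port)
    return [port_share / n for n in users_per_port]

def per_user_rates_user_fair(capacity, users_per_port):
    """Fair sharing over USERS: every active user gets the same rate,
    regardless of which port it sits behind."""
    total_users = sum(users_per_port)
    return [capacity / total_users] * len(users_per_port)

if __name__ == "__main__":
    rr = per_user_rates_port_rr(UPLINK_CAPACITY_MBPS, users_per_port)
    fair = per_user_rates_user_fair(UPLINK_CAPACITY_MBPS, users_per_port)
    for i, n in enumerate(users_per_port):
        print(f"port {i} ({n:2d} users): port-RR {rr[i]:6.2f} Mb/s/user, "
              f"user-fair {fair[i]:6.2f} Mb/s/user")
```

Running it shows a lone user behind a lightly loaded port receiving many times the rate of a user behind a busy port, which is exactly the imbalance per-flow queueing and scheduling are meant to remove.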
Major Tasks and Algorithms
- Statistics gathering
- Traffic policing
- Traffic shaping
- Scheduling
- Queueing and buffer management
- Congestion avoidance and packet dropping

Statistics
- We need to gather statistics:
  - number of packet arrivals for each flow
  - number of discarded packets for each flow
  - number of non-conforming packets
- On-chip counters are usually used to gather this information.
- Only the traffic manager has information about the network congestion level, so packet marking should be done based on that congestion level.

Packet Marking
- It is important to make sure that packets conform to the SLA.
- In the DiffServ AF PHB, a marking algorithm such as the two-rate three-color marker (trTCM) or the single-rate three-color marker (srTCM) establishes the packet-discarding precedence:
  - trTCM uses two rates and three packet colors; useful when a peak rate should be enforced.
  - srTCM uses one rate and three packet colors; useful when only the burst size matters.
- Green maps to AFx1, Yellow to AFx2, and Red to AFx3.
Source: http://www3.ietf.org/proceedings/99mar/slides/diffserv-tcm-99mar/

Two Rate TCM
- Parameters:
  - Peak Information Rate (PIR) and Peak Burst Size (PBS)
  - Committed Information Rate (CIR) and Committed Burst Size (CBS)
  - PIR > CIR
- (See the trTCM sketch below.)
Source: http://www3.ietf.org/proceedings/99mar/slides/diffserv-tcm-99mar/

Single Rate TCM
- Parameters:
  - Committed Information Rate (CIR)
  - Committed Burst Size (CBS)
  - Excess Burst Size (EBS)
Source: http://www3.ietf.org/proceedings/99mar/slides/diffserv-tcm-99mar/

Traffic Shaping
- Traffic shaping is usually done in the egress line card to shape and smooth the outgoing traffic.
- The token rate regulates the transfer of packets: if sufficient tokens are available, packets enter the network without delay.
- The bucket size B determines how much burstiness is allowed into the network.

Congestion Management
- We discard packets to avoid congestion.
- Simple tail dropping results in TCP global synchronization.
- RED starts to randomly drop packets when the average buffer occupancy exceeds TH_min.
- In WRED, different queues have different buffer-occupancy thresholds.
http://www.cisco.com/warp/public/473/187.html#topic5

Dropping Policy in RED
- F = (avg - TH_{min}) / (TH_{max} - TH_{min})
- P_b = F \cdot P_{max}
- count = number of consecutive packets not discarded
- P_a = P_b / (1 - count \cdot P_b)
- (See the RED sketch below.)
Floyd, S. and Jacobson, V., "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, Vol. 1, No. 4, August 1993, pp. 397-413.

Scheduling
- The scheduler decides which queue is served next.
- Round Robin (RR): every queue is served in a round-robin fashion.
- Weighted Round Robin (WRR): queue i is served N_i times in each round-robin cycle.
- Priority Queueing: a lower-priority queue is only served when there is no higher-priority backlogged traffic.
- Weighted Fair Queueing (WFQ):
  - provides minimum bandwidth guarantees (their fair shares) for the different queues;
  - excess bandwidth (if any) is distributed among the backlogged flows in proportion to their weights;
  - proven to provide delay bounds for well-behaved (token-bucket-constrained) traffic flows.
- Deficit Round Robin (DRR): a good approximation of WFQ.
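The following is a minimal sketch of a color-blind two-rate three-color marker in the spirit of the trTCM slide above: two token buckets, one for the peak rate and one for the committed rate. The class name, units (bytes and bytes per second), and the continuous-refill style are my assumptions, not something specified on the slides.

```python
# Minimal color-blind trTCM sketch (assumption: rates in bytes/s, burst sizes
# in bytes; names are illustrative).  Uses two token buckets: a peak bucket
# (PIR, PBS) and a committed bucket (CIR, CBS), with PIR > CIR.
import time

GREEN, YELLOW, RED = "green", "yellow", "red"

class TrTCM:
    def __init__(self, cir, cbs, pir, pbs):
        self.cir, self.cbs = cir, cbs      # committed rate / burst size
        self.pir, self.pbs = pir, pbs      # peak rate / burst size
        self.tc, self.tp = cbs, pbs        # both buckets start full
        self.last = time.monotonic()

    def _refill(self, now):
        # Tokens accumulate continuously, capped at the burst sizes.
        dt = now - self.last
        self.tc = min(self.cbs, self.tc + self.cir * dt)
        self.tp = min(self.pbs, self.tp + self.pir * dt)
        self.last = now

    def mark(self, pkt_len, now=None):
        """Return the color of a packet of pkt_len bytes."""
        self._refill(time.monotonic() if now is None else now)
        if self.tp < pkt_len:              # exceeds the peak bucket -> red
            return RED
        if self.tc < pkt_len:              # fits peak but not committed -> yellow
            self.tp -= pkt_len
            return YELLOW
        self.tp -= pkt_len                 # conforms to both rates -> green
        self.tc -= pkt_len
        return GREEN
```

As on the Packet Marking slide, green would then map to AFx1, yellow to AFx2, and red to AFx3; the same single-bucket mechanism, without the coloring, is what the Traffic Shaping slide describes for smoothing egress traffic.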
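The next sketch implements the RED drop decision exactly as given by the formulas on the Dropping Policy in RED slide. It assumes the average queue size `avg` (the EWMA of the instantaneous queue length) is maintained elsewhere; the class and attribute names are illustrative.

```python
# RED drop decision from F, P_b, P_a above (avg is computed elsewhere).
import random

class RedDropper:
    def __init__(self, th_min, th_max, p_max):
        self.th_min, self.th_max, self.p_max = th_min, th_max, p_max
        self.count = 0                        # packets accepted since last drop

    def should_drop(self, avg):
        """Return True if the arriving packet should be discarded."""
        if avg < self.th_min:                 # below minimum threshold: accept
            self.count += 1
            return False
        if avg >= self.th_max:                # above maximum threshold: drop
            self.count = 0
            return True
        f = (avg - self.th_min) / (self.th_max - self.th_min)
        p_b = f * self.p_max
        denom = 1.0 - self.count * p_b        # count spreads drops out evenly
        p_a = 1.0 if denom <= 0 else p_b / denom
        if random.random() < p_a:
            self.count = 0
            return True
        self.count += 1
        return False
```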
GPS and WFQ
- One problem with WRR is that flows with short packets are penalized, because WRR counts packets rather than bytes.
- Generalized Processor Sharing (GPS) was introduced to take care of this problem.
- In GPS, each flow i is assigned a weight \phi_i. The service rate of any non-empty queue i on a link of capacity C is
    g_i = \frac{\phi_i}{\sum_{j \in \text{busy queues}} \phi_j} \, C
- Using GPS we can bound the delay of packets: if flow i is limited by a token bucket specification, where B_i and R_i are the bucket size and token rate, and g_i \ge R_i, then
    D_i \le B_i / R_i

GPS and WFQ
- Implementing GPS explicitly is only possible if we can serve flows at bit granularity; GPS is therefore called a fluid policy, because it needs to serve fractions of packets.
- WFQ is a packetized policy that tracks the output of GPS: the idea is to calculate the finishing time every packet would have if we were able to implement GPS, and always serve the packet with the smallest finishing time.
- WFQ has a bounded delay too:
    D_i \le \frac{B_i}{R_i} + \frac{(K_i - 1) L_i}{R_i} + \sum_{m=1}^{K_i} \frac{L_{\max}}{C_m}

Bounded Delay for WFQ
    D_i \le \frac{B_i}{R_i} + \frac{(K_i - 1) L_i}{R_i} + \sum_{m=1}^{K_i} \frac{L_{\max}}{C_m}
- D_i: maximum delay of flow i
- B_i: token bucket size of flow i
- R_i: token rate of flow i
- K_i: number of nodes in the path of flow i
- L_i: maximum packet size of flow i
- L_max: maximum packet length over all flows through the nodes in the path
- C_m: outgoing link capacity at node m
- "Pay for the burst once": the burst term B_i/R_i appears only once, not once per node.
- (A small numerical evaluation of this bound appears at the end of these slides.)

Arrival and Service Curves
[Figure: arrival and service curves and the resulting backlog bound.]
Source: Patrick Maillé, "An Introduction to Network Calculus".

DRR
- Each queue has a deficit counter.
- At the beginning of each round, the deficit counter of every backlogged queue is incremented by its quantum value.
- The quantum value determines how many bytes from that queue we want to schedule in each round.
- A round is one round-robin iteration over the backlogged queues.
- In a round, a queue is served as long as its head-of-line packet is no longer than its deficit counter; whenever a packet is served, the deficit counter is reduced by the packet length.
- (A DRR sketch appears at the end of these slides.)

Comparison of Scheduling
[Comparison table of the scheduling disciplines; not reproduced here.]

RECAP
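To see the WFQ delay bound in action, here is a short numerical evaluation of the formula from the Bounded Delay for WFQ slide. All parameter values are hypothetical, chosen only to show how the three terms contribute.

```python
# Worked evaluation of D_i <= B_i/R_i + (K_i - 1)*L_i/R_i + sum_m L_max/C_m.
def wfq_delay_bound(B_i, R_i, L_i, L_max, link_capacities):
    K_i = len(link_capacities)
    burst_term = B_i / R_i                        # paid once for the whole path
    packet_term = (K_i - 1) * L_i / R_i           # per-hop packetization penalty
    store_forward_term = sum(L_max / C_m for C_m in link_capacities)
    return burst_term + packet_term + store_forward_term

if __name__ == "__main__":
    bound = wfq_delay_bound(
        B_i=16_000.0,                             # token bucket depth, bytes
        R_i=1_000_000.0,                          # reserved rate, bytes/s
        L_i=1_500.0,                              # max packet of this flow, bytes
        L_max=1_500.0,                            # max packet of any flow, bytes
        link_capacities=[125_000_000.0] * 5,      # five 1 Gb/s hops, in bytes/s
    )
    print(f"end-to-end delay bound: {bound * 1e3:.3f} ms")   # about 22 ms
```

The burst term dominates here and is counted only once along the path, which is the "pay for the burst once" property noted on the slide.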
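Finally, a minimal sketch of a DRR scheduler following the description on the DRR slide: backlogged queues get their quantum at the start of each round, and a queue is served while its head-of-line packet fits within its deficit. The queue contents, quantum values, and class name are illustrative.

```python
# Minimal DRR sketch; queues hold packet lengths in bytes.
from collections import deque

class DrrScheduler:
    def __init__(self, quanta):
        self.quanta = quanta                      # bytes credited per round per queue
        self.queues = [deque() for _ in quanta]
        self.deficits = [0] * len(quanta)

    def enqueue(self, qid, pkt_len):
        self.queues[qid].append(pkt_len)

    def serve_round(self):
        """One round-robin pass over backlogged queues; returns the
        (queue id, packet length) pairs served in this round."""
        served = []
        for qid, q in enumerate(self.queues):
            if not q:                             # only backlogged queues get credit
                continue
            self.deficits[qid] += self.quanta[qid]
            while q and q[0] <= self.deficits[qid]:
                pkt = q.popleft()
                self.deficits[qid] -= pkt
                served.append((qid, pkt))
            if not q:                             # emptied queue: reset its deficit
                self.deficits[qid] = 0
        return served

# Example: equal quanta give roughly equal BYTE shares per round even though
# the two queues carry very different packet sizes.
sched = DrrScheduler(quanta=[1500, 1500])
for _ in range(4):
    sched.enqueue(0, 1500)                        # long packets
for _ in range(12):
    sched.enqueue(1, 500)                         # short packets
print(sched.serve_round())                        # one 1500 B vs three 500 B packets
print(sched.serve_round())
```

This is the sense in which DRR approximates WFQ: it allocates bandwidth by bytes rather than by packet count, at round-robin cost per decision.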