Theory and Practice
Dimitrios Kalogeras
Introduction – History – Background
QoS Metrics
QoS Architecture
QoS Architecture Components
Applications in Cisco Routers
The Internet, originally designed for U.S. government use, offered only one service level: Best Effort.
– No guarantees of transit time or delivery
– Rudimentary prioritization was available, but it was rarely used.
Commercialization began in the early 1990s
– Private (intranet) networks using Internet technology appeared.
– Commercial users began paying directly for Internet use.
– Commerce sites tried to attract customers by using graphics.
– Industry used the Internet and intranets for internal, shared communications that combined previously separate, specialized networks – each with its own specific technical requirements.
– New technologies (voice over the Internet, etc.) appeared, designed to capitalize on inexpensive Internet technologies.
Network flexibility is becoming central to enterprise strategy
– Rapidly-changing business functions no longer carried out in stable ways, in unchanging locations, or for long time-periods
– Network-enabled applications often crucial for meeting new market opportunities, but there’s no time to custom-build a network
Traffic is bursty
Interactive voice, video applications have stringent bandwidth and latency demands
Multiple application networks are being combined into consolidated corporate utility networks
– Bandwidth contention as critical transaction traffic is squeezed by web browsing, file transfers, or other low-priority or bulk traffic
– Latency problems as interactive voice and video are squeezed by transaction, web browsing, file transfer, and bulk traffic
QoS development inspired by new types of applications in IP environment:
Video Streaming Services
Video Conferencing
VoIP
Legacy SNA / DLSw
Quality of Service (QoS) classifies network traffic and then ensures that some of it receives special handling.
– May track each individual dataflow (sender:receiver) separately.
– May include attempts to provide better error rates, lower network transit time (latency), and decreased latency variation (jitter).
Differentiated Class of Service (CoS) is a simpler alternative to QoS.
– Doesn't try to distinguish among individual dataflows; instead, uses simpler methods to classify packets into one of a few categories.
– All packets within a particular category are then handled in the same way, with the same quality parameters.
Policy-Based Networking provides end-to-end control.
– The rules for access and for management of network resources are stored as policies and are managed by a policy server.
In random arrival, the time that each packet arrives is completely independent of the time that any other packet arrives.
– If the true situation is that arrivals tend to be evenly spaced, then random arrival calculations will overestimate the queuing delay.
– If the true situation is that arrivals are bunched in groups (typical of data flows, such as packets and acknowledgements), then random arrival calculations will underestimate the queuing delay.
Our intuition is usually misleading when we think of random processes.
– We tend to assume that queue size increases linearly as the number of customers increases.
– But, with random arrival, there is a drastic increase in queue size as the customer arrival rate approaches 80% of the theoretical server capacity. There’s no way to store the capacity that is unused by late customers, but early customers increase the queue.
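The non-linear growth can be checked against the classic M/M/1 result for random (Poisson) arrivals, where the average number of customers in the system at utilization ρ is ρ/(1−ρ). A sketch, not part of the original slides:

```python
def mm1_occupancy(utilization: float) -> float:
    """Average number of customers in an M/M/1 system (random arrivals,
    random service times) at the given utilization rho."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return utilization / (1 - utilization)

# Queue growth is far from linear: going from 40% to 80% load
# multiplies the backlog by six, and 95% is another five-fold jump.
for rho in (0.2, 0.4, 0.6, 0.8, 0.9, 0.95):
    print(f"{rho:>4.0%} load -> {mm1_occupancy(rho):5.1f} in system")
```

At 80% load the average occupancy is already 4, and it diverges as utilization approaches 100%, which is why the 80% point is singled out above.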
The surprising increase in queue length is best shown by a graph:
[Graph: queue length vs. system capacity (20%–80%) – the actual curve rises sharply near capacity, far above the linear intuitive expectation]
Although random arrival is very convenient mathematically (it’s relatively simple to do random arrival calculations), it has been shown that much data traffic is self-similar.
– Ethernet and Internet traffic flows, in particular, are self-similar.
– The rate of initial connections is still random, however.
Self-similar traffic shows the same pattern regardless of changes in scale.
– Fractal geometry (e.g., a coastline) is an example.
Self-similar traffic has a heavy tail.
– The probabilities of extremely large values (e.g., file lengths of a gigabyte or more) don’t decrease as rapidly as they would with random distributions of file lengths.
– This matches real data traffic behaviours.
Long file downloads mixed with short acknowledgements
Compressed video with action scenes mixed with static scenes
“If high levels of utilization are required, drastically larger buffers are needed for self-similar traffic than would be predicted based on classical queuing analysis [i.e., assuming random behaviour].” [Stallings]
– Combining self-similar traffic streams doesn’t quickly result in smoother traffic patterns; it’s only at the highest levels of aggregation that random-arrival statistics can be used.
[Graph: queue length vs. system capacity (20%–80%) – the self-similar curve rises far earlier and faster than the random-arrival curve]
Four metrics are used to describe a packet’s transmission through a network – Bandwidth, Delay, Jitter, and Loss
Using a pipe analogy, then for each packet:
Bandwidth is the perceived width of the pipe
Delay is the perceived length of the pipe
Jitter is the perceived variation in the length of the pipe
Loss is the perceived leakiness of the pipe
[Diagram: the path from A to B as perceived by a packet, with Delay as the length of the pipe]
The amount of bandwidth available to a packet is affected by:
The slowest link found in the transmission path
The amount of congestion experienced at each hop – TCP slow-start and windowing
The forwarding speed of the devices in the path
The queuing priority given to the packet flow
[Diagram: a path of 100 Mb/s, 10 Mb/s, and 2 Mb/s links – the slowest link limits the flow to 2 Mb/s maximum bandwidth]
The amount of delay experienced by a packet is the sum of the:
Fixed Propagation Delays
Bounded by the speed of light and the path distance
Fixed Serialization Delays
The time required to physically place a packet onto a transmission medium
Variable Switching Delays
The time required by each forwarding engine to resolve the next-hop address and egress interface for a packet
Variable Queuing Delays
The time required by each switching engine to queue a packet for transmission
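The four components above can be added up for a single path. A rough sketch (the helper names and sample numbers are illustrative, not from the slides):

```python
def propagation_delay(distance_km: float, signal_mps: float = 2.0e8) -> float:
    """Fixed: bounded by signal speed (~2/3 c in fibre/copper) and distance."""
    return distance_km * 1_000 / signal_mps

def serialization_delay(packet_bytes: int, link_bps: int) -> float:
    """Fixed: time to clock the packet's bits onto the transmission medium."""
    return packet_bytes * 8 / link_bps

def one_way_delay(distance_km, packet_bytes, link_bps, hops,
                  switching_s=0.0, queuing_s=0.0):
    """Sum of fixed propagation/serialization and variable switching/queuing,
    assuming for simplicity that every hop uses the same link speed."""
    return (propagation_delay(distance_km)
            + hops * serialization_delay(packet_bytes, link_bps)
            + hops * (switching_s + queuing_s))

# A 1500-byte packet on a 56 kb/s link needs ~214 ms just for serialization:
print(f"{serialization_delay(1500, 56_000) * 1000:.0f} ms")
```

Note how serialization dominates on slow links: 214 ms for one packet at 56 kb/s versus 1.2 ms for the same packet at 10 Mb/s.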
The amount of Jitter experienced by a packet is affected by:
Serialization delays on low-speed interfaces
Variations in queue-depth due to congestion
Variations in queue cycle-times induced by the service architecture – First-Come, First-Served, for example
[Diagram: voice packets leaving a 10 Mb/s Ethernet every 20 ms arrive every ~214 ms after queuing behind 1500-byte data packets on a 56 kb/s WAN – 214 ms being the serialization delay of a 1500-byte packet at 56 kb/s]
The amount of loss experienced by a packet flow is affected by:
Buffer exhaustion due to congestion caused by oversubscription or rate-decoupling
Intentional packet drops due to congestion control mechanism such as Random Early Discard
[Diagram: several Gigabit Ethernet links oversubscribing a DS-3 – the resulting congestion exhausts the output buffers]
Best Effort Service
Integrated Service
Differentiated Service
17
The four approaches, by the amount of state kept in the network:
1. Best Effort – no state
2. IntServ/RSVP – per-flow state
3. DiffServ – aggregated state
4. RSVP+DiffServ+MPLS – aggregated state
What exactly IP does:
All packets treated equally
Unpredictable bandwidth
Unpredictable delay and jitter
The Integrated Services (IntServ) model builds upon the Resource Reservation Protocol (RSVP)
Reservations are made per simplex flow
Applications request reservations for network resources which are granted or denied based on resource availability
Senders specify the resource requirements via a PATH message that is routed to the receiver
Receivers reserve the resources with a RESV message that follows the reverse path
[Diagram: the Sender’s PATH message travels downstream to the Receiver; the Receiver’s RESV message follows the reverse path]
The Integrated Services Model can be divided into two parts – the Control and Data Planes
Control Plane: Route Selection, Admission Control, Reservation Setup, Reservation Table
Data Plane: Flow Identification, Packet Scheduler
Control Plane
Route Selection – Identifies the route to follow for the reservation (typically provided by the IGP processes)
Reservation Setup – Installs the reservation state along the selected path
Admission Control – Ensures that resources are available before allowing a reservation
Data Plane
Flow Identification – Identifies the packets that belong to a given reservation (using the packet’s 5-Tuple)
Packet Scheduling – Enforces the reservations by queuing and scheduling packets for transmission
Applications using IntServ can request two basic service types:
Guaranteed Service
Provides guaranteed bandwidth and queuing delays end-to-end, similar to a virtual-circuit
Applications can expect hard-bounded bandwidth and delay
Controlled-Load Service
Provides a Better-than-Best-Effort service, similar to a lightly-loaded network of the required bandwidth
Applications can expect little to zero packet loss, and little to zero queuing delay
These services are mapped into policies that are applied via CB-WFQ, LLQ, or MDRR
IntServ routers need to examine every packet to identify and classify the microflows using the 5-tuple
IntServ routers must maintain a token-bucket per microflow
Guaranteed Service requires the creation of a queue for each microflow
Data structures must be created and maintained for each reservation
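The scaling cost is easy to picture: every reservation adds an entry keyed by the 5-tuple, and every forwarded packet triggers a lookup against that table. A minimal sketch (the class and field names are illustrative):

```python
from collections import defaultdict, namedtuple

# The 5-tuple that identifies a microflow.
FiveTuple = namedtuple("FiveTuple", "src_ip dst_ip proto src_port dst_port")

class IntServState:
    """Per-microflow state an IntServ router must keep (illustrative sketch)."""
    def __init__(self):
        self.reservations = {}           # 5-tuple -> reserved rate (bps)
        self.queues = defaultdict(list)  # Guaranteed Service: a queue per microflow

    def reserve(self, flow: FiveTuple, rate_bps: int) -> None:
        self.reservations[flow] = rate_bps

    def classify(self, flow: FiveTuple):
        """Every packet costs one lookup against the reservation table."""
        return self.reservations.get(flow)

state = IntServState()
voip = FiveTuple("10.0.0.1", "10.0.0.2", "udp", 16384, 5004)
state.reserve(voip, 64_000)
print(state.classify(voip))  # the packet matches a 64 kb/s reservation
```

With thousands of concurrent microflows, the tables, token buckets, and per-flow queues all grow linearly, which is exactly the scalability problem DiffServ addresses.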
The DiffServ Model specifies an approach that offers a service better than Best-Effort and more scalable than IntServ
Traffic is classified into one of five forwarding classes at the edge of a DiffServ network
Forwarding classes are encoded in the Differentiated Services Codepoint (DSCP) field of each packet’s IP header
DiffServ routers apply pre-provisioned Per-Hop Behaviors (PHBs) to packets according to the encoded forwarding class
DiffServ allocates resources to aggregated rather than to individual flows
DiffServ moves the classification, policing, and marking functions to the boundary nodes – the core simply forwards based on aggregate class
DiffServ defines Per-Hop forwarding behaviors, not end-to-end services
DiffServ guarantees are based on provisioning, not reservations
The DiffServ focus is on individual domains, rather than end-to-end deployments
DS field layout: DSCP (6 bits) | CU (2 bits)
The DS field is composed of the 6 high-order bits of the IP ToS field
The DS field is functionally similar to the IPv4 ToS and IPv6 Traffic Class fields
The DSCP space is divided into three pools:
nnnnn0 – Standards Use
nnnn11 – Experimental / Local Use
nnnn01 – Experimental / Local Use, possible Standards Use
Class Selector Codepoints occupy the high-order bits (nnn000) and map to the IPv4 Precedence bits
The DS Field can encode:
Eight Class Selector Codepoints compatible with legacy systems (CS0-7)
An Expedited Forwarding (EF) Class
Four Assured Forwarding Classes, each with three Drop Precedence levels (AFxy, where x=1-4 and y=1-3)
Packets in a higher AF Class have a higher transmit priority
Packets with a higher Drop Precedence are more likely to be dropped
Codepoint   DSCP
CS0 (DE)    000000
CS1         001000
AF11        001010
AF12        001100
AF13        001110
CS2         010000
AF21        010010
AF22        010100
AF23        010110
CS3         011000
AF31        011010
AF32        011100
AF33        011110
CS4         100000
AF41        100010
AF42        100100
AF43        100110
CS5         101000
EF          101110
CS6         110000
CS7         111000
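The codepoints in the table follow a simple bit pattern: the AF class sits in the three high-order bits of the 6-bit field and the drop precedence in the next two, so each value can be computed rather than memorized. A sketch (function names are mine, not from the slides):

```python
def af_dscp(af_class: int, drop_precedence: int) -> int:
    """DSCP of AF<class><drop>: class in the three high bits of the
    6-bit field, drop precedence in the next two, low bit zero."""
    if not (1 <= af_class <= 4 and 1 <= drop_precedence <= 3):
        raise ValueError("AF classes are 1-4, drop precedences 1-3")
    return (af_class << 3) | (drop_precedence << 1)

def cs_dscp(selector: int) -> int:
    """Class Selector codepoints: the IP Precedence value shifted into
    the three high-order bits (CS0..CS7 -> 000000..111000)."""
    return selector << 3

print(f"AF31 = {af_dscp(3, 1):06b}")  # matches 011010 in the table
print(f"CS5  = {cs_dscp(5):06b}")     # matches 101000
```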
A Per-Hop Behaviour (PHB) is an observable forwarding behaviour of a DS node applied to all packets with the same DSCP
PHBs do NOT mandate any specific implementation mechanisms
The EF PHB should provide a low-loss, low-delay, low-jitter, assured bandwidth service
The AF PHBs should provide increasing levels of service (higher bandwidth) for increasing AF levels
The Default PHB (CS0) should be equivalent to Best-Effort Service
Packets within a given PHB should not be re-ordered
DiffServ Boundary Nodes are responsible for classifying and conditioning packets as they enter a given DiffServ Domain
Packets pass first through Classification (Classifier, Marker) and then through Conditioning (Meter, Remarker, Shaper, Dropper):
Classifier – Examines each packet and assigns a Forwarding Class
Marker – Sets the DS field to match the Forwarding Class
Meter – Measures the traffic flow and compares it to the traffic profile
Remarker – Remarks (lowers) the DS field for out-of-profile traffic
Shaper – Shapes the traffic to match the traffic profile
Dropper – Drops out-of-profile traffic
[Diagram: boundary nodes classify and condition traffic into Premium, Gold, Silver, and Bronze classes; the core applies the corresponding PHBs, e.g. via LLQ/WRED]
As currently formulated, DiffServ is strong on simplicity and weak on guarantees
Virtual wire using EF is OK, but how much can be deployed?
DiffServ has no topology-aware admission control mechanism
The best of both worlds – Aggregated RSVP integrated with DiffServ
Best Effort – no state
DiffServ – aggregated state
RSVP + DiffServ – aggregated state, firm guarantees, admission control
IntServ – per-flow state
But – given the presence of a DiffServ domain in a network, how do we support RSVP end-to-end?
Routers at edge of a DS cloud perform microflow classification, policing, and marking
• Guaranteed Service is mapped to EF, Controlled-Load to AFx, and Best Effort to CS0
• Service Model to Forwarding Class mapping is arbitrary
RSVP signaling is used in both the IntServ and DiffServ regions for admission control
The DiffServ core makes and manages aggregate reservations for the DS Forwarding Classes based on the RSVP microflow reservations
The core then schedules and forwards packets based only on the DS Field
Border Routers implement per-flow classification, policing, and marking
The DiffServ region aggregates the flows into DS Forwarding Classes
RSVP signaling is propagated end-to-end
The IntServ regions contain Guaranteed or Controlled-Load microflows
The forwarding plane is still DiffServ
We now make a small number of aggregated reservations from ingress to egress
Microflow RSVP messages are carried across the DiffServ cloud
Aggregate reservations are dynamically adjusted to cover all microflows
RSVP flow-classifiers and per-flow queues are eliminated in the core
Scalability is improved – only the RSVP flow states are necessary
– Tested to 10K flows
Classification
Coloring
Admission Control
Traffic Shaping/Policing
Congestion Management
Congestion Avoidance
Signaling
Most fundamental QoS building block
The component of a QoS feature that recognizes and distinguishes between different traffic streams
Without classification, all packets are treated the same
Always performed at the network perimeter
Makes traffic conform to the internal network policy
Marks packets with special flags (colors)
Colors are used afterwards inside the network for QoS management
[Diagram: Packet → Classifier → Meter → Marker → Shaper/Policer → Admitted or Dropped]
IP header fields
TCP/UDP header fields
Routing information
Packet content (NBAR), e.g. HTTP, HTTPS, FTP, Napster, etc.
IP Precedence
DSCP
QoS Group
802.1p CoS
ATM CLP
Frame Relay DE
[Diagram: IPv4 header – Version, Length, ToS field (bits 8–15), Total Length, …]
ToS byte layout: Precedence (3 bits) | D | T | R | Unused
D = 0: Normal Delay, D = 1: Low Delay
T = 0: Normal Throughput, T = 1: High Throughput
R = 0: Normal Reliability, R = 1: High Reliability
111 Network Control
110 Internetwork Control
101 Critical
100 Flash Override
011 Flash
010 Immediate
001 Priority
000 Routine
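Since the precedence value is just the three high-order bits of the ToS byte, extracting it is a two-line operation. A sketch (the lookup table mirrors the list above; the function name is mine):

```python
PRECEDENCE_NAMES = {
    0b111: "Network Control", 0b110: "Internetwork Control",
    0b101: "Critical",        0b100: "Flash Override",
    0b011: "Flash",           0b010: "Immediate",
    0b001: "Priority",        0b000: "Routine",
}

def ip_precedence(tos_byte: int) -> int:
    """IP Precedence occupies the three high-order bits of the ToS byte."""
    return (tos_byte >> 5) & 0b111

tos = 0b101_00000  # precedence 5, D/T/R flags clear
print(PRECEDENCE_NAMES[ip_precedence(tos)])  # -> Critical
```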
DiffServ Code Point – DSCP (6 bits):
          Low Drop     Medium Drop   High Drop
          Precedence   Precedence    Precedence
Class 1   001010       001100        001110
Class 2   010010       010100        010110
Class 3   011010       011100        011110
Class 4   100010       100100        100110
MQC (Modular QoS Command-Line Interface)
CAR (Committed Access Rate)
Modular QoS CLI (MQC)
Command syntax introduced in 12.0(5)T
Reduces configuration steps and time
Uniform CLI across all main Cisco IOS-based platforms
Uniform CLI structure for all QoS features
1. Create Class Map – define a traffic class (match access list, input interface, IP Precedence, DSCP, protocol (NBAR), src/dst MAC address, mpls exp):
router(config)# class-map [match-any | match-all] class-name
2. Create Policy Map (Service Policy) – associate a class map with one or more QoS policies (bandwidth, police, queue-limit, random-detect, shape, set prec, set DSCP, set mpls exp):
router(config)# policy-map policy-map-name
3. Attach Service Policy – associate the policy map with an input or output interface:
router(config-if)# service-policy {input | output} policy-map-name
1. Create Class Map – defines traffic selection criteria
Router(config)# class-map class1
Router(config-cmap)# match ip precedence 5
Router(config-cmap)# exit
2. Create Policy Map- associates classes with actions
Router(config)# policy-map policy1
Router(config-pmap)# class class1
Router(config-pmap-c)# set mpls experimental 5
Router(config-pmap-c)# bandwidth 3000
Router(config-pmap-c)# queue-limit 30
Router(config-pmap)# exit
3. Attach Service Policy – enforces policy to interfaces
Router(config)# interface e1/1
Router(config-if)# service-policy output policy1
Router(config-if)# exit
MQC based (IOS 12.1(5)T):

! Traffic class definitions
class-map match-all premium
 match access-group name premium
!
class-map match-any trash
 match protocol napster
 match protocol fasttrack
!
! QoS policy definition
policy-map classify
 class premium
  set ip precedence priority
 class trash
  police 64000 conform-action set-prec-transmit 1 exceed-action drop
!
! ACL definition
ip access-list extended premium
 permit tcp host 10.0.0.1 any eq telnet
!
! QoS policy attached to interface
interface serial 2/1
 ip unnumbered loopback 0
 service-policy input classify
CAR based:

ip cef
!
! CAR definition
interface serial 2/1
 ip unnumbered loopback 0
 rate-limit input access-group 100 64000 8000 8000 conform-action set-prec-transmit 1 exceed-action set-prec-transmit 0
!
! ACL definition
access-list 100 permit tcp host 10.0.0.1 any eq http
Route-map based:

! Route-map definitions
route-map classify permit 10
 match ip address 100
 set ip precedence flash
!
route-map classify permit 20
 match ip next-hop 1
 set ip precedence priority
!
! Route-map attached to interface
interface serial 2/1
 ip unnumbered loopback 0
 ip policy route-map classify
!
! ACL definitions
access-list 1 permit 192.168.0.1
access-list 100 permit tcp host 10.0.0.1 any eq http
Used to assign more predictive behavior to traffic
Uses Token Bucket model
Token Bucket characterizes traffic source
Token Bucket main parameters:
Token Arrival Rate - v
Bucket Depth - Bc
Time Interval – tc
Link Capacity – C
[Diagram: tokens arrive at rate v into a bucket of depth Bc (tc = Bc/v); overflow tokens are discarded; incoming packets at link speed C are classed as Conform or Exceed]
Bucket is being filled with tokens at a rate v token/sec.
When bucket is full all the excess tokens are discarded.
When packet of size L arrives, bucket is checked for availability of corresponding amount of tokens.
If several packets arrive back-to-back and there are sufficient tokens to serve them all, they are accepted at peak rate (usually physical link speed).
If enough tokens available, packet is optionally colored and accepted to the network and corresponding amount of tokens is subtracted from the bucket.
If not enough tokens, special action on packet is performed.
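The steps above map directly onto a few lines of code. A minimal sketch, clocked with explicit timestamps rather than real time (one token = one byte here; the class name and 64 kb/s example values are mine):

```python
class TokenBucket:
    """Token bucket with arrival rate v (tokens/sec) and depth Bc."""
    def __init__(self, rate_v: float, depth_bc: float):
        self.v = rate_v
        self.bc = depth_bc
        self.tokens = depth_bc   # bucket starts full
        self.last = 0.0

    def offer(self, packet_len: int, now: float) -> bool:
        """True = conform (tokens subtracted), False = exceed (special action)."""
        # Fill at v tokens/sec; when the bucket is full, excess tokens are discarded.
        self.tokens = min(self.bc, self.tokens + (now - self.last) * self.v)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True
        return False

# 64 kb/s = 8000 bytes/s with a 12000-byte burst allowance: a back-to-back
# burst of eight 1500-byte packets conforms at link speed, the ninth exceeds.
bucket = TokenBucket(rate_v=8000, depth_bc=12000)
burst = [bucket.offer(1500, now=0.0) for _ in range(9)]
print(burst)
```

Because the bucket starts full, back-to-back packets are accepted at peak (link) rate until the Bc depth is spent, exactly as described above.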
Actions performed on nonconforming packets:
Dropped (Policing)
Delayed in queue either FIFO or WFQ (Shaping)
Colored/Recolored
Bucket depth is a characteristic of traffic burstiness
The maximum number of bytes transmitted over a period of time t is:
A(t)_max = Bc + v · t
Cisco Implementation
GTS (Generic Traffic Shaping)
If during the previous interval tc(n−1) the bucket Bc was not depleted (there is no congestion), then in the next interval tc(n) Bc+Be bytes are available for burst.
In Frame Relay implementations, packets admitted via Be tokens are marked with the DE bit.
Cisco Implementation
CBTS (Class-Based Traffic Shaping) allows higher throughput in an uncongested environment, up to a peak rate calculated as:
v_peak = v_CIR · (1 + Be/Bc)
The peak rate can also be set manually.
Cisco Implementation
CAR allows RED-like behavior:
– traffic fitting into Bc always conforms
– traffic fitting into Be conforms with probability proportional to the amount of tokens left in the bucket
– traffic not fitting into Be always exceeds
CAR uses the following parameters:
t – time period since the last packet arrival
Current Debt (D_cur) – amount of debt during the current time interval
Compound Debt (D_comp) – sum of all D_cur since the last drop
Actual Debt (D_act) – amount of tokens currently borrowed
Cisco Implementation
CAR Algorithm – for each arriving packet of length L:
1. Replenish the bucket with the v · t tokens accrued since the last arrival.
2. If Bc_cur – L > 0: set Bc_cur = Bc_cur – L → Conform Action.
3. Otherwise the packet must borrow: D_cur = L – Bc_cur; Bc_cur = 0; D_act = D_act + D_cur; D_comp = D_comp + D_cur.
4. If D_act > Be → Exceed Action.
5. Otherwise, if D_comp > Be → Exceed Action, and D_comp is reset to 0.
6. Otherwise → Conform Action.
GTS based:

! Shaper definitions
interface serial 2/1
 ip unnumbered loopback 0
 traffic-shape rate 64000 8000 1000 256
!
interface serial 2/2
 ip unnumbered loopback 0
 traffic-shape group 100 64000 8000 8000 512
!
! ACL definition
access-list 100 permit tcp host 10.0.0.1 any eq http

A shaper can only be used to control egress traffic!
CAR based (IOS 12.0(5)T):

ip cef
!
! CAR definitions
interface serial 2/1
 ip unnumbered loopback 0
 rate-limit output access-group 100 64000 8000 16000 conform-action transmit exceed-action drop
!
interface serial 2/2
 ip unnumbered loopback 0
 rate-limit input 128000 16000 32000 conform-action transmit exceed-action drop
!
! ACL definition
access-list 100 permit tcp host 10.0.0.1 any eq http

A policer can also be used to control ingress traffic!
MQC based (IOS 12.1(5)T):

! Class definitions
class-map match-all policed
 match protocol http
class-map match-all shaped
 match access-group name ftp-downloads
!
! QoS policy definition
policy-map bad-boy
 class policed
  police 64000 8000 8000 conform-action transmit exceed-action drop
 class shaped
  shape average 128000
!
! QoS policy attached to interface
interface serial 2/1
 ip unnumbered loopback 0
 service-policy output bad-boy
!
! ACL definition
ip access-list extended ftp-downloads
 permit tcp any eq ftp-data any
Why can’t my traffic reach the CIR value?
Cause: improper setting of the Bc and Be values.
CAR is aggressive: it drops excess packets, and the lost data must be retransmitted by upper layers (mainly TCP) after a timeout. This also causes TCP to shrink its window, reducing flow throughput.
Cisco Systems recommends the following settings:
Bc = 1.5 × CIR/8
Be = 2 × Bc
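Plugging a 64 kb/s CIR into these recommendations gives concrete burst sizes (a worked sketch; the function name is mine):

```python
def recommended_bursts(cir_bps: int) -> tuple:
    """Cisco-recommended CAR burst sizes in bytes: Bc = 1.5 x CIR/8, Be = 2 x Bc."""
    bc = int(1.5 * cir_bps / 8)   # CIR/8 converts bits/s to bytes/s
    be = 2 * bc
    return bc, be

# For CIR = 64 kb/s: Bc = 12000 bytes (1.5 seconds' worth of traffic), Be = 24000.
print(recommended_bursts(64_000))
```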
Traffic burst may temporarily exceed interface capacity
Without queuing this excess traffic will be lost
Queuing allows bursty traffic to be transmitted without drops
Queuing strategy defines order in which packets are transmitted through egress interface
Queuing introduces additional delay, which signals adaptive flows (like TCP) to back off their throughput
FIFO
Priority (Absolute)
Weighted Round Robin (WRR)
Fair
Simplest queuing method with the least CPU overhead
No congestion control
Transmits packets in the order of arrival
High volume traffic can suppress interactive flows
Default queuing for interfaces > 2Mbps (i.e. Ethernet)
[Graph: FIFO average queue depth as a function of load]
Generic Priority Queuing
Custom Queuing
RTP Priority Queuing
Low Latency Queuing (LLQ)
Stated requirement:
– “If <application> has traffic waiting, send it next”
Commonly implemented
– Defined behavior of IP precedence
Identify interesting traffic
– Access lists
Place traffic in various queues
Dequeue in order of queue precedence
[Diagram: traffic destined for the interface is classified – by protocol (IP, IPX, AppleTalk, SNA, DECnet, Bridge, etc.) or by incoming interface (E0, S0, S1, etc.) – into High, Medium, Normal, and Low queues, whose lengths are defined by the queue limit; absolute-priority scheduling feeds the transmit queue and output line]
Dequeue algorithm:
1. If the High queue is not empty, send a packet from High.
2. Else, if Medium is not empty, send a packet from Medium.
3. Else, if Normal is not empty, send a packet from Normal.
4. Else, if Low is not empty, send a packet from Low.
After every transmission the scan restarts from the High queue.
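The dequeue loop is a strict-priority scan, which a minimal sketch makes concrete (class and level names are illustrative):

```python
from collections import deque

class PriorityQueuing:
    """Absolute priority scheduling: always serve the highest non-empty queue."""
    LEVELS = ("high", "medium", "normal", "low")

    def __init__(self):
        self.queues = {level: deque() for level in self.LEVELS}

    def enqueue(self, level: str, packet) -> None:
        self.queues[level].append(packet)

    def dequeue(self):
        # Re-scan from High after every transmission -- which is exactly
        # why a busy High queue can starve everything below it.
        for level in self.LEVELS:
            if self.queues[level]:
                return self.queues[level].popleft()
        return None   # all queues empty

pq = PriorityQueuing()
pq.enqueue("low", "bulk-transfer")
pq.enqueue("high", "telnet")
print(pq.dequeue())  # -> telnet
```

The sketch also shows the starvation risk directly: as long as anything sits in the High queue, the Low queue is never visited.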
Needs thorough admission control
No upper limit for each priority level
High risk of a starvation effect for low-priority queues
! PQ definition
priority-list 1 protocol ip high tcp telnet
priority-list 1 protocol ip high list 100
priority-list 1 protocol ip medium lt 1000
priority-list 1 interface ethernet 0/0 medium
priority-list 1 default low
!
! PQ attached to interface
interface serial 2/1
 ip unnumbered loopback 0
 priority-group 1
!
! ACL definition
access-list 100 permit tcp host 10.0.0.1 any eq http
(Weighted Round Robin)
[Diagram: traffic destined for the interface is classified – by protocol (IP, IPX, AppleTalk, SNA, DECnet, Bridge, etc.) or by incoming interface – into up to 16 queues with lengths defined by the queue limit; each queue is allocated a proportion of link bandwidth (e.g. 1/10, 2/10, 3/10) and served by weighted round-robin scheduling based on byte counts, feeding the transmit queue and output line]
Unpredictable jitter
Fairness significantly depends on MTU and TCP window size
Complex calculations to achieve desired traffic proportions
Distribute bandwidth to 3 queues in proportion x:y:z, with packet sizes q_x, q_y, q_z:
1. Calculate a_x = x/q_x, a_y = y/q_y, a_z = z/q_z.
2. Normalize and round: a_x' = round(a_x / min(a_x, a_y, a_z)), and likewise for a_y' and a_z'.
3. Convert the obtained packet proportions into byte counts: bc_x = a_x' · q_x, bc_y = a_y' · q_y, bc_z = a_z' · q_z.
4. The actual bandwidth share of the i-th queue is: share_i = C · bc_i / (bc_1 + … + bc_n).
5. For a better approximation, the obtained byte counts can be multiplied by some positive whole number.
Starting with IOS 12.1, CQ employs a Deficit Round Robin algorithm and such byte-count tuning is no longer needed.
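Steps 1–4 are mechanical enough to script. A sketch using exact fractions to avoid rounding surprises (the 2:1:1 split and packet sizes are example values of mine, not from the slides):

```python
from fractions import Fraction

def cq_byte_counts(shares, pkt_sizes):
    """Turn a desired bandwidth ratio into Custom Queuing byte counts
    (steps 1-4 above)."""
    a = [Fraction(s, q) for s, q in zip(shares, pkt_sizes)]   # step 1: x/q_x ...
    a_min = min(a)
    a_norm = [round(ai / a_min) for ai in a]                  # step 2: normalize, round
    return [int(ai * q) for ai, q in zip(a_norm, pkt_sizes)]  # step 3: byte counts

# Target a 2:1:1 split across queues carrying 1500-, 500- and 64-byte packets:
counts = cq_byte_counts([2, 1, 1], [1500, 500, 64])
total = sum(counts)
shares = [c / total for c in counts]                          # step 4: actual shares
print(counts, [round(s, 3) for s in shares])
```

Note how the rounding in step 2 skews the result (about 0.46 : 0.31 : 0.24 instead of 0.50 : 0.25 : 0.25), which is exactly why step 5 suggests scaling the byte counts up for a better approximation.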
! CQ list definition
queue-list 1 protocol ip 1 tcp telnet
queue-list 1 protocol ip 2 list 100
queue-list 1 protocol ip 3 udp 53
queue-list 1 interface ethernet 0/0 4
queue-list 1 queue 1 byte-count 3000
queue-list 1 queue 2 byte-count 4500
queue-list 1 queue 3 byte-count 3000
queue-list 1 queue 4 byte-count 1500
queue-list 1 default 4
!
! CQ attached to interface
interface serial 2/1
 ip unnumbered loopback 0
 custom-queue-list 1
!
! ACL definition
access-list 100 permit tcp host 10.0.0.1 any eq http
Fair Queuing (Keshav, Demers, Shenker, and Zhang) simulates a Time Division Multiplexer, with one flow per channel.
[Diagram: packets 1–6 from different flows are interleaved through a Time Division Multiplexer]
Employs a virtual bit-by-bit round-robin (BRR) model
BRR dynamics are described by the round number R(t), which grows at the rate
dR/dt = C / N_ac(t)
where C is the link capacity and N_ac(t) is the number of active flows.
Servicing of the i-th packet of a flow, arriving at time t_i, starts (in the bit-by-bit model) at
S_i = MAX(F_(i-1), R(t_i))
and finishes at
F_i = S_i + P_i
where P_i is the packet length.
An additional parameter δ is added for priority assignment to inactive flows:
B_i = MAX(F_(i-1), R(t_i) + δ)
Packets are ordered for transmission according to their B_i values.
Enqueue traffic in the sequence the TDM would deliver it; as a result, the scheduler is as fair as the TDM
Low-bandwidth flows get
– As much bandwidth as they can use
– Timely service
High-bandwidth flows
– Interleave traffic
– Cooperatively share bandwidth
– Absorb latency
In TDM
– Channel speed determines message “duration”
In WFQ
– Multiplier on message length changes simulated message “duration”
Result:
– Flow’s “fair” share predictably unfair
[Diagram: traffic destined for the interface is classified into a configurable number of queues and served by weighted-fair scheduling into the transmit queue and output line]
Flow-based classification by:
– Source and destination address
– Protocol
– Session identifier (port/socket)
Weight determined by:
– Requested QoS (IP Precedence, RSVP)
– Frame Relay FECN, BECN, DE (for FR traffic)
– Flow throughput (weighted-fair)
Fair bandwidth per flow allocation
Low delay for interactive applications
Protection from ill-behaved sources
Flow classified by the following fields:
Source address
Source port
Destination address
Destination port
ToS
Weight of each flow (queue) depends on ToS: weight = 1/(precedence+1)
Bandwidth distributed in 1/weight proportions
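With weight = 1/(precedence + 1), each flow’s bandwidth fraction is proportional to precedence + 1. A quick sketch (the function name and example flows are mine):

```python
def wfq_shares(precedences):
    """Bandwidth fractions for concurrent flows: each flow gets 1/weight
    parts, i.e. (precedence + 1) parts of the total."""
    parts = [p + 1 for p in precedences]
    total = sum(parts)
    return [part / total for part in parts]

# One precedence-5 flow competing with two best-effort (precedence 0) flows:
print(wfq_shares([5, 0, 0]))  # -> [0.75, 0.125, 0.125]
```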
Packets are ordered according to the expected virtual departure time of their last bit.
Low volume flows have preference over high volume transfers.
Low volume flow is identified as using less than its share of bandwidth.
A special queue-length threshold is established; once it is exceeded, only low-volume flows can enqueue, and packets belonging to high-volume flows are dropped.
Requires more sorting than other approaches
[Graphs: without fair queuing, Telnet packets queue behind the FTP transfer over time t; with fair queuing, Telnet packets are interleaved ahead of the FTP packets]
interface serial 2/1
 ip unnumbered loopback 0
 fair-queue 32 128 0
! fair-queue <queue threshold (packets)> <maximal number of queues> <number of reservable queues>
Classifies only by UDP port range
Only even ports from the range are classified
Establishes upper limit via integrated policer
Excess traffic dropped during congestion periods
RTP PQ has priority over LLQ
interface serial 2/1
 ip unnumbered loopback 0
 ip rtp priority 16384 16383 256
! ip rtp priority <starting UDP port> <range length> <bandwidth limit (kbps)>
Implemented using MQI
Very rich classification criteria (class-map)
Establishes upper limit via integrated policer
Excess traffic dropped during congestion periods
IOS 12.0(5)T:

! Class definitions
class-map match-all voice
 match access-group name voip
!
! LLQ policy definition
policy-map llq
 class voice
  priority 30
 class class-default
  fair-queue 64
!
! LLQ policy attached to interface
interface serial 2/1
 ip unnumbered loopback 0
 service-policy output llq
!
! ACL definition
ip access-list extended voip
 permit ip host 10.0.0.1 any
Based on the same algorithm as WFQ
Weights can be manually configured
Makes it easy to specify guaranteed bandwidth for a class
Configuration based on Cisco MQI
IOS 12.0(5)T:

! Class definitions
class-map match-all premium
 match access-group name premium-cust
class-map match-all low-priority
 match protocol napster
!
! QoS policy definition
policy-map cbwfq-sample
 class premium
  bandwidth 512
 class low-priority
  shape average 128
  shape peak 512
 class class-default
  fair-queue 64
!
! QoS policy attached to interface
interface serial 2/1
 ip unnumbered loopback 0
 max-reserved-bandwidth 85
 service-policy output cbwfq-sample
!
! ACL definition
ip access-list extended premium-cust
 permit ip host 10.0.0.1 any
Hierarchical design (IOS 12.1(5)T):

class-map match-all premium
 match access-group name premium-cust
class-map match-all voice
 match ip precedence flash
!
policy-map total-shaper
 class class-default
  shape average 1536
  service-policy class-policy
policy-map class-policy
 class premium
  bandwidth 512
 class voice
  priority 64
 class class-default
  fair-queue 128
!
interface fastethernet 1/0
 ip unnumbered loopback 0
 max-reserved-bandwidth 85
 service-policy output total-shaper
!
ip access-list extended premium-cust
 permit ip host 10.0.0.1 any
Only two levels of hierarchy are supported
The set command is not supported in the child policy
Shaping is allowed only in the parent policy
LLQ can be configured in either the child or the parent policy, but not in both
FQ is allowed only in the child policy
[Graph: average throughput oscillating well below link capacity over time – TCP global synchronization]
Packet drops hit all TCP sessions simultaneously
High probability of multiple drops from the same TCP session
Drops fall uniformly on high-volume and interactive flows
Result: low average throughput!
Developed by Van Jacobson in 1993
Starts randomly dropping packets before actual congestion occurs
Keeps average queue depth low
Increases average throughput
[Graph: with RED, average throughput stays close to link capacity over time]
[Graph: drop probability p vs. average queue depth q_avg – Tail Drop jumps from 0 to 1 at q_max; RED rises linearly, with an adjustable slope, between the min and max thresholds]
RED Parameters:
min – minimal threshold after which RED starts packet drops. Minimal recommended value is 5 packets.
max – maximal threshold after which all packets are dropped. Recommended value is 2–3 times min.
Mark probability denominator – sets the packet drop probability at the max average queue depth. Optimal value – 0.1.
Exponential weighting factor n – determines the level of backward value-dependence in the average queue depth calculation:
q_avg = q_old · (1 − 2^−n) + q_cur · 2^−n
General recommendation: n = 9.
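Both formulas are easy to check numerically. A sketch with illustrative threshold values, treating the “mark probability denominator” of 10 as the fraction 0.1:

```python
def red_avg(q_old: float, q_current: int, n: int = 9) -> float:
    """Exponentially weighted average queue depth with weighting factor n."""
    return q_old * (1 - 2 ** -n) + q_current * 2 ** -n

def red_drop_probability(q_avg, min_th, max_th, mark_prob=0.1):
    """0 below min, rising linearly to mark_prob at max, then drop everything."""
    if q_avg < min_th:
        return 0.0
    if q_avg >= max_th:
        return 1.0
    return mark_prob * (q_avg - min_th) / (max_th - min_th)

# Halfway between thresholds of 32 and 64 packets, drop probability is 0.05;
# with n = 9 the average reacts slowly: a burst to 512 barely moves it.
print(red_drop_probability(48, 32, 64))
print(round(red_avg(q_old=10.0, q_current=512), 2))
```

The slow-moving average is deliberate: it lets short bursts through while still reacting to sustained congestion.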
In TCP, the spacing of ACKs and the window size in the ACKs control the transmitter’s rate.
Rate Control manipulates the ACKs as they pass through the rate control device by:
– Adjusting the size of TCP ACK window
– Inserting new ACKs
– Re-spacing existing ACKs
Rate Control works only with TCP; other methods, such as Token Bucket, must be used with UDP.
Rate Control violates the protocol layering design, as it allows network devices to manipulate a higher-layer protocol’s operation. Nevertheless, it usually functions well and provides fine-grained control.
Example:
[Diagram: as ACKs flow from the receiver back through the rate-control device, the advertised window is adjusted (e.g. between 8000 and 2000) to throttle the transmitter]
Modified version of RED
Each weight selects its own set of parameters: min, max
Weight depends on the ToS field value
Interactive flows are preserved
Interface based:

interface serial 2/1
 ip unnumbered loopback 0
 random-detect
 random-detect 0 32 64 20
 random-detect 1 32 64 20
 random-detect 2 32 64 20
 random-detect 3 32 64 20
 …
! random-detect <precedence> <min> <max> <mark probability denominator>
MQI based:

policy-map red
 class class-default
  random-detect
  random-detect 0 32 64 20
  random-detect 1 32 64 20
  random-detect 2 32 64 20
  random-detect 3 32 64 20
  …
!
interface Serial2/1
 ip unnumbered loopback 0
 service-policy output red
! random-detect <precedence> <min> <max> <mark probability denominator>

WRED is incompatible with the LLQ feature!
[Diagram: a 1500-byte jumbogram queued ahead of a voice packet adds ~190 ms of serialization delay at 64 kb/s – fragmentation and interleaving is recommended for links < 128 kb/s]
Supported interfaces:
Multilink PPP
Frame Relay DLCI
ATM VC
MLP version:

interface virtual-template 1
 ip unnumbered loopback 0
 ppp multilink
 ppp multilink interleave
 ppp multilink fragment-delay 30
 ip rtp interleave 16384 1024 512
 …
End-to-end QoS signaling protocol
Used to establish dynamic reservations over the network
Always establishes simplex reservation
Supports unicast and multicast traffic
In practice, enforced using the WFQ and WRED mechanisms
Reservation Types:
Guaranteed Rate (uses WFQ and LLQ)
Controlled Load (uses WRED)
Filter styles, by sender selection and reservation sharing:
           Distinct              Shared
Explicit   Fixed Filter (FF)     Shared Explicit (SE)
Wildcard   –                     Wildcard Filter (WF)
QoS policy can be shared inside single AS or among different ASs.
Community attribute is usually used for color assignments
Prevents manual policy changes in network devices
Router A:

ip bgp-community new-format
!
router bgp 10
 neighbor 10.0.0.1 remote-as 20
 neighbor 10.0.0.1 send-community
 neighbor 10.0.0.1 route-map cout out
!
route-map cout permit 10
 match ip address 20
 set community 60:9
!
access-list 20 permit 192.168.0.0 0.0.0.255

Router B:

ip bgp-community new-format
!
router bgp 20
 neighbor 10.0.0.2 remote-as 10
 table-map mark-pol
!
route-map mark-pol permit 10
 match community 1
 set ip precedence flash
!
ip community-list 1 permit 60:9
!
interface Serial 0/1
 ip unnumbered loopback 0
 bgp-policy source ip-prec-map
Multiprotocol Label Switching (MPLS)
Frame Relay QoS
ATM QoS
Distributed Queuing Algorithms
Multicast
QoS is not an exotic feature any more
QoS allows specific applications (VoIP, VC) to share network infrastructure with best-effort traffic
QoS in IP networks lets them deliver service guarantees natively, avoiding the need for Frame Relay or ATM