On AQM for Modern DOCSIS-Based Networks
January 20, 2014
DRAFT!!!! Contact Author: James Martin (jim.martin@cs.clemson.edu)
Table of Contents
Executive Overview
Introduction
Background
Resource Management Mechanisms
Techniques Under Study
Illustrating Delay-Based AQM
Bigger Picture
Moving Forward
References
Appendix 1 AQM Algorithms
Appendix 2 VoIP Performance Metric
Appendix 3 Gaming Application Performance Metric
Executive Overview
DOCSIS-based networks are evolving rapidly. The recent DOCSIS 3.1 standard
paves the way for downstream speeds of up to 10 Gbps and upstream speeds of up to
1 Gbps. Service rates offered to subscribers will certainly increase. Further complicating
matters is the convergence of IP, the Internet, and traditional cable broadcast. What is
clear is that bandwidth management is crucial to the success of the cable industry.
In this white paper we focus on one component available to operators for managing
D3.0/3.1 bandwidth: active queue management (AQM). We overview the motivations
for and the evolution of AQM and, using simulation analysis, compare and contrast
recently proposed AQM schemes with established methods. We focus on the downstream
direction; a second report, planned for the fall of 2014, will focus more directly on the
upstream. An important contribution of this report is to confirm that AQM is not a
panacea: the operator must still have robust subscriber management practices that ensure
capacity problems are identified and addressed as they emerge.
Introduction
{JJM: Describe D3.0 and now D3.1. Focus on downstream management, in particular
AQM }
Resource allocation is the process by which network elements try to meet the competing
demands that applications have for network resources. Resource allocation mechanisms
can be classified along several dimensions, including router versus host centric,
reservation versus feedback, and window versus rate control. The specific mechanisms
used in a network depend on the requirements of the network, or more specifically on the
service model supported by the network. However, for any choice of resource allocation
method, the routers within the network will implement some form of queue scheduling
discipline, which decides which packet in the queue(s) is selected for transmission next,
and a queue drop policy, which decides which packet is dropped when the interface
becomes congested.
In the Internet model, resource allocation mechanisms are host centric, feedback based,
and use window control. Routers generally apply a first-come-first-served (FCFS)
scheduling discipline at the queues that build at output interfaces. When a packet is
forwarded to an output interface whose queue is full, the arriving packet is
dropped. Dropped packets provide implicit signals of network congestion back to the end
points. The TCP congestion control algorithms provide the basic elements of control for
managing congestion and for sharing network resources among competing flows.
For routers with queue management based on FCFS scheduling and a tail-drop packet
discard policy, the maximum size of the queue is a readily accessible ‘control knob’ that
can impact performance. Queues with large capacity are useful for absorbing the inherent
‘burstiness’ associated with data traffic. However, large queues can significantly add to
end-to-end packet delays. Finding the optimal queue capacity is difficult, as the best
setting is likely to depend on the specific set of flows and the underlying application
requirements.
More advanced queue disciplines can provide more granular control of router resources.
For example, if traffic can be grouped into classes, a priority queue algorithm can be
used. The challenge with strict priority queuing is to ensure that low priority traffic does
not get starved. Algorithms such as weighted round robin, deficit round robin, and
weighted fair queuing have been developed to provide a framework to implement a
desired policy.
Router-based congestion control can also be implemented through alternative packet drop
algorithms. For example, the Random Early Detection (RED) algorithm manages the
queue in a more adaptive manner than a simple tail-drop policy by randomly dropping
packets with a probability that grows as the average queue level grows. The drop
probability increases linearly toward a maximum drop probability, maxp, as the average
queue level grows from a minimum threshold (minth) to a maximum threshold (maxth). It
has been shown that RED provides the following benefits: a reduced level of packet loss
by better tolerating bursty traffic, reduced queue delay by maintaining a lower average
queue level, and improved fairness among competing TCP flows. However, it has also
been shown that the average queue delay with RED is sensitive to traffic loads and to the
RED parameter settings [CJOS00, MBDL99, OLW99]. Many modifications to RED have
been proposed to address this issue, including Adaptive RED and BLUE [FBS01, FSK02].
Adaptive RED adapts the maxp parameter to keep the average queue level between minth
and maxth. BLUE, on the other hand, asserts that using queue levels to detect congestion
provides vague information about the level or nature of the congestion, forcing RED to
require a range of parameter settings to operate optimally. BLUE replaces the
RED congestion decision with an algorithm that is based on packet loss and link
utilization. BLUE claims to reduce packet loss rates (compared to RED) even when
operating with queues configured with small capacities.
In spite of the volume of academic research in the area of queue management, little
attention has been paid to the issue in cable access networks. In [DHGS07] the authors
show that the upstream queue delay of cable networks can be on the order of several
seconds. To the best of our knowledge, applying AQM to better manage upstream queues
in a cable modem has not been explored. In this paper we document a study we have
performed that provides insight into this issue.
In our study we assume the network will provide differentiated service levels, although
the majority of traffic is likely to be best effort. Therefore, the primary focus of our study
will be on strategies to manage best effort traffic. The initial focus of our study will be
on downstream traffic.
While network operators must deal with managing bandwidth across possibly very large
networks, our study is limited to a single DOCSIS domain. In subsequent sections we
show that the models and analysis used in the study are applicable to larger networks.
This report is organized as follows. The next section provides the necessary background
to explain the intent and impacts of AQM in DOCSIS networks. After this, we illustrate
two recently introduced delay-based AQM algorithms, CoDel and PIE [ ]. Finally, we
illustrate scenarios where these schemes might have difficulty and conclude by outlining
a possible direction for addressing them.
Background
Resource allocation is the process by which network elements try to meet the competing
demands that applications have for network resources. Three broad dimensions to the
problem of network resource management are:

- Service models: At the highest level, a network can provide any combination of
guaranteed services, differentiated services, or a simple best effort service. A
guaranteed service provides services that meet specific performance criteria. A
differentiated service typically allows traffic to be divided into classes and has
the network treat each class differently when subject to various congestion
situations. Best effort treats all data the same. Each model requires different
resource allocation strategies.

- Scope or location of the management problem: Resource allocation mechanisms
might operate at a local router level or at a global, network-wide level. The
control mechanisms generally differ depending on the scope; however, they must
operate in unison to achieve overall allocation goals and objectives.

- Time scales: Different strategies, each with a different time scale of control, are
likely to be operating concurrently. The range of time scales includes:
  - Microseconds: Packet scheduling disciplines determine which packets get
    serviced when a link becomes congested and also how the queue is managed.
  - Milliseconds: End-to-end congestion control algorithms, such as those
    supported by TCP stacks, manage how a flow reacts to signs of network
    congestion.
  - Minutes to hours: Traffic management methods, such as Comcast’s Fairshare
    management, modify the allocation of resources using control procedures that
    operate on relatively large time scales.
  - Days or weeks: Admission control and capacity planning methods are used to
    ensure that the network is adequately provisioned to meet throughput and
    delay requirements.
Resource Management Mechanisms
Figures 2 and 3 illustrate two architectural approaches for managing bandwidth in a cable
system. Figure 2 represents the situation when external traffic management devices are
used to manage subscriber traffic. Products from Procera and Sandvine are designed to
support this architecture. Figure 3 illustrates an alternative approach where bandwidth
management is performed by the CMTS.
Inbound packets (i.e., packets that arrive from the Internet that are destined for a
subscriber’s network) that arrive at the access network are associated with a service flow.
A service flow might be based on combinations of source and destination addresses and
ports, or based on traffic associated with specific subscribers. We define a service flow to
be equivalent to a DOCSIS unicast service flow which could represent all downstream
(or upstream) traffic destined for one subscriber. Or, the service flow can be more
specific, involving one or more application flows consumed (or generated) by a
subscriber. Further, we define an aggregate service flow as a set of service flows. For
example, a bandwidth strategy might be based on two levels of priority (e.g., service
flows are either high or low priority). Packets that arrive at the CMTS or external traffic
management device are classified and possibly queued in a service flow or an aggregated
service flow queue. The queues are managed and serviced using multiple levels of
bandwidth management. We overview these mechanisms in the following subsections,
using the timescale of control to organize the discussion.
Time Scales of Microseconds
A packet scheduling discipline is required at any link (whether it’s real or virtual) that
becomes congested. On a packet-by-packet basis, the scheduler selects the next packet to
transmit and/or decides which packet will be dropped in certain situations. There are at
least five degrees of freedom involved with designing packet scheduling disciplines
[KES97]:
- Number of priority levels
- Whether each level is work conserving or non-work conserving
- Degree of aggregation within a level
- Service order within a level
- Queue management technique
Flows can be classified and grouped by level of priority. A scheduling discipline might
support hard priorities. For example VoIP traffic might be considered high priority and
other traffic low priority. Therefore, two queues are used. When a link is congested,
VoIP traffic is served first. Low priority traffic is scheduled only when VoIP is not
waiting for service. This behavior can starve lower priority traffic. Flow-based
scheduling can provide better fairness.
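
To make the starvation risk concrete, the following sketch shows a minimal two-queue strict-priority scheduler (an illustrative Python fragment of ours, not code from any product or from the study's simulator): low priority packets are served only when no high priority packet is waiting.

```python
from collections import deque

class StrictPriorityScheduler:
    """Two-level strict priority: high-priority (e.g., VoIP) always wins."""

    def __init__(self):
        self.high = deque()  # e.g., VoIP packets
        self.low = deque()   # all other traffic

    def enqueue(self, packet, high_priority=False):
        (self.high if high_priority else self.low).append(packet)

    def dequeue(self):
        # Low-priority traffic is served only when no high-priority packet is
        # waiting -- sustained high-priority load can therefore starve it.
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None  # link idle; the discipline never idles while packets wait
```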
A work conserving scheduling discipline is one that always assigns a packet for service
when the link becomes available. A non-work conserving discipline might choose not to
serve a packet waiting in the queue even if the link is idle. Arguably, service rate limits
are a form of non-work conserving scheduling.
The degree of aggregation within a level determines how many flows are combined
together by the scheduler when determining the service order. This can range from
combining all flows into a single first-come-first-served (FCFS) queue to using a separate
queue for each flow. The tradeoff involves complexity versus maintaining isolation
between flows.
Of particular relevance to this study are the complementary aspects of service order and
queue management. Service order within a level determines the manner in which the
scheduler serves packets from flows (or aggregates of flows) at the same priority level.
Queue management controls how packets are to be dropped when an output queue
becomes congested.
Packet scheduling disciplines
The building blocks of a scheduling discipline are the service order selection algorithm
and the queue management technique. The number of priority levels, the choice of work
conserving or non-work conserving, and the degree of aggregation each impose specific
requirements on the choice of service order and queue management techniques. The
overriding requirement will come from the service model(s) the network must support.
In this section, we overview relevant background associated with packet scheduling
disciplines. For brevity, we focus the discussion with the following assumptions:
- The service model requires the network to provide a best effort transport service
  with differentiated levels of service; therefore, there will be flows of different
  priority levels.
- As the service model does not include guaranteed services, the scheduling
  discipline will be work conserving.
- The degree of aggregation is an integral attribute of the scheduling discipline.
Service order:
It is well known that first-come-first-served scheduling is not fair, as connections receive
service roughly in proportion to the speed at which they send data. An acceptable
fairness model is ‘max-min’ fair, which was applied to computer networks originally
(and independently) by Jaffe [JAF81] and Hayden [HAY81]. The max-min criterion
dictates that the smallest allocation must be as large as possible and, subject to that, the
second smallest allocation must be as large as possible, continuing until no further
increases are possible. Max-min fair allocation can be achieved with an ideal work conserving
scheduling discipline called generalized processor sharing (GPS) [DKS89]. GPS services
packets from per-flow queues, visiting each queue in round-robin order and serving an
infinitesimally small amount of data.
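
To illustrate the max-min criterion itself, the following sketch (a hypothetical helper of ours, independent of GPS) computes a max-min fair division of a link's capacity among a set of per-flow demands: flows demanding no more than the current equal share are fully satisfied, and the leftover capacity is re-divided among the rest.

```python
def max_min_fair(capacity, demands):
    """Return per-flow allocations that are max-min fair for the given demands."""
    alloc = [0.0] * len(demands)
    remaining = list(range(len(demands)))
    cap_left = float(capacity)
    while remaining:
        share = cap_left / len(remaining)
        # Flows demanding no more than the fair share are fully satisfied.
        satisfied = [i for i in remaining if demands[i] <= share]
        if not satisfied:
            # Everyone wants more than the fair share: split equally and stop.
            for i in remaining:
                alloc[i] = share
            break
        for i in satisfied:
            alloc[i] = demands[i]
            cap_left -= demands[i]
        remaining = [i for i in remaining if i not in satisfied]
    return alloc

# Example: 10 Mbps shared by demands of 2, 4, and 8 Mbps -> [2.0, 4.0, 4.0]
print(max_min_fair(10, [2, 4, 8]))
```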
As it is not possible to implement GPS, a tremendous amount of research has explored
efficient algorithms that approximate GPS. The tradeoff that is typically studied is
complexity versus fairness.
There are two main approaches: timestamp-based algorithms and round-robin algorithms.
Weighted fair queueing (WFQ), a timestamp algorithm, approximates GPS by computing
the time a packet would complete service under GPS and then serving packets in order of
this virtual finish time. Computing the finish times is generally complex. A number of
low complexity timestamp packet scheduling algorithms have been proposed, including
self-clocked fair queueing (SCFQ) and stochastic fair queueing (SFQ) [GOL94, MC91].
SCFQ approximates WFQ without having to simulate GPS to track the virtual finish
times. SFQ eliminates the need for per-flow queues by using a hash to aggregate service
flows, directing the aggregated traffic into one of a number of FCFS queues [MC91].
Round robin algorithms, such as deficit round robin (DRR), are considered scalable while
adequately approximating GPS [SV96].
Many of the proposed scheduling ideas have been modified (or were created) to support
service differentiation. It is well known that servicing multiple FCFS queues based on
hard priorities is not fair as low priority traffic might be starved. WFQ supports higher
priority flows by assigning them a higher weight allowing different sessions to receive
different allocations relative to other sessions. Weighted DRR can provide higher
priority flows with a larger quantum amount (i.e., the amount of service provided per
round) [KES97,SV96].
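
As an illustration of how a quantum controls per-flow service, the sketch below is a minimal weighted DRR loop in Python (a simplified illustration of the idea in [SV96], not the scheduler used in this study); flows with larger quanta receive proportionally more bytes of service per round.

```python
from collections import deque

class WeightedDRR:
    """Minimal weighted deficit round robin over per-flow byte queues."""

    def __init__(self, quanta):
        # quanta[i] = bytes of credit flow i earns each round (its weight).
        self.quanta = quanta
        self.deficit = [0] * len(quanta)
        self.queues = [deque() for _ in quanta]   # each entry: packet length in bytes

    def enqueue(self, flow, pkt_len):
        self.queues[flow].append(pkt_len)

    def next_round(self):
        """Serve one round; return a list of (flow, pkt_len) transmissions."""
        served = []
        for i, q in enumerate(self.queues):
            if not q:
                continue
            self.deficit[i] += self.quanta[i]
            # Send packets while the head packet fits in the accumulated credit.
            while q and q[0] <= self.deficit[i]:
                pkt = q.popleft()
                self.deficit[i] -= pkt
                served.append((i, pkt))
            if not q:
                self.deficit[i] = 0   # idle flows do not bank credit
        return served
```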
A slightly different approach involves multiple levels of resource sharing, allowing
aggregate traffic to be treated on a class (or organizational) basis and then possibly on a
more granular basis within a particular class. The idea of hierarchical link sharing, also
referred to as class-based queuing, avoids starvation of low priority traffic by assigning a
minimum link allocation to top level classes [FJ95]. The enforcement of bandwidth used
by a class can be achieved using rate control or a form of round robin. Other examples
include hierarchical fair queueing and class-based weighted fair queueing [BZ97, CIS-A].
As an example, 75% of a link bandwidth might be allocated to high priority class traffic,
25% to low priority traffic. Packets could be selected within a class using a flow-based
scheduling algorithm.
Queue management:
While much of the Internet uses drop-tail, more complex algorithms have been proposed
to manage router queues. For example, the Random Early Detection (RED) algorithm
manages the queue in a more adaptive manner than a simple tail-drop policy by randomly
dropping packets with a probability that grows as the average queue level grows [FJ93].
The drop probability increases linearly toward a maximum drop probability, maxp, as the
average queue level grows from a minimum threshold (minth) to a maximum threshold
(maxth). It has been shown that RED provides the following benefits: a reduced level of
packet loss by better tolerating bursty traffic, reduced queue delay by maintaining a lower
average queue level, and improved fairness among competing TCP flows. However, it
has also been shown that the average queue delay with RED is sensitive to traffic loads
and to the RED parameter settings [CJOS00, MBDL99, OLW99]. Many modifications to
RED have been proposed to address this issue, including Adaptive RED and BLUE
[FBS01, FSK02]. Adaptive RED adapts the maxp parameter to keep the average queue
level between minth and maxth. BLUE, on the other hand, asserts that using queue levels
to detect congestion provides vague information about the level or nature of the
congestion, forcing RED to require a range of parameter settings to operate optimally.
BLUE replaces the RED congestion decision with an algorithm that is based on packet
loss and link utilization. BLUE claims to reduce packet loss rates (compared to RED)
even when operating with queues configured with small capacities. An alternative
congestion decision can take packet queue delay into account. For example, Procera’s
BROWN algorithm is similar to RED, except that the drop probability is increased as the
queue delay approaches a maximum tolerated delay (the default is 30 ms) [Procera]. We
will refer to this form of AQM as DRED (delay-based RED). The results presented in this
paper include simulations that involve RED, ARED, and DRED. Refer to Appendix 1 for
a detailed description of the RED, Adaptive-RED, and DRED models implemented in the
simulator.
Variants of AQM have been developed to support differentiated services. Weighted RED
(WRED) adjusts the drop threshold parameters based on a packet’s IP precedence bits
[CIS-B]. RED In and Out (RIO) drops packets based on two threshold settings: one for
packets that are considered ‘in profile’ and one for packet’s that are considered ‘out of
profile’ [CF98]. A further blurring between packet scheduling and queue management
occurs with algorithms such as Flow-RED (FRED), and Stochastic Fair Blue (SFB)
[LM97, FKS01]. These algorithms focus mainly on fairness, primarily to address the
fairness problem when TCP traffic competes with unresponsive UDP flows. FRED
maintains a separate drop rate for each flow, allowing the drop rate for high bandwidth
flows to be higher thereby protecting ‘fragile’ or low bandwidth, latency sensitive flows.
SFB employs efficient bloom filters to identify high bandwidth, non-responsive flows
without having to maintain significant per flow state.
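
The sketch below illustrates the two-profile idea behind RIO/WRED in a simplified form; the thresholds and drop probabilities are hypothetical values chosen for illustration, not settings taken from [CF98] or [CIS-B].

```python
import random

# Hypothetical two-profile thresholds: out-of-profile packets face lower
# thresholds and a higher maximum drop probability than in-profile packets.
PROFILES = {
    "in":  {"minth": 40, "maxth": 80, "maxp": 0.05},
    "out": {"minth": 10, "maxth": 40, "maxp": 0.20},
}

def drop_decision(avg_queue, profile):
    """Return True if the arriving packet should be dropped (simplified RIO/WRED)."""
    p = PROFILES[profile]
    if avg_queue < p["minth"]:
        return False
    if avg_queue >= p["maxth"]:
        return True
    drop_prob = p["maxp"] * (avg_queue - p["minth"]) / (p["maxth"] - p["minth"])
    return random.random() < drop_prob
```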
Time Scales of Milliseconds
Most traffic generated by broadband access subscribers is TCP. YouTube, ESPN360,
and other popular video streaming content providers have adopted TCP-based
progressive downloading. In addition to downloading video, the two other most common
applications are P2P and web browsing. Therefore, TCP is the dominant transport
protocol, which in turn makes TCP’s congestion control algorithms a crucial component
of bandwidth management over access networks.
For the purposes of this study, we make the following observations about TCP:
- TCP is an end-to-end protocol that attempts to share network resources fairly and
  to prevent congestion. There are three cases to consider:
  - When the access link is not the bottleneck, the access network capacity is not
    likely to be divided fairly across the active flows.
  - When the access link is congested, it is likely that some flows have additional
    bottlenecks and, therefore, broadband bandwidth will not be shared fairly.
  - When the access link is congested and is the primary bottleneck for the
    majority of TCP flows, TCP congestion control again will not share the access
    link fairly, as the allocation depends on a flow’s RTT as well as on specific
    application behaviors such as the number of TCP connections the application
    might maintain (see the sketch after this list).
- It is likely that downstream TCP throughput will be constrained by the flow of
  ACK packets in the upstream direction.
- Traffic such as UDP is considered ‘unresponsive’ because it will not react and
  adjust its transmission rate when packet loss occurs.
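
The RTT dependence can be seen with the widely used Mathis et al. steady-state approximation of TCP throughput, roughly MSS / (RTT * sqrt(2p/3)) for loss probability p. The calculation below is our own back-of-the-envelope sketch with arbitrary example numbers: two flows seeing the same loss rate but different RTTs receive very different shares.

```python
from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. steady-state approximation: MSS / (RTT * sqrt(2p/3)), in bits/s."""
    return 8 * mss_bytes / (rtt_s * sqrt(2 * loss_rate / 3))

# Two flows sharing the same bottleneck loss rate (1%) but different RTTs:
# the 20 ms flow achieves roughly 4x the rate of the 80 ms flow.
for rtt in (0.020, 0.080):
    rate = tcp_throughput_bps(1460, rtt, 0.01)
    print(f"RTT {rtt * 1000:.0f} ms -> {rate / 1e6:.1f} Mbps")
```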
Time Scales of Minutes or Hours
Load balancing and automated optimization capabilities are designed to supplement the
controls that operate on smaller time scales. Most modern cable systems have equipment
that supports automated load balancing.
Time Scales of Days or Weeks
A network operator might choose to manage subscribers, at least in part, by consumption
levels. For example, a service plan offered to a subscriber might include a maximum
allowed usage level. Once the subscriber reaches this level (e.g., 10 GBytes of data
uploaded in any month), the subscriber is switched to a lower service rate. Again, most
equipment comes with extensive support for this.
Network operators typically analyze network statistics over large timescales to find
time-of-day or day-of-week usage patterns. This information can help the operator find
under-provisioned areas of the network. The data might also be used to help the provider
forecast or predict future demand. However, the tremendous changes that are in effect
make it difficult for providers to accurately assess network performance and to conduct
reliable capacity planning.
A broadband access network likely requires bandwidth management operating at all time
scales. Due to the inherent complexity of bandwidth management and due to sometimes
unseen underlying interactions between the different levels of bandwidth management, it
is difficult to know which management schemes and their respective configurations are
the best choice.
[Figure 1. Conceptual System Diagram - Downstream DOCSIS. Elements shown: a
traffic arrival process, downstream service flows SF1 through SFn, a regulator, a
scheduler, and a single aggregate queue.]
Techniques Under Study
Our study focuses on two established AQM methods, RED and ARED, as well as two
recently introduced techniques: CoDel and PIE [ ]. We also discuss the use of multiple
queues….
RED
We have implemented the RED algorithm as defined in [FJ93]. In our implementation,
the algorithm is executed each time a packet arrives from the subscriber network and has
to be queued in the upstream service flow queue. The algorithm is summarized in Figure
A1-1. The parameters are the queue capacity (Qmax), the maximum drop probability for
an early drop (maxp), and the average queue weight factor (wp). The queue capacity is an
experimental parameter that ranges from 4 to 1024 packets. The RED maxth parameter is
set to the queue capacity, and the minth parameter is set to 1/10 of the queue capacity.
The queue capacity and the minth and maxth parameters are expressed in packets. The
maxp parameter specifies the maximum drop probability that will ever be applied. We
experimented with different values and selected a default value of 1.0 (see footnote 1).
The weight factor is the time constant used in the low pass filter that maintains the
average queue level. We kept wp at the recommended value of 0.002.
Figure A1-1. The RED Algorithm
Each time a packet arrives at the queue, the average queue level is updated using the
following low pass filter:

    avg = (1 - wp) * avg + wp * currentQueueLevel

For each packet arrival, the current packet drop probability (pb) and the actual drop
probability (pa) are computed as follows:

    pb = maxp * (avg - minth) / (maxth - minth)
    pa = pb / (1 - count * pb)
The count variable reflects the number of consecutive times that packets arrive when the
queue length is between the minth and maxth thresholds and the algorithm decides not to
drop the packet. As count increases, it inflates the drop probability. In our experiments
the maximum drop probability is set to 1.0. The original RED algorithm used a maximum
drop probability of 0.02; however, it has since been pointed out that the value should be
much higher [FLO97]. This parameter allows the sensitivity of the algorithm to be
controlled. The results that we report are based on setting maxp to its most sensitive
level of 1.0.

Footnote 1: We also evaluated a RED configuration with minth set to 4 packets, maxth
set to 50% of the buffer capacity, and maxp set to 0.50. This setting causes the cable
modem to react more aggressively to upstream congestion, resulting in lower average
queue levels. Our implementation supports gentle mode, where the drop rate increases
linearly from maxp to 1 in proportion to the average queue size when it falls between
maxth and 2*maxth.
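
For readability, the fragment below restates the arrival-time processing of Figure A1-1 as a small Python sketch. The parameter values shown are one example configuration consistent with the settings described above (a hypothetical 100 packet queue), and the guard on count*pb is added only to keep the sketch numerically safe.

```python
import random

def red_on_arrival(state, params, current_queue_len):
    """Simplified RED drop test executed on each packet arrival (cf. Figure A1-1)."""
    # Low pass filter over the instantaneous queue length.
    state["avg"] = (1 - params["wp"]) * state["avg"] + params["wp"] * current_queue_len
    avg = state["avg"]
    if avg < params["minth"]:
        state["count"] = 0
        return False                                  # enqueue the packet
    if avg >= params["maxth"]:
        state["count"] = 0
        return True                                   # forced drop
    state["count"] += 1
    pb = params["maxp"] * (avg - params["minth"]) / (params["maxth"] - params["minth"])
    # Inflate the probability with the number of consecutive non-drops; the guard
    # keeps the sketch well defined when count * pb reaches 1.
    pa = 1.0 if state["count"] * pb >= 1.0 else pb / (1 - state["count"] * pb)
    if random.random() < pa:
        state["count"] = 0
        return True                                   # early (probabilistic) drop
    return False

# Example configuration: 100 packet queue, maxth = capacity, minth = capacity / 10.
state = {"avg": 0.0, "count": 0}
params = {"wp": 0.002, "maxp": 1.0, "minth": 10, "maxth": 100}
```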
DRED (delay-based RED)
Figure A1-3 describes our DRED AQM algorithm. The algorithm is identical to RED
except that the measure of congestion is based on the average queue delay rather than the
average queue length.
- On packet arrival, calculate the average queue delay (avg).
- If minth <= avg < maxth:
  - count++
  - Calculate the drop probability (Pdrop).
  - With probability Pdrop, drop the arriving packet.
  - If the packet is dropped, set count = 0.
- Else if maxth <= avg:
  - Drop the arriving packet.
  - Set count = 0.
Figure A1-3. DRED AQM Algorithm
On a packet arrival, if the packet can be serviced immediately, the filtered queue delay
average is updated as follows (wp is the filter time constant):

    avg = (1 - wp) * avg + wp * 0

On a packet departure from the queue, the packet’s queueing time (Qdelay) is measured
and the average is updated as follows:

    avg = (1 - wp) * avg + wp * Qdelay

For each packet arrival, the current packet drop probability (pb) and the actual drop
probability (pa) are computed as follows:

    pb = maxp * (avg - minth) / (maxth - minth)
    pa = pb / (1 - count * pb)
The count variable reflects the number of consecutive times that packets arrive when the
queue delay is between the minth and maxth thresholds and the algorithm decides not to
drop the packet. As count increases, it inflates the drop probability.
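
For comparison, the fragment below sketches the DRED bookkeeping described above. The drop test is the same as in the RED sketch, except that avg, minth, and maxth are expressed in milliseconds of queue delay; the 5 ms and 30 ms thresholds shown are illustrative assumptions, not settings from our experiments.

```python
import random

def dred_update_on_arrival_if_idle(state, params):
    """Arriving packet can be served immediately: fold a queue delay of 0 into avg."""
    state["avg"] = (1 - params["wp"]) * state["avg"]          # + wp * 0

def dred_update_on_departure(state, params, qdelay_ms):
    """Packet leaves the queue: fold its measured queueing delay (ms) into avg."""
    state["avg"] = (1 - params["wp"]) * state["avg"] + params["wp"] * qdelay_ms

def dred_drop_test(state, params):
    """Same drop logic as RED, but minth/maxth are delay thresholds in milliseconds."""
    avg = state["avg"]
    if avg < params["minth"]:
        state["count"] = 0
        return False
    if avg >= params["maxth"]:
        state["count"] = 0
        return True
    state["count"] += 1
    pb = params["maxp"] * (avg - params["minth"]) / (params["maxth"] - params["minth"])
    pa = 1.0 if state["count"] * pb >= 1.0 else pb / (1 - state["count"] * pb)
    if random.random() < pa:
        state["count"] = 0
        return True
    return False

# Illustrative delay thresholds only (not the settings used in our experiments):
dred_params = {"wp": 0.002, "maxp": 1.0, "minth": 5.0, "maxth": 30.0}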
ARED (Adaptive RED)
CoDel
PIE
Illustrating Delay-Based AQM
….
[Figure 2. Simulation Network Diagram. Cable modems (CM-1 through CM #n) hosting
web browsing, VoIP monitor, gaming, DASH, and FTP clients attach to a CMTS over
DOCSIS channels whose data rates are multiples of 30.72 Mbps upstream and 42.88 Mbps
downstream (this will change with D3.1). The CMTS connects through routers over
1000 Mbps links (0.5 ms and variable propagation delay) toward VoIP, gaming, and
DASH servers and a set of FTP servers (FS-1 through FS-x) on 100 Mbps links with
1-3 ms propagation delay; the servers’ link speeds and propagation delays are varied.]
Bigger Picture
….
Moving Forward
….
References
[BC98] P. Barford and M. Crovella, “Generating representative web workloads for
network and server performance evaluation”, in Proceedings of Performance '98 / ACM
SIGMETRICS '98, July 1998.
[BLS04] A. Bellissimo, B. Levine, and P. Shenoy, “Exploring the use of BitTorrent as the
basis for a large trace repository”, Technical Report 04-41, Department of Computer
Science, University of Massachusetts Amherst, June 2004.
[BZ97] J. Bennett, H. Zhang, “Hierarchical Packet Fair Queueing Algorithms”,
IEEE/ACM Transactions on Networking, Vol. 5., No. 5, October 1997.
[CB95] M. Crovella, A. Bestavros, “Self-Similarity in World Wide Web Traffic:
Evidence and Possible Causes”, IEEE/ACM Transactions on Networking, Vol. 5, No. 6,
Dec. 1997.
[CF98] D. Clark, W. Fang, “Explicit Allocation of Best-effort Delivery Service”,
IEEE/ACM Transactions on Networking, Vol. 6, No. 4, pp. 362-373, August 1998.
[CIS-A] Technical Document, “Class-based Weighted Fair Queueing”, available at
http://www.cisco.com/en/US/docs/ios/12_0t/12_0t5/feature/guide/cbwfq.pdf
[CIS-B] Technical Document, “Distributed Weighted Random Early Detection”,
http://www.cisco.com/en/US/docs/ios/11_1/feature/guide/WRED.html
[COM08] COMCAST’s submission to the FCC, Attachment B: Comcast Corporation
Description of Planned Network Management Practices to be Deployed Following the
Termination of Current Practices”, 2008. Available at
http://downloads.comcast.net/docs/Attachment_B_Future_Practices.pdf.
[COR01] R. Cole, J. Rosenbluth, “Voice Over IP Performance Monitoring”, ACM
SIGCOMM Computer Communication Review, April 2001.
[DKS89] A. Demers, S. Keshav, S. Shenker, “Analysis and Simulation of a Fair Queueing
Algorithm”, Journal of Internetworking: Research and Experience, pp. 3-26, October
1990. Also in Proceedings of ACM SIGCOMM ’89, pp. 3-12.
[FBS01] S. Floyd, R. Bummadi, S. Shenker, “Adaptive RED: An Algorithm for
Increasing the Robustness of RED’s Active Queue Management”, Technical report,
ICSI, 2001. Available at http://www.icir.org/floyd/papers/adaptiveRed.pdf
[FGHW99] A. Feldmann, A. Gilbert, P. Huang, W. Willinger, “Dynamics of IP Traffic:
A study of the role of variability and the impact of control”, Proceedings of ACM
SIGCOMM ’99.
[FJ93] S. Floyd, V. Jacobson, “Random Early Detection Gateways for Congestion
Avoidance”, IEEE/ACM Transactions on Networking, Vol. 1, No.4, August 1993.
[FJ95] S. Floyd, V. Jacobson, “Link-Sharing and Resource Management Models for
Packet Networks”, IEEE/ACM Transactions on Networking, Vol. 3, NO. 4, August
1995.
[FKS01] W. Feng, D. Kandlur, D. Saha, K. Shin, “Stochastic Fair Blue: A Queue
Management Algorithm for Enforcing Fairness”, Proceedings of the IEEE Infocom 2001.
[FSK02] W. Feng, K.Shin, D. Kandlur, D. Saha, “The Blue Active Queue Management
Algorithms”, IEEE/ACM Transactions on Networking, Vol. 10, No.4, Aug 2002.
[GOL94] S. Golestani, “A Self-Clocked Fair Queueing Scheme for Broadband
Applications”, In Proceedings of IEEE INFOCOM’94, pages 636–646, Toronto,
CA, June 1994.
[HAY81] H. Hayden, Voice Flow Control in integrated Packet Networks, Master’s
Thesis, Massachusetts Institute of Technology, Dept. of Electrical Engineering,
Cambridge, MA, 1981.
[IZA04] M. Izal, G. Urvoy-Keller, E. Biersack, P. Felber, A. Hamra, L. Garces-Erice,
“Dissecting BitTorrent: Five Months in a Torrent’s Lifetime”, Proceedings of Passive
and Active Measurements Workshop, 2004 (PAM04), April 2004.
[JAF81] J. Jaffe, “Bottleneck Flow Control,” IEEE Transactions on Communications,
29(7), pp. 954-962, 1981.
[KES97] S. Keshav, “An Engineering Approach to Computer Networking”, Addison-
Wesley, 1997.
[LM97] D. Lin, R. Morris, “Dynamics of Random Early Detection”, ACM Sigcomm
1997.
[MBDL99] M. May, J. Bolot, C. Diot, B. Lyles, “Reasons not to Deploy RED”,
Proceedings of the 7th International Workshop on Quality of Service (IWQoS’99), June
1999.
[MC91] P. McKenney, “Stochastic Fairness Queueing”, Internetworking: Research and
Experience, Vol. 2, Jan 1991, pp. 113-131.
[OLW99] T. Ott, T. Lakshman, L. Wong, “SRED: Stabilized RED”, Proceedings of the
IEEE Infocom, March 1999.
[SV96] M. Shreedhar, G. Varghese, “Efficient Fair Queueing Using Deficit Round-
Robin”, IEEE/ACM Transactions on Networking, Vol. 4, No. 3, June 1996.
Appendix 2 VoIP Performance Metric
The R-values are mapped to a human-oriented, perceived quality assessment as follows:
100-90: best quality
90-80: high call quality
80-70: medium quality
70-60: low quality
60-0: poor quality
After applying simplifying assumptions (refer to [COR01] for details), the R-value for
G.711 starts from an ideal value of 94.2, which is then possibly reduced based on
estimated impairments caused by packet latency and loss. The function defining the
R-value is shown in equation 1:

    R = 94.2 - Id - Ief        (1)
The impairment components are summarized as follows.
Id: The impairment in call quality caused by the mouth-to-ear delay. The impairment
considers the effects of echo and the disruption in user interactivity due to latency. Echo
canceling is a common technique to reduce the impact of echo. Our metric assumes that
echo canceling is successful, so that the impairment is caused solely by packet delay. The
impairment is computed with a simple function of the measured one-way packet latency.
The function is based on data provided by the ITU and does not depend on the type of
codec.
Ief: The impairment caused by distortion incurred in the encoding/decoding process or
due to packet loss. Our metric accounts only for the latter. In a packet network, the
impairment caused by packet loss depends on the effectiveness of the loss concealment
technique in use. The quantification of the impairment depends on the loss rate
(characterized by both the mean and a measure of burstiness) as well as on the type of
encoder/decoder in operation. Packet loss occurs either in the network or at the codec,
when a frame arrives too late to be decoded back to analog voice.
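
A back-of-the-envelope sketch of the resulting metric follows. The delay and loss impairment fits are the simplified forms commonly attributed to [COR01] for G.711; treat the coefficients as assumptions of this sketch rather than as the exact values used in our simulator. The quality bands match the table above.

```python
from math import log

def r_value(one_way_delay_ms, loss_rate):
    """Simplified E-model R-value for G.711 (fits commonly attributed to [COR01])."""
    d = one_way_delay_ms
    # Delay impairment: the second term applies only beyond ~177.3 ms mouth-to-ear delay.
    i_d = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    # Loss impairment fit for G.711 (loss_rate expressed as a fraction, e.g. 0.01).
    i_ef = 30.0 * log(1 + 15.0 * loss_rate)
    return 94.2 - i_d - i_ef

def quality_band(r):
    """Map an R-value to the perceived-quality bands listed above."""
    if r >= 90: return "best"
    if r >= 80: return "high"
    if r >= 70: return "medium"
    if r >= 60: return "low"
    return "poor"

# Example: 150 ms one-way delay and 1% loss -> roughly 86, i.e. "high" call quality.
r = r_value(150, 0.01)
print(round(r, 1), quality_band(r))
```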
Appendix 3 Gaming Application Performance Metric
TBD