AQM and packet scheduling, ... again!
Jim Roberts
IRT-SystemX, France
A new AQM working group at IETF
• new interest is arising from perceived problems in home
networks (bufferbloat) and data center interconnects
• the IETF working group will define an AQM that
– “minimizes standing queues”
– “helps sources control rates without loss (using ECN)”
– “protects from aggressive flows”
– “avoids global synchronization”
• for queues in routers, home networks, data centers,...
• “combined queue management / packet scheduling algorithms are
in-scope”
Outline
• bufferbloat
• the data center interconnect
• fair flow queuing for large shared links
• user controlled sharing of the access link
“Bufferbloat”
• the problem: over-large buffers, notably in home routers, are filled
by TCP, leading to long packet latency
Bufferbloat
• how TCP should use the buffer
[figure: a TCP flow traversing a router buffer onto the link; the TCP window/rate sawtooth keeps few packets in the buffer – a “good queue”]
Bufferbloat
• impact of a bloated buffer: longer delays, same throughput
[figure: the same TCP flow with a bloated buffer; the standing queue of packets is a “bad queue”]
Bufferbloat
• impact of a bloated buffer: high latency for real time flows
[figure: a VoIP flow sharing the bloated buffer with a TCP flow; its packets wait behind the TCP standing queue – a “bad queue”]
Bufferbloat
• impact of drop tail: unfair bandwidth sharing
[figure: two TCP flows sharing a bloated drop-tail buffer; their shares of the link can be very unequal]
Bufferbloat
• impact of drop tail: unfair bandwidth sharing
[figure: an unresponsive “UDP” flow sharing the bloated drop-tail buffer with a TCP flow; the UDP flow crowds out the TCP flow]
AQM to combat bufferbloat
• CoDel (Controlled Delay) [Nichols & Jacobson, 2012]
– measure packet sojourn time
– drop packets to keep minimum delay near target
• PIE (Proportional Integral controller Enhanced) [Pan et al., 2013]
– drop probability updated based on queue length & departure rate
• both rely on TCP in end-system, neither ensures fairness
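To make the CoDel drop law concrete, here is a much-simplified Python sketch (TARGET and INTERVAL are the values suggested in the CACM paper; the real algorithm’s state machine is more elaborate, so treat this as illustrative only):

```python
import math
from collections import deque

TARGET = 0.005     # 5 ms: acceptable minimum packet sojourn time
INTERVAL = 0.100   # 100 ms: grace period before dropping starts

class CoDelQueue:
    """Simplified CoDel sketch: drop from the head once the sojourn time
    has stayed above TARGET for a whole INTERVAL, then keep dropping at
    intervals INTERVAL/sqrt(count) until the delay falls below TARGET."""

    def __init__(self):
        self.pkts = deque()        # (enqueue_time, packet)
        self.first_above = None    # earliest time dropping may begin
        self.drop_next = 0.0       # time of the next scheduled drop
        self.count = 0             # drops in the current episode

    def enqueue(self, pkt, now):
        self.pkts.append((now, pkt))

    def dequeue(self, now):
        while self.pkts:
            t_in, pkt = self.pkts.popleft()
            if now - t_in < TARGET or not self.pkts:
                # sojourn time below target: "good queue", reset state
                self.first_above, self.count = None, 0
                return pkt
            if self.first_above is None:
                # delay just crossed TARGET: arm the INTERVAL timer
                self.first_above = now + INTERVAL
                return pkt
            if now < max(self.first_above, self.drop_next):
                return pkt
            # standing queue confirmed: drop the head packet (TCP backs
            # off) and schedule the next drop sooner, per the sqrt law
            self.count += 1
            self.drop_next = now + INTERVAL / math.sqrt(self.count)
        return None
```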
AQM and scheduling: fq_codel
• fq_codel combines SFQ (stochastic fairness queuing) and CoDel
– hash flow ID to one of ~1000 queues (buckets)
– deficit round robin scheduling over queues
– control latency in each queue using CoDel
• with some enhancements
– priority to packets of “new” flows
– drop from queue head rather than tail
• V. Jacobson (quoted by D. Täht):
– “If we're sticking code into boxes to deploy CoDel, don't do that.
Deploy fq_codel. It's just an across the board win”
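The SFQ + DRR half of fq_codel fits in a few lines. A Python sketch (the Packet type is invented for the example and the per-queue CoDel step is omitted) showing the hashing, the deficit accounting, and the priority given to newly active queues:

```python
from collections import deque
from dataclasses import dataclass

NUM_QUEUES = 1024   # fq_codel's default number of buckets
QUANTUM = 1514      # bytes a queue may send per round (one full frame)

@dataclass
class Packet:
    flow_id: tuple   # e.g. (src addr, dst addr, ports, protocol)
    size: int        # bytes

class FqCodelSketch:
    def __init__(self):
        self.queues = [deque() for _ in range(NUM_QUEUES)]
        self.deficit = [0] * NUM_QUEUES
        self.new = deque()   # buckets that just became active: served first
        self.old = deque()

    def enqueue(self, pkt: Packet):
        i = hash(pkt.flow_id) % NUM_QUEUES      # SFQ: hash flow to bucket
        if not self.queues[i]:
            self.deficit[i] = QUANTUM
            self.new.append(i)                  # "new" flow gets priority
        self.queues[i].append(pkt)

    def dequeue(self):
        while self.new or self.old:
            ring = self.new if self.new else self.old
            i = ring[0]
            q = self.queues[i]
            if q and self.deficit[i] >= q[0].size:
                pkt = q.popleft()               # real fq_codel runs CoDel here
                self.deficit[i] -= pkt.size
                return pkt
            ring.popleft()
            if q:                               # quantum spent: next round
                self.deficit[i] += QUANTUM
                self.old.append(i)
        return None
```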
Advantages of per-flow fair queuing
• fairness is imposed, no need to rely on TCP
– flows use the congestion control they want (eg, high speed TCP)
– no danger from “unresponsive flows”
• realizes implicit service differentiation
– since the rate of streaming flows is generally less than the fair rate
– latency can be lowered further by giving priority to “new” flows
• as proposed by [Nagle 1985], [Kumar 1998], [Kortebi 2004],...
– with longest queue drop as AQM
[figure: fair queuing schematic – each flow receives the fair rate]
Outline
• bufferbloat
• the data center interconnect
• fair flow queuing for large shared links
• user controlled sharing of the access link
Congestion in the interconnect
• 1000s of servers connected by commodity switches and routers
• mixture of bulk data transfers requiring high throughput...
• and query flows requiring low latency
[figure: interconnect topology – core routers, aggregation switches, top-of-rack switches, servers grouped in pods]
Congestion in the interconnect
• 1000s of servers connected by commodity switches and routers
• mixture of bulk data transfers requiring high throughput...
• and query flows requiring low latency
• a practical observation
– regular TCP does not ensure high throughput and low latency
[figure: query flows and bulk data transfers sharing a switch buffer and output links]
Many new congestion control schemes
• DCTCP [Alizadeh 2010]
– limits delays by refined ECN scheme to smooth rate variations
• D3 [Wilson 2011]
– “deadline driven delivery”
• D2TCP [Vamanan 2012]
– combines aspects of previous two
• PDQ [Hong 2012]
– size-based pre-emptive scheduling
• HULL [Alizadeh 2012]
– low delay by “phantom queues” and ECN
• evaluations assume all data center flows implement the
recommended protocol
pFabric: “minimalist data center transport”
• instead of end-to-end congestion control, implement scheduling
in switch and server buffers [Alizadeh 2013]
• “key insight: decouple flow scheduling from rate control”
[figure: the same interconnect topology as above]
pFabric: “minimalist data center transport”
• instead of end-to-end congestion control, implement scheduling
in switch and server buffers [Alizadeh 2013]
• “key insight: decouple flow scheduling from rate control”
• “shortest job first” scheduling to minimize flow completion time
[figure: packets queued between input and output of a server or switch]
pFabric: “minimalist data center transport”
• instead of end-to-end congestion control, implement scheduling
in switch and server buffers [Alizadeh 2013]
• “key insight: decouple flow scheduling from rate control”
• “shortest job first” scheduling to minimize flow completion time
• also optimal for flows that arrive over time
• minimal rate control: eg, start at max rate, adjust using AIMD (see the sketch after the figure)
[figure: the green flow is pre-empted when a shorter flow arrives, and resumes once that flow completes]
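A minimal Python sketch of such a port (buffer size and the eviction rule are simplified from the paper; `remaining` stands for the residual flow size the sending host stamps in each packet header):

```python
import heapq
import itertools

BUFFER_PKTS = 36   # pFabric is designed for very small per-port buffers

class PFabricPort:
    """Sketch of a pFabric-style output port: always transmit a packet of
    the flow with the smallest remaining size; when the buffer is full,
    make room by evicting a packet of the largest-remaining flow."""

    def __init__(self):
        self.heap = []                 # (remaining, seq, packet)
        self.seq = itertools.count()   # tie-breaker: FIFO within a flow

    def enqueue(self, pkt, remaining):
        # 'remaining' is set by the end host, which is why hosts can
        # cheat by understating it (a drawback noted on the next slide)
        if len(self.heap) >= BUFFER_PKTS:
            worst = max(self.heap)     # packet of the largest-remaining flow
            if worst[0] <= remaining:
                return                 # the arrival itself is worst: drop it
            self.heap.remove(worst)
            heapq.heapify(self.heap)
        heapq.heappush(self.heap, (remaining, next(self.seq), pkt))

    def send_next(self):
        # shortest remaining size first == shortest job first, per link
        return heapq.heappop(self.heap)[2] if self.heap else None
```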
pFabric: possible drawbacks
• relies on scheduler knowing flow sizes
– hosts can lie or divide long flows into successive short flows
• priority in networks can lead to loss of traffic capacity
– traffic capacity is the load at which delays explode
– eg, 2 classes & 2-buffer paths requires ρ₂ < (1 − ρ₁)² (see the worked example below)
– cf. [Bonald & Massoulié 2001], [Verloop et al. 2005]
[figure: flows crossing an ingress buffer and an egress buffer between servers]
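A quick arithmetic check of what the condition costs (illustrative numbers, not from the talk):

```latex
\[
  \rho_2 < (1-\rho_1)^2 :
  \qquad \rho_1 = 0.5 \;\Rightarrow\; \rho_2 < 0.25 ,
\]
```

so a buffer carrying both classes can be loaded to at most 75% and a quarter of its capacity is lost, whereas fair sharing is stable whenever the load on each buffer is below 1.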
Outline
• bufferbloat
• the data center interconnect
• fair flow queuing for large shared links
• user controlled sharing of the access link
How can the Internet continue to rely on
end-to-end congestion control?
• “TCP saved the Internet from congestion collapse”
– but there are easier ways to avoid the “dead packet” phenomenon
• new TCP versions are necessary for high speed links
– but they are generally very unfair to legacy versions
• no incentive for applications to be “TCP friendly”
– in particular, congestion pricing is unworkable
• better, like pFabric, to decouple scheduling and rate control
[figure: local congestion ⇒ many dropped packets; upstream links carry a large number of “dead packets” and the flow suffers congestion collapse]
“Flow-aware networking” [Kortebi et al. 2004]
• rather than IntServ, DiffServ, MPLS TE,...
routers should impose per-flow fair sharing and not rely on
end-system implemented congestion control
• fair queuing is feasible and scalable...
... and realizes implicit service differentiation...
... for network neutral traffic control
• note: fairness is an expedient, not a socio-economic objective
[figure: fair queuing schematic – each flow receives the fair rate]
Understanding traffic at flow level
• a flow is an instance of some application (document
transfer, voice signal,...)
– a set of packets with like header fields, local in space and
time
• traffic is a process of flows of different types
– conversational, streaming, interactive data, background
– with different traffic characteristics – rates, volumes,...
– and different requirements for latency, integrity,
throughput
• characterized by size and “peak rate”
[figure: flows characterized by size and peak rate – a video stream at its peak rate, TCP data at a higher rate]
Controlled bandwidth sharing
• three bandwidth sharing regimes: “transparent”, “elastic”, “overload”
• transparent regime:
– all flows suffer negligible loss, no throughput degradation
• elastic regime:
– some high rate flows can saturate residual bandwidth; without
control these can degrade quality for all other flows
• overload regime:
– traffic load is greater than capacity; all flows suffer from
congestion unless they are handled with priority
Statistical bandwidth sharing
• ie, statistical multiplexing with elastic traffic
• consider a network link handling flows between users, servers, data
centers,...
• define: link load = flow arrival rate × mean flow size / link rate
                    = packet arrival rate × mean packet size / link rate
                    = mean link utilization
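A worked example with invented numbers: a 10 Gb/s link receiving on average 250 flows per second of mean size 2 MB (16 Mb) carries

```latex
\[
  \text{load} \;=\; \frac{250\ \text{flows/s} \times 16\ \text{Mb}}
                         {10\,000\ \text{Mb/s}} \;=\; 0.4 ,
\]
```

ie the link is on average 40% utilized.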
Traffic variations and stationarity
[figure: mean link utilization over one week and over one day – a busy hour pattern; within the busy period, utilization is a stationary stochastic process whose mean defines the busy hour demand]
Bandwidth sharing performance
• in the following simulation experiments, assume flows
– arrive as a Poisson process
– have exponential size distribution
– instantaneously share link bandwidth fairly
• results apply more generally thanks to insensitivity
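Under these assumptions the fairly shared link is an M/M/1 processor sharing queue. A minimal event-driven sketch in Python (unit link rate and mean flow size; written for this note, not the simulator behind the plots) reproduces the behaviour summarized on the next slide:

```python
import random

def simulate_ps_link(load, horizon=100_000, seed=1):
    """Fairly shared link: Poisson flow arrivals at rate 'load',
    exponential flow sizes (mean 1), unit link rate shared equally.
    Returns the time-average number of active flows."""
    rng = random.Random(seed)
    t, next_arrival = 0.0, rng.expovariate(load)
    residual = []                          # remaining work of active flows
    area = 0.0                             # integral of #active flows dt
    while t < horizon:
        n = len(residual)
        # earliest departure: smallest residual, served at rate 1/n
        t_dep = t + min(residual) * n if n else float("inf")
        t_next = min(next_arrival, t_dep)
        if n:
            served = (t_next - t) / n      # per-flow service in [t, t_next]
            residual = [r - served for r in residual]
        area += n * (t_next - t)
        t = t_next
        if t == next_arrival:              # a new flow arrives
            residual.append(rng.expovariate(1.0))
            next_arrival = t + rng.expovariate(load)
        else:                              # the smallest flow completes
            residual = [r for r in residual if r > 1e-12]
    return area / horizon

# the mean stays modest until load nears 1 (theory: load / (1 - load))
for load in (0.5, 0.8, 0.9, 0.95):
    print(f"load {load}: {simulate_ps_link(load):.1f} flows active on average")
```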
Performance of fair shared link
[simulation plots: number of active flows vs. time at increasing loads (load = arrival rate × mean size / link rate); flow performance is measured by duration and mean rate]
Observations
• the number of flows using a fairly shared link is small until load
approaches 100% (for any link capacity)
• therefore, fair queuing schedulers are feasible and scalable
• our simulations make Markovian assumptions but the results for the
number of active flows hold for much more general traffic [Ben-Fredj 2001]
[figure: the session traffic model – session arrivals are Poisson; within a session, flows 1, 2, 3, ... n alternate with think times until the session departs]
More simulations
• on Internet core links (≥ 10 Gbps), the vast majority of flows
cannot use all available capacity; their rate is constrained
elsewhere on their path (eg, ≤ 10 Mbps)
• consider a link shared by flows whose maximum rate is only 1%
of the link rate
– conservatively assume these flows emit packets as a Poisson process
at rate proportional to the number of flows in progress
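A slotted-time sketch of this scenario in Python (unit-size packets, one served per slot, so load = number of flows × peak rate; a Poisson sampler is inlined since the standard library lacks one):

```python
import math
import random
from collections import Counter, deque

def poisson(rng, lam):
    """Knuth's method for sampling a Poisson(lam) variate."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def avg_active_flows(num_flows, peak=0.01, slots=200_000, seed=1):
    """num_flows rate-limited flows each emit unit packets at Poisson
    rate 'peak'; the link serves one packet per slot. A flow is
    'active' while it has at least one packet queued."""
    rng = random.Random(seed)
    fifo = deque()                     # flow id of every queued packet
    backlog = Counter()                # queued packets per flow
    area = 0
    for _ in range(slots):
        for _ in range(poisson(rng, num_flows * peak)):
            f = rng.randrange(num_flows)   # equal-rate Poisson sources
            fifo.append(f)
            backlog[f] += 1
        if fifo:                       # serve one packet this slot
            f = fifo.popleft()
            backlog[f] -= 1
            if backlog[f] == 0:
                del backlog[f]
        area += len(backlog)           # flows currently active
    return area / slots

# even at load 0.95, few of the 95 flows in progress are active at once
for n in (50, 80, 95):
    print(f"{n} flows in progress: {avg_active_flows(n):.1f} active on average")
```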
Performance with rate limited flows
[simulation plots: number of flows in progress and number of “active flows” (those with ≥ 1 packet in queue) vs. time, at increasing loads]
Observations 2
• most flows are not elastic and emit packets at their peak rate
• these flows are “active”, and need to be scheduled, only when
they have a packet in the queue
• the number of active flows is small until load approaches 100%
• fair queuing is feasible and scalable, even when the number of
flows in progress is very large
Yet more simulations
• links may be shared by many rate limited flows and a few elastic
flows
• consider a link where 50% of traffic comes from flows whose peak
rate is 1% of the link rate and 50% from elastic flows
Performance of link with elastic and rate limited flows
[simulation plots: number of flows in progress and number of active flows vs. time, at increasing loads]
Observations 3
• the number of active flows is small (<100) with high probability
until load approaches 100%
• therefore, fair queuing is feasible and scalable
• fair queuing means packets of limited peak rate flows see
negligible delay:
– they are delayed by at most 1 round robin cycle
– this realizes implicit service differentiation since conversational
and streaming flows are in the low rate category
– a scheduler like DRR can identify packets belonging to “new” flows
and give them priority (cf. fq_codel)
• imposed flow fairness yields predictable performance that is
insensitive to most traffic characteristics
– an “Erlang formula” for the Internet [Bonald & Roberts 2012]
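The classical processor sharing result behind this claim: at load ρ < 1 the number of active flows N on a fairly shared link is geometrically distributed, whatever the flow size distribution,

```latex
\[
  \Pr\{N = n\} \;=\; (1-\rho)\,\rho^{\,n}, \qquad
  \mathbb{E}[N] \;=\; \frac{\rho}{1-\rho} ,
\]
```

so even at ρ = 0.99 the scheduler handles about 100 flows on average; like the Erlang formula, performance depends on traffic only through the load ρ.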
Rate control and congestion collapse
• like pFabric, FQ decouples flow scheduling from rate control
– minimal rate control: eg, start at max rate, adjust using AIMD
– performance doesn’t depend on standard control algorithm (eg, TCP)
– FQ has maximal traffic capacity (unlike SRPT)
• congestion collapse occurs in the overload regime (ie, load > 1)
– to be avoided by traffic engineering and overload control
• another form of congestion collapse would occur if users were
all unresponsive to congestion (eg, [Floyd and Fall 1999])
– but this is hardly a realistic scenario!
[figure: the overload regime – local congestion ⇒ a large number of flows in progress; high rate flows suffer from a low fair rate]
Recommendation for traffic control
on shared network links
• implement per-flow fair queuing in router queues with longest
queue drop
– this is scalable and feasible
– avoids relying on end-system congestion control
– and realizes implicit service differentiation
– view fairness as an expedient not a socio-economic objective
– yielding predictable performance, an “Internet Erlang formula”
• apply traffic engineering to ensure load is not too close to 100%
and implement overload controls in case this fails
• this works for the Internet and data center interconnects but
access networks need more than fair sharing...
Outline
• bufferbloat
• the data center interconnect
• fair flow queuing for large shared links
• user controlled sharing of the access link
Sharing the first/last mile
• this is the usual bottleneck for Internet flows
– for DSL, cable, wireless, fiber
• what role for AQM, scheduling, congestion control, QoS?
– for upstream and downstream
• who rules? who decides priorities?
– surely, the user, not the network operator
[figure: a “home router” queuing uploads, ACKs, VoIP, ∙∙∙ upstream and an “edge router” queuing downloads, streams, VoIP, ∙∙∙ downstream]
Sharing the first/last mile
• CoDel, PIE, fq_codel do not distinguish classes of service
– even though users have priorities (eg, background uploads)
• also, AQM coexists poorly with low priority congestion control
– eg, CoDel, FQ,... give the same share to LEDBAT & TCP [Gong 2013]
• prefer explicit, per-flow scheduling to realize user’s fairness and
priority objectives
– eg, priority to Dad’s flows, limit Junior’s total bandwidth,... (see the sketch below)
[figure: the home and edge router queues as above, now with streams and ACKs in both directions]
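A sketch of such a scheduler in Python (the class name and example policy are invented, not from the talk): strict priority between user-defined classes, round robin between flows within a class. The missing piece, as the next slide argues, is a signalling protocol letting the user install the classification:

```python
from collections import defaultdict, deque

class UserScheduler:
    """Two-level sketch: strict priority across user-defined classes
    (0 = highest), per-packet round robin across flows within a class
    (with equal packet sizes this approximates fair queuing; a deficit
    counter as in DRR would handle variable sizes)."""

    def __init__(self, classify):
        # classify(pkt) -> (priority, flow_id); e.g. (0, ...) for VoIP,
        # (1, ...) for Dad's flows, (3, ...) for background uploads
        self.classify = classify
        self.flows = defaultdict(deque)   # (prio, flow_id) -> packets
        self.rings = defaultdict(deque)   # prio -> active flow ids

    def enqueue(self, pkt):
        prio, flow = self.classify(pkt)
        if not self.flows[(prio, flow)]:
            self.rings[prio].append(flow)     # flow becomes active
        self.flows[(prio, flow)].append(pkt)

    def dequeue(self):
        for prio in sorted(self.rings):       # strict priority order
            ring = self.rings[prio]
            if not ring:
                continue
            flow = ring.popleft()
            pkt = self.flows[(prio, flow)].popleft()
            if self.flows[(prio, flow)]:
                ring.append(flow)             # round robin within class
            return pkt
        return None
```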
Sharing the first/last mile
• operators differentiate managed and over-the-top services
– though users may want other priorities (eg, priority to Skype,
background downloads)
• user control would be a better alternative, requiring:
– a flow scheduler in the router trading off complexity and flexibility
– a signalling protocol allowing the user to dictate priorities
[figure: the same home and edge router queues as above]
Conclusions
• congestion in data center interconnects
– new specific congestion control algorithms and AQMs
– pFabric decouples scheduling and rate control
• imposed fair flow sharing is a scalable solution for large capacity
links in the “elastic regime”
– avoids relying on end-to-end congestion control
– and realizes implicit service differentiation
• bufferbloat in the home network
– new AQMs reduce latency but assume TCP congestion control
– fair queuing (fq_codel) is “an across the board win”
• the first/last mile requires more than just fairness
– schedulers in home and edge routers
– a signalling scheme to implement user downstream priorities
References
– Alizadeh, et al., “Data center TCP (DCTCP)” SIGCOMM, 2010.
– Alizadeh, et al., “Less is more: trading a little bandwidth for ultra-low
latency in the data center”, NSDI, 2012.
– Alizadeh et al. “pFabric: minimal near-optimal datacenter transport”,
SIGCOMM, 2013.
– Ballani et al., “Towards predictable datacenter networks”, SIGCOMM, 2011.
– Ben-Fredj et al., “Statistical bandwidth sharing: a study of congestion at
flow level”. SIGCOMM, 2001.
– Bonald and Massoulié, “Impact of fairness on Internet performance”,
SIGMETRICS, 2001.
– Bonald and Roberts, “Internet and the Erlang formula”, SIGCOMM Comput.
Commun. Rev., 2012.
– Floyd and Fall, “Promoting the use of end-to-end congestion control”, IEEE
ToN, 1999.
– Gong et al., “Fighting the bufferbloat: on the coexistence of AQM and low
priority congestion control”, TMA, 2013.
References (2)
– Hong et al., “Finishing flows quickly with preemptive scheduling”, SIGCOMM,
2012.
– Kortebi et al. “Cross-protect: implicit service differentiation and admission
control”, HPSR, 2004.
– Kumar et al., “Beyond best effort: router architectures for the
differentiated services of tomorrow’s Internet”, IEEE Comm. Mag., 1998.
– Nagle, “On packet switches with infinite storage”, RFC 970, 1985.
– Nichols and Jacobson, “Controlling queue delay”, Comm. ACM, 2012.
– Pan et al. “PIE: a lightweight control scheme to address the bufferbloat
problem”, HPSR, 2013.
– Vamanan et al., “Deadline-aware datacenter TCP (D2TCP)”, SIGCOMM, 2012.
– Verloop et al., “Stability of size-based scheduling disciplines in resource
sharing networks”, Perform. Eval., 2005.
– Wilson et al., “Better never than late: meeting deadlines in datacenter
networks”, SIGCOMM, 2011.