Active Queue Management (AQM)
based Internet Congestion Control
October 1 2002
Seungwan Ryu
(sryu@eng.buffalo.edu)
PhD Student of IE Department
University at Buffalo
Contents
Internet Congestion Control
Active Queue Management (AQM)
Control-Theoretic design of AQM
Performance Evaluation
Summary and Issues for Further study
References
I. Internet Congestion Control
Internet Traffic Engineering
What is Congestion ?
Congestion Control and Avoidance
TCP Congestion Control
Active Queue management (AQM)
Other Approaches
Internet Traffic Engineering
Measurement: for reality check
Experiment: for implementation issues
Analysis:
Brings fundamental understanding of systems
May lose important facts because of simplification
Simulation:
Complementary to analysis: correctness, exploring complicated models
May share a similar model with analysis
What is congestion?
The aggregate demand for bandwidth exceeds the available capacity of a link.
What will occur? Performance degradation:
• Multiple packet losses
• Low link utilization (low throughput)
• High queueing delay
• Congestion collapse
What is congestion? (2)
Congestion Control
Open-loop control
• Mainly used in circuit-switched networks (GMPLS)
Closed-loop control
• Mainly used in packet-switched networks
• Uses feedback information: global & local
Implicit feedback control
• End-to-end congestion control
• Examples: TCP Tahoe, TCP Reno, TCP Vegas, etc.
Explicit feedback control
• Network-assisted congestion control
• Examples: IBM SNA, DECbit, ATM ABR, ICMP source quench, RED, ECN
Congestion Control and Avoidance
Two approaches to handling congestion:
Congestion Control (reactive)
• Acts after the network is overloaded
Congestion Avoidance (proactive)
• Acts before the network becomes overloaded
Paradigms of the Current Internet
Paradigms:
For design and operation: "Keep it simple"
Design principle of TCP: "Do not ask the network to do what you can do yourself"
These paradigms are aimed at best-effort service
As the Internet evolves and grows in size and number of users, the network has experienced performance degradation such as more packet drops
In addition, the service model is evolving toward a variety of services
Question: Do we need a new paradigm?
TCP Congestion Control
Uses end-to-end congestion control
Uses implicit feedback
• e.g., time-out, triple duplicate ACKs, etc.
Uses window-based flow control
• cwnd = min(pipe size, rwnd)
• Self-clocking
• Slow-start and congestion avoidance
Examples:
• TCP Tahoe, TCP Reno, TCP Vegas, etc.
TCP Congestion Control (2)
Slow-start and Congestion Avoidance
[Figure: cwnd vs. time — exponential growth during slow start up to W*, drop to W*/2 after a loss, then linear growth (W+1 per RTT) during congestion avoidance]
TCP Congestion Control (3)
TCP Tahoe
Uses slow start / congestion avoidance
Fast retransmit (an enhancement): detect packet (segment) drops by three duplicate ACKs
Upon receiving three duplicate ACKs: W = W/2, and enter congestion avoidance
TCP Reno (fast recovery)
Upon receiving three duplicate ACKs: ssthresh = W/2, and retransmit the missing packet
Upon receiving the next ACK: W = ssthresh
Allows the window size to grow fast enough to keep the pipeline full (see the sketch below)
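As an illustration of these window adjustments, here is a minimal Python sketch; cwnd is counted in segments, the receiver window is fixed, and timeouts (which force a return to slow start) are omitted. It is a simplified reading of Tahoe/Reno behavior, not a faithful TCP implementation.

```python
class RenoSender:
    def __init__(self, rwnd=64):
        self.cwnd = 1.0        # congestion window, in segments
        self.ssthresh = rwnd   # slow-start threshold
        self.rwnd = rwnd       # receiver-advertised window

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0               # slow start: exponential growth per RTT
        else:
            self.cwnd += 1.0 / self.cwnd   # congestion avoidance: ~1 segment per RTT
        self.cwnd = min(self.cwnd, self.rwnd)

    def on_triple_dup_ack(self):
        # fast retransmit / fast recovery: halve the window, skip slow start
        self.ssthresh = max(self.cwnd / 2.0, 2.0)
        self.cwnd = self.ssthresh
```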
TCP Congestion Control (4)
TCP SACK (Selective Acknowledgment)
A TCP (Tahoe) sender can only learn about a single loss per RTT
The sender could retransmit all possibly lost packets, but some of them may have already been received
The SACK option provides better recovery from multiple losses
Operation
Add the SACK option to the TCP header
The receiver sends SACKs back to the sender to report which packets were received
Then the sender can retransmit only the missing packets
Other Approaches: Pricing
Smart market [Mackie-Mason 1995]
A price is set for each packet depending on the level of demand for bandwidth
Admit packets whose bid prices exceed the cut-off value
The cut-off is determined by the marginal cost
Paris metro pricing (PMP) [Odlyzko]
Provides differentiated services
The network is partitioned into several logically separate channels with different prices
With less traffic in the high-priced channel, better QoS would be provided
Other Approaches (2): Optimization
Concept
Network resource allocation problem: user problems and a network problem
The user problem sends a bandwidth request with a price; the network problem allocates bandwidth to the users by solving an NLP
User problem
Users can be distinguished by a utility function
A user wants to maximize its benefit (utility − cost)
Network problem
Maximize the aggregate utilities subject to the link capacity constraints
This can be formulated as a non-linear programming (NLP) problem
NLP formulation of network resource allocation
SYSTEM(u,A,C):
max
U s ( xs )
s.t.
Ax ≤ C
xs ≥ 0
where J = {1,
s∈S
, CJ }
, J } is a set of links with capacities C = {C 1,
S = {1, , S } is a set of sources,
J ( s) ⊆ J is a set of links that source s uses
Sources with different valuation of bandwidth should react differently to
the congestion
Congestion becomes a rate based optimization problem
Window size correspond to xs is: Ws = xs * Ts = xs ( Ds + d s )
U s ( xs ) ∈ U is a utility function of source s, and
A be a routing matrix where A js = 1 if j ∈ J (s) and zero otherwise
Optimization (2) : NLP formulation
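As a small numerical illustration of SYSTEM(U, A, C), the sketch below solves the NLP with SciPy for an assumed two-link, three-source toy topology and logarithmic utilities; the topology, capacities, and solver choice are illustrative, not taken from the talk.

```python
import numpy as np
from scipy.optimize import minimize

# Toy topology (assumed): 2 links of capacity 1; sources 0 and 1 each use one
# link, source 2 uses both.  A is the routing matrix, C the capacity vector.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
C = np.array([1.0, 1.0])

def neg_aggregate_utility(x):
    return -np.sum(np.log(x))          # maximize sum of U_s(x_s) = log(x_s)

res = minimize(neg_aggregate_utility,
               x0=np.full(3, 0.1),
               bounds=[(1e-6, None)] * 3,               # x_s >= 0
               constraints=[{"type": "ineq",            # C - A x >= 0, i.e. A x <= C
                             "fun": lambda x: C - A @ x}],
               method="SLSQP")
print(np.round(res.x, 3))   # roughly [0.667, 0.667, 0.333]: the two-link flow gets less
```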
Optimization (3): 4 types of utility functions [Shenker 1995]
Elastic services
Tolerant to delays
Utility is a strictly concave, continuously differentiable function of x_s over x_s ≥ 0
Example: U_s(x_s) = log x_s
Examples: file transfer, e-mail, remote terminal, web traffic, etc.
Hard real-time
Delay sensitive: needs data to arrive within a given delay bound
Ex: circuit-switched services such as telephony
Delay-adaptive
Tolerates delay-bound violations and packet drops
Ex: most video and audio applications
Rate-adaptive
Adjusts its transmission rate in response to network congestion
Ex: ATM ABR flow control
[Figure: utility vs. bandwidth curves for the four service types]
Optimization (4): Kelly's Approach [1997]
A user subproblem
    max_{x_s ≥ 0}  U_s(x_s) − λ_s x_s
where λ_s is the price per unit bandwidth
For a given price per unit bandwidth, a user chooses its willingness-to-pay to maximize its benefit
A network subproblem
    max  Σ_s λ_s x_s    s.t.  Ax ≤ C
For a given set of willingness-to-pay values, the network chooses source rates to maximize revenue
Determines the users' rates according to proportional fairness
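A quick check of the user subproblem, assuming the logarithmic utility U_s(x_s) = w_s log x_s with w_s denoting the user's willingness-to-pay (this particular utility is an assumed example):

```latex
\max_{x_s \ge 0}\; w_s \log x_s - \lambda_s x_s
\quad\Longrightarrow\quad \frac{w_s}{x_s^{\ast}} = \lambda_s
\quad\Longrightarrow\quad x_s^{\ast} = \frac{w_s}{\lambda_s}
```

That is, the user's optimal rate is its willingness-to-pay divided by the price per unit bandwidth.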
Other Approaches (3): Fairness
Two fairness issues
Fair bandwidth sharing: network-centric
Fair packet drop (mark): user-centric
Fair bandwidth sharing
Max-min fair [Bertsekas 1992]:
No rate can be increased without simultaneously decreasing another rate that is already smaller
Provides equal treatment to all flows
Proportional fair [Kelly 1998]:
A set of rates is feasible if the rates are non-negative and the aggregate rate does not exceed the link capacity; a feasible set is proportionally fair if the aggregate of proportional changes to any other feasible set is zero or negative
Provides different treatment of each flow according to its rate
Fairness (2): Max-min fair
Definition:
A vector of rates x = (x_s, s ∈ S) is max-min fair
If it is feasible (i.e., x ≥ 0 and Ax ≤ C), and
For each s ∈ S, x_s cannot be increased (while maintaining feasibility) without decreasing x_s* for some s* with x_s* ≤ x_s
Equal treatment of each flow
Gives absolute priority to the small flows (x_s* ≤ x_s): x_s cannot be increased, no matter how large the increase ("MAX"), at the cost of any decrease in x_s*, no matter how small ("MIN")
Fairness (3): Proportional fair
Definition:
A vector of rates x = (x_s, s ∈ S) is proportionally fair
If it is feasible, and
For any other feasible vector x*, the aggregate of proportional changes is zero or negative:
    Σ_{s∈S} (x_s* − x_s) / x_s ≤ 0
Different treatment of each flow according to its rate
Puts less emphasis on small flows than max-min fairness
Explicit Congestion Notification (ECN)
Current congestion indication
Uses packet drops to indicate congestion
Sources infer congestion implicitly from timeouts or triple duplicate ACKs
ECN [IETF RFC 2481, 1999]
Uses packet marking rather than dropping, to give fewer packet drops and better performance
Reduces long timeouts and retransmissions
Needs cooperation between sources and the network
Sources must indicate that they are ECN-capable
Sources and receivers must agree to use ECN
The receiver must inform sources of ECN marks
Sources must react to marks just like losses
ECN (2)
Needs additional flags in TCP header and IP header
In IP header: ECT and CE
ECN Capable Transport (ECT):
Set by sources on all packets to indicate ECN-capability
Congestion Experienced (CE):
Set by routers as a (congestion) marking (instead of dropping)
In TCP header: ECE and CWR
Echo Congestion Experienced (ECE):
When a receiver sees CE, sets ECE on all packets until CWR is received
Congestion Window Reduced (CWR):
Set by a source to indicate that ECE was received and the window size
was adjusted (reduced)
ECN (3) : Operation
[Figure: ECN operation — (1) the source sends a packet with ECT=1, CE=0 in the IP header; (2) a congested router sets CE=1 instead of dropping; (3) the destination echoes the mark by setting ECN-Echo in the ACK's TCP header; (4) the source reduces its window and sets CWR on its next packet]
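To make the exchange concrete, the toy sketch below walks one packet through the four steps; the dictionary-based "headers" and function names are purely illustrative, not a real TCP/IP implementation.

```python
# Hypothetical packet "headers" as plain dicts; bit names follow the slide.
def router_forward(ip_bits, congested):
    # step 2: a congested router marks (CE=1) instead of dropping, if ECT is set
    if congested and ip_bits["ECT"]:
        ip_bits["CE"] = 1
    return ip_bits

def receiver_ack(ip_bits, still_echoing):
    # step 3: the receiver sets ECN-Echo (ECE) on ACKs until it sees CWR
    return {"ECE": 1 if (ip_bits["CE"] or still_echoing) else 0}

def sender_react(ack, cwnd):
    # step 4: the sender reacts as it would to a loss (halve the window)
    # and sets CWR on its next packet to stop the echo
    if ack["ECE"]:
        return cwnd / 2, {"ECT": 1, "CE": 0, "CWR": 1}
    return cwnd, {"ECT": 1, "CE": 0, "CWR": 0}

pkt = {"ECT": 1, "CE": 0, "CWR": 0}          # step 1: ECN-capable packet
pkt = router_forward(pkt, congested=True)
ack = receiver_ack(pkt, still_echoing=False)
cwnd, next_pkt = sender_react(ack, cwnd=32.0)
print(cwnd, next_pkt)                         # 16.0 and a packet carrying CWR=1
```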
II. Active Queue Management (AQM)
Internet Congestion Control
Active Queue Management (AQM)
Control-Theoretic design of AQM
Performance Evaluation
Summary and Issues for Further study
References
Active Queue Management (AQM)
What is AQM?
Examples of AQM: RED and Variants
More about AQM: Extensions
Performance degradation in current TCP Congestion
Control
Multiple packet loss
Low link utilization
Congestion collapse
The role of the router becomes important
Active Queue Management (AQM)
Control congestion effectively in networks
Allocate bandwidth fairly
Problems with the current router algorithm
Uses FIFO-based tail-drop (TD) queue management
Two drawbacks of TD: lock-out and full-queue
Lock-out: a small number of flows monopolize the buffer capacity
Full-queue: the buffer is always full (high queueing delay)
Possible solution: AQM
AQM (2)
Definition:
A group of FIFO based queue management mechanisms to
support end-to-end congestion control in the Internet
Goals of AQM
Reducing the average queue length:
Decreasing end-to-end delay
Reducing packet losses:
More efficient resource allocation
AQM (3)
Methods:
Drop packets before the buffer becomes full
Use the (exponentially weighted) average queue length as a congestion indicator
Examples: RED, BLUE, ARED, SRED, FRED, ...
AQM (4): Random Early Detection (RED)
Uses a network algorithm to detect incipient congestion
Design goals:
• minimize packet loss and queueing delay
• avoid global synchronization
• maintain high link utilization
• remove the bias against bursty sources
Achieves these goals by
• randomized packet drops
• queue length averaging
Algorithm
Average queue length: avgQ = (1 − w_Q) · avgQ + w_Q · Q
Drop probability:
    P_d = 0                                             if avgQ < min_th
    P_d = p_max (avgQ − min_th) / (max_th − min_th)     if min_th ≤ avgQ < max_th
    P_d = 1                                             if max_th ≤ avgQ
Drawbacks
• Parameter tuning problem
• Actual queue length fluctuation
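A minimal sketch of the RED drop decision defined by the equations above (the count-based spacing of drops used by full RED is omitted); the default parameter values mirror the RED settings listed later in the simulation setup.

```python
import random

class RedQueue:
    def __init__(self, wq=0.002, minth=70, maxth=200, pmax=0.1):
        self.wq, self.minth, self.maxth, self.pmax = wq, minth, maxth, pmax
        self.avg = 0.0                      # EWMA of the queue length

    def on_arrival(self, qlen):
        """Return True if the arriving packet should be dropped (or marked)."""
        # avgQ = (1 - w_Q) * avgQ + w_Q * Q
        self.avg = (1 - self.wq) * self.avg + self.wq * qlen
        if self.avg < self.minth:
            pd = 0.0
        elif self.avg < self.maxth:
            pd = self.pmax * (self.avg - self.minth) / (self.maxth - self.minth)
        else:
            pd = 1.0
        return random.random() < pd
```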
AQM (5): BLUE
Concept
To avoid the drawbacks of RED
Decouples congestion control from the queue length
Uses only loss and link-idle events as congestion indicators
Maintains a single drop probability, p_m
Algorithm
Upon packet loss:
    if (now − last_update > freeze_time) { p_m = p_m + d1; last_update = now }
Upon link idle:
    if (now − last_update > freeze_time) { p_m = p_m − d2; last_update = now }
Drawback
Cannot avoid some degree of multiple packet loss and/or low utilization
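A minimal sketch of the BLUE update rule above; the values of d1, d2, and freeze_time are illustrative assumptions, not the settings from the BLUE paper.

```python
class Blue:
    def __init__(self, d1=0.02, d2=0.002, freeze_time=0.1):
        self.pm = 0.0                  # single marking/drop probability
        self.d1, self.d2 = d1, d2      # increment on loss, decrement on idle
        self.freeze_time = freeze_time
        self.last_update = 0.0

    def on_packet_loss(self, now):
        # queue overflowed: become more aggressive
        if now - self.last_update > self.freeze_time:
            self.pm = min(1.0, self.pm + self.d1)
            self.last_update = now

    def on_link_idle(self, now):
        # link underutilized: become less aggressive
        if now - self.last_update > self.freeze_time:
            self.pm = max(0.0, self.pm - self.d2)
            self.last_update = now
```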
AQM (6): SRED (Stabilized RED)
Concept
Stabilize the queue occupancy
Use the actual (instantaneous) queue length
Penalize misbehaving flows
Algorithm
The i-th arriving packet is compared with a randomly selected entry of the zombie list
    Hit = 1 if they are from the same flow, 0 if not
    P(i) = hit frequency = (1 − α) P(i−1) + α · Hit
    P(i)^-1 is an estimator of the number of active flows
Packet drop probability (B = buffer size, q = queue length):
    p_sred = p_max          if (1/3)B ≤ q < B
    p_sred = (1/4) p_max    if (1/6)B ≤ q < (1/3)B
    p_sred = 0              if q < (1/6)B
    P_zap = p_sred · min(1, 1 / (256 × P(i))^2)
Drawbacks
P(i)^-1 is not a good estimator for heterogeneous traffic
Parameter tuning problem: p_sred, P_zap, etc.
Stabilizes the queue occupancy when the traffic load is high; what happens when the load is low?
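A rough sketch of the SRED hit-frequency estimate and drop probability above; the zombie-list size, α, and the entry-overwrite probability are illustrative assumptions.

```python
import random

class Sred:
    def __init__(self, buffer_size, pmax=0.15, alpha=1.0 / 1000, zombie_size=1000):
        self.B = buffer_size
        self.pmax, self.alpha = pmax, alpha
        self.zombies = []                 # recently seen flow identifiers
        self.zombie_size = zombie_size
        self.p_hit = 0.0                  # P(i): EWMA hit frequency

    def on_arrival(self, flow_id, qlen):
        # compare the arrival with a randomly chosen zombie-list entry
        hit = 0
        if self.zombies:
            hit = 1 if random.choice(self.zombies) == flow_id else 0
        self.p_hit = (1 - self.alpha) * self.p_hit + self.alpha * hit
        if len(self.zombies) < self.zombie_size:
            self.zombies.append(flow_id)
        elif random.random() < 0.25:      # overwrite probability (illustrative)
            self.zombies[random.randrange(self.zombie_size)] = flow_id

        # p_sred depends on the instantaneous queue length q
        if qlen < self.B / 6:
            p_sred = 0.0
        elif qlen < self.B / 3:
            p_sred = self.pmax / 4
        else:
            p_sred = self.pmax

        # P_zap = p_sred * min(1, 1 / (256 * P(i))^2)
        p_zap = 0.0
        if self.p_hit > 0:
            p_zap = p_sred * min(1.0, 1.0 / (256 * self.p_hit) ** 2)
        return random.random() < p_zap
```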
AQM (7): ARED (Adaptive RED)
Adapts the aggressiveness of RED according to changes in the traffic load
Adapts max_p based on the queue behavior
Operation
Increase max_p when avgQ crosses above max_th
Decrease max_p when avgQ crosses below min_th
Freeze max_p after a change to prevent oscillation
More about AQM
Responsive (TCP) vs. unresponsive (UDP) flows
RED fails to regulate unresponsive flows
UDP does not adjust its sending rate upon receiving a congestion signal
UDP flows consume more bandwidth than their fair share
(See the TCP-friendly website: http://www.psc.edu/networking/tcp_friendly.html)
FRED [Lin & Morris, 1997]
Tracks the number of packets in the queue from each flow
Maintains logical queues for each active flow within one FIFO queue
The fair share for a flow is calculated dynamically
Unresponsive flows are identified and penalized
Drops packets in proportion to bandwidth usage
More about AQM (2)
Supporting QoS and DiffServ with AQM
Try to support a multitude of transport protocols (TCP, UDP, etc.)
Classify several types of services rather than one best-effort service
Then apply different AQM control to each service class
Examples:
RIO (RED In and Out) [Clark 1998]
CBT (Class-Based Thresholds) [Floyd 1995]
More about AQM (3)
RIO (RED In and Out) [Clark 1998]
Separates flows into two classes: IN-profile and OUT-of-profile
The router maintains separate statistics for each service profile
Different parameters and average queue lengths
Averages: for IN packets, avg_IN; for the OUT profile, avg_TOTAL
When congested, apply different control to each class
[Figure: drop probability vs. average queue length — the OUT profile uses lower thresholds (Min_th_OUT, Max_th_OUT = Min_th_IN) and a higher P_max_OUT than the IN profile (Max_th_IN, P_max_IN)]
More about AQM (4)
CBT [Floyd 1995]
Packets are classified into several classes
Maintains a single queue but allocates a fraction of the capacity to each class
Applies AQM (RED)-based control to each class
Once a class occupies its capacity, all its arriving packets are discarded
Drawbacks
Fairness problem in case of a changing traffic mix (static threshold setting)
Total utilization can fluctuate
Dynamic-CBT [Chung 2000]
Tracks the number of active flows in each class
Dynamically adjusts the threshold values of each class
III. Control-Theoretic Design of AQM
Internet Congestion Control
Active Queue Management (AQM)
Control-Theoretic design of AQM
Performance Evaluation
Summary and Issues for Further study
References
III. Control-Theoretic Design of AQM
Analytic modeling and analysis of
TCP/AQM
Control-theoretic Modeling and Analysis
PID-controller
PAQM
Analytic Modeling of TCP/AQM
Goals of analytic modeling
See the steady-state system dynamics
Capture the main factors that influence performance
Provide recommendations for design and operation
Two approaches for TCP congestion control
Modeling steady-state TCP behavior
• the square-root law*, PFTK [Padhye et al., 1998]
• assumes TD queue management at the router
Mathematical modeling and analysis of AQM (RED)
*: T = c / (RTT · sqrt(p)), where T is the throughput and p is a constant drop rate
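A quick numerical illustration of the square-root law with assumed values (RTT = 100 ms, p = 0.01, and the commonly quoted constant c ≈ 1.22):

```latex
T \;=\; \frac{c}{RTT\,\sqrt{p}} \;=\; \frac{1.22}{0.1 \times \sqrt{0.01}} \;=\; 122 \ \text{packets/sec}
```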
A Feedback Control System
[Figure: block diagram — the reference r and the output y(t) enter a summing junction (+/−) producing the error e(t), which drives the Controller; the controller output u(t) drives the Plant, whose output y(t) is fed back]
Controller: generates the control signal u(t) to make the plant output track the desired value
Plant: the component whose output is controlled
Signals:
r: a reference input (the desired value of y(t))
y(t): plant output at time t
e(t): control error at time t
u(t): control signal from the controller to the plant
Feedback Control and TCP/AQM
AQM-based TCP congestion control can be modeled as a feedback control system
[Figure: block diagram — the reference queue length Qref and the measured queue length Q form the error; the AQM (controller) produces the drop probability pd; the plant consists of the TCP sources, the network delay, and the queue, whose arrival rate λ determines Q, which is fed back]
Control-Theoretic Modeling of TCP/AQM
TCP flow dynamics (P(s))
Misra et al. developed nonlinear stochastic differential equations for the TCP/AQM flow dynamics [Misra 2000]
Assumes a fixed number of persistent TCP flows (e.g., FTP)
Ignores the slow-start and timeout mechanisms
The packet loss process of each flow is assumed to be Poisson
Hollot et al. linearized and simplified Misra's model:

    P(s) = (C^2 / 2N) / [ (s + 2N / (R0^2 C)) (s + 1/R0) ]

where N = number of flows (load factor), R0 = round-trip time, C = link capacity

TCP flow dynamics
For a given network configuration (used in [Hollot 2001b]):
N = 60, R0 = 246 ms, C = 3750 packets/sec
Open-loop transfer function:
    P(s) = 117187 / [ (s + 0.5258)(s + 4.05) ]
Closed-loop transfer function:
    P_C(s) = P(s) / (1 + P(s))
MATLAB simulation shows severe oscillation of the output signal (i.e., the queue length)
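The oscillation can be checked numerically; the short SciPy sketch below (used here in place of MATLAB) builds the closed-loop transfer function from the numbers quoted above and computes its step response.

```python
import numpy as np
from scipy import signal

# Plant quoted on the slide: P(s) = 117187 / ((s + 0.5258)(s + 4.05))
num = [117187.0]
den = np.polymul([1.0, 0.5258], [1.0, 4.05])

# Closed loop P_C(s) = P(s) / (1 + P(s)): same numerator, denominator den + num
closed = signal.TransferFunction(num, np.polyadd(den, num))

t, y = signal.step(closed, T=np.linspace(0.0, 5.0, 5000))
# Peak is nearly twice the final value: the damping ratio is only about 0.007,
# hence the severely oscillatory queue length noted on the slide.
print(round(float(y.max()), 2))
```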
Random Early Detection (RED) as a controller
RED attempts to eliminate the steady-state error by introducing the EWMA queue length as an integral (I) control
Problems:
• Uses a range of reference inputs, [min_th, max_th], which causes oscillatory queue length dynamics
• Introduces a very small weighting factor (w_Q = 1/512 ≈ 0.002) for the current queue length, which causes a sluggish response to traffic variation
RED shows poor control performance over a wide range of traffic environments
Proportional-Integral (PI) Controller
Goal: overcome the drawbacks of RED
Introduce a constant desired queue length (Qref) as the reference input
Use the instantaneous queue length to speed up the response
PI controller:
    C(s) = K_PI (s/z + 1) / s
Becomes a type-1 system (i.e., there is a pole at s = 0):
Can remove the steady-state error
Open-loop transfer function:
    P_F(s) = C(s) P(s) = [ K_PI (C^2 / 2N) (s/z + 1) ] / [ s (s + 2N / (R0^2 C)) (s + 1/R0) ]
PI-controller (2)
Design rules:
The controller zero is placed at the TCP pole: z = 2N / (R0^2 C)
The gain is set for unity loop gain at the crossover frequency:
    K_PI = ω_g |1 + jω_g / ω_Queue| / [ (R0 C)^3 / (2N)^2 ],   with ω_Queue = 1/R0
where ω_g = 2πf is the unity-gain crossover frequency (in rad/s) and K_PI is the PI-control gain
Digitized PI-controller for implementation
Digitize using Tustin's (trapezoidal) method [Franklin, 1995]
Control equation:
    p(kT) = a·δq(kT) + b·δq((k−1)T) + p((k−1)T)
where T = 1/f is the sampling time interval, δq = Q − Qref is the deviation of the queue length from Qref, and a, b are constants
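A minimal sketch of the digitized PI control equation above; q_ref, a, b, and T are left as inputs (in practice they come from the design rules and the Tustin discretization), and clamping p to [0, 1] is an added assumption.

```python
class PiAqm:
    def __init__(self, q_ref, a, b, T):
        self.q_ref, self.a, self.b, self.T = q_ref, a, b, T
        self.p = 0.0           # current drop probability p(kT)
        self.dq_prev = 0.0     # delta-q at the previous sampling instant

    def sample(self, qlen):
        """Called once every T seconds with the instantaneous queue length."""
        dq = qlen - self.q_ref
        # p(kT) = a*dq(kT) + b*dq((k-1)T) + p((k-1)T)
        self.p = self.a * dq + self.b * self.dq_prev + self.p
        self.p = min(max(self.p, 0.0), 1.0)   # keep a valid probability
        self.dq_prev = dq
        return self.p
```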
Limitations of existing AQM proposals
Congestion should be handled proactively, before it becomes a problem, not reactively after the network is loaded
Requirements for an AQM
Congestion indicator
• should detect congestion proactively based on incipient congestion, not reactively based on current congestion alone
Control function
• should be a function of incipient as well as current congestion
• Example: p_d = f(Q_t, Q̂_{t+1}), where p_d is the packet drop probability and Q_t and Q̂_{t+1} are the current and incipient (predicted) queue lengths
Limitations of existing AQM proposals (2)
Problems
Most AQM algorithms show satisfactory performance only under certain traffic environments
Designed with limited assumptions about the traffic environment (for example, the existence of only persistent FTP-like flows)
Designed to achieve long-term performance, such as steady-state control performance
Mismatch between the long-term and short-term behavior of the queue length
Insensitive to dynamically changing traffic load: sluggish response
As a result, the configuration problem has been a main issue in the design of AQM algorithms, especially for RED
Proportional-Integral-Derivative (PID) Controller
Design goals
Regulate the queue length around a desired level (Qref)
Provide fast response to changing traffic with minimum fluctuation
Provide satisfactory long-term performance
These can be achieved using PID feedback control
PID control generates a control signal that is a combination of:
the current error: present (P)
the integral of previous errors: past (I)
the rate of change of the current error: future (D)
PID control
Control signal (time domain):
    u(t) = K_P e(t) + (1/T_I) ∫ e(τ) dτ + K_D de(t)/dt
In the s-domain:
    U(s) = [ K_P + 1/(T_I s) + K_D s ] E(s)
where
K_P: proportional gain
T_I = 1/K_I: integral time (K_I is the integral gain)
K_D: derivative gain
PID-controller (2)
Uses the TCP flow dynamics (P(s)) developed in [Hollot 2001b] as the plant model
Two-step design procedure [Kuo]
Step 1: achieve transient (short-term) performance, such as fast response with minimum overshoot
Step 2: achieve steady-state (long-term) performance
PID-controller (3): Design Procedure
    D(s) = K_P + K_D s + K_I/s = (K_P1 + K_D1 s)(K_P2 + K_I2/s)
where K_P = K_P1 K_P2 + K_D1 K_I2, K_D = K_D1 K_P2, and K_I = K_P1 K_I2
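Multiplying out the PD and PI factors confirms the gain relations just stated:

```latex
(K_{P1} + K_{D1}s)\Bigl(K_{P2} + \frac{K_{I2}}{s}\Bigr)
 \;=\; \underbrace{(K_{P1}K_{P2} + K_{D1}K_{I2})}_{K_P}
 \;+\; \underbrace{K_{D1}K_{P2}}_{K_D}\,s
 \;+\; \frac{\overbrace{K_{P1}K_{I2}}^{K_I}}{s}
```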
Digitize by emulation of a continuous model for implementation
PID-controller (4): Design Procedure
[Figure (a): a PID control system — r(t) and y(t) form the error e(t), which drives the PID controller K_P + K_D s + K_I/s; its output u(t) drives the Plant]
[Figure (b): an equivalent PD-PI series control system — the PD-control part (K_P1 + K_D1 s) in series with the PI-control part (K_P2 + K_I2/s) drives the Plant]
PID-controller (5): Step response (MATLAB results)
[Figure: step responses of the TCP flow dynamics alone and with the PID-controller (designed for 5% target overshoot)]
Pro-Active Queue Management (PAQM)
Design goals
Takes advantage of PID feedback control, similar to the PID-controller
Achieves both transient and steady-state control performance
Does not rely on any assumptions about the plant model; only controls the input traffic
Design concept
Designed by the direct digital design method for direct implementation (whereas the PID-controller uses emulation of a continuous design)
PAQM (2)
A digitized PID control:
    u_k = K_P [ e_k + (T_s/T_I) Σ_{i=0}^{k−1} e_i + (K_D'/T_s)(e_k − e_{k−1}) ]
where u_k is the control signal at time instant k = t/T_s = 0, 1, 2, ...,
T_s is the unit sampling time interval, and
e_k = Q_k − Q_ref is the error term at time k
Apply a velocity PID control
The output of PAQM at time k is the desired change of the control signal:
    Δu_k = u_k − u_{k−1}
Design in two parts
PI control part: to control current congestion
D control part: to avoid incipient congestion
PAQM controller
[Figure: PAQM controller block diagram — Qref and Qk form the error ek, which feeds parallel PI-control and D-control blocks; their combined output is the drop probability Pd applied to the plant P(s)]
PAQM (3)
PI control equation
p_d is adjusted in proportion to the amount of slack/surplus traffic:
    p_d(k) = p_d(k−1) + α [ (Q_k + Q_{k−1})/2 − Q_ref ]
PAQM (4)
D control equation
p_d is adjusted in proportion to the amount of predicted slack/surplus traffic:
    p_d(k) = p_d(k−1) + α [ Q̂_{k+1} − Q_ref ]
where the predicted queue length for time k+1 is Q̂_{k+1} = Q_k + γ̂_{k+1}, and
γ̂_{k+1} is the predicted amount of slack/surplus traffic for time k+1
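A sketch of the PAQM updates above. How the PI and D parts are combined into a single drop probability, and the one-step predictor Q̂_{k+1} = Q_k + (Q_k − Q_{k−1}) standing in for γ̂_{k+1}, are simplifying assumptions made here for illustration only; the α default mirrors the simulation-setup value.

```python
class Paqm:
    """Illustrative PAQM controller: PI part on the averaged current queue,
    D part on a predicted queue length (one-step linear extrapolation)."""

    def __init__(self, q_ref, alpha=8.0e-4):
        self.q_ref, self.alpha = q_ref, alpha
        self.pd = 0.0          # drop probability p_d
        self.q_prev = 0.0      # Q_{k-1}

    def sample(self, qlen):
        # PI part: p_d(k) = p_d(k-1) + alpha * ((Q_k + Q_{k-1})/2 - Q_ref)
        self.pd += self.alpha * ((qlen + self.q_prev) / 2.0 - self.q_ref)
        # D part: p_d(k) = p_d(k-1) + alpha * (Q_hat_{k+1} - Q_ref),
        # with the assumed predictor Q_hat_{k+1} = Q_k + (Q_k - Q_{k-1})
        q_hat = qlen + (qlen - self.q_prev)
        self.pd += self.alpha * (q_hat - self.q_ref)
        self.pd = min(max(self.pd, 0.0), 1.0)   # keep a valid probability
        self.q_prev = qlen
        return self.pd
```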
IV. Performance Evaluation
Internet Congestion Control
Active Queue Management (AQM)
Control-Theoretic design of AQM
Performance Evaluation
Summary and Issues for Further study
References
Performance Evaluation
Simulation Setup
Simulation Studies
Simulation I: Sensitivity to traffic load (N)
Queue length dynamics
Packet loss dynamics
Simulation II: Sensitivity to RTT (longer RTT)
Queue length dynamics
Simulation Setup
Network configuration
[Figure: dumbbell topology — source groups (src 1-3, 4-6, 7-9) attach to router nc0 over 50 Mbps access links with 5/10/15 ms, 20/25/30 ms, and 35/40/45 ms delays; nc0 and nc1 are connected by a 30 Mbps / 10 ms bottleneck link; destination groups (dest 1-3, 4-6, 7-9) attach to nc1 over 50 Mbps links with the corresponding delays]
Simulation Setup (2)
Traffic
1/3 (33%): persistent FTP (elephant) flows
2/3 (66%): short-lived (mice) flows
Parameter settings
PID-controller: K_P = 6.2 × 10^-5, K_D = 5.1 × 10^-5, K_I = 3.12 × 10^-5, T_s = 33.9 ms
PAQM: α = 8.0 × 10^-4, T_s = 50 ms
PI-controller: a = 1.822 × 10^-5, b = 1.816 × 10^-5, T_s = 6.25 ms
RED: w_Q = 0.002, max_p = 0.1, max_th = 200, min_th = 70
Simulation Setup (3)
Network components
Static factors (time-invariant)
Fixed when an AQM is installed
Examples: buffer capacity (B), link capacity (C)
Dynamic factors (time-varying)
Vary dynamically over time
Examples: load factor (number of connections, N), RTT (R)
We examine the sensitivity of the AQM algorithms to the dynamic factors
Load factors: 189 flows & 378 flows
RTT: longer RTT
Simulation Setup (4)
Performance metrics
Queue length
Transient (short-term): instantaneous queue dynamics over time
Steady-state (long-term): use the QACD**
    S_e = mean(e_k^2) = (1/(N+1)) Σ_{i=1}^{N} (Q_i − Q_ref)^2
Packet loss rate
Packet loss dynamics over time
Average packet loss rate / link utilization
**QACD: Quadratic Average of Control Deviation [Isermann, 1989]
QACD is only applicable to the PID-controller, PAQM, and PI-controller (RED has no single reference queue length)
Simulation I: Sensitivity to traffic load
Queue length dynamics under 189 flows
[Figure: four panels showing queue length (packets) vs. time (0-150 s) for PAQM, PID, PI-C, and RED under 189 flows]
Simulation I: Sensitivity to traffic load (2)
Queue length dynamics under 378 flows
[Figure: four panels showing queue length vs. time for PAQM, PID, PI-C, and RED under 378 flows]
Simulation I: Sensitivity to traffic load (3)
Quadratic average of control deviation (QACD): mean (M) and variance (V)

Num. of flows   PID-C (M / V)   PAQM (M / V)   PI-C (M / V)
189             43.5 / 1.9      53.1 / 2.8     76.9 / 4.3
270             42.9 / 1.1      52.3 / 1.4     78.5 / 5.1
378             42.0 / 2.5      51.9 / 2.5     65.1 / 2.1

[Figure: 95% confidence intervals (upper and lower) of the QACD for the PI-controller, PAQM, and PID-controller at traffic loads of 189, 270, and 378 flows]
Simulation I: Sensitivity to traffic load (4)
Packet loss dynamics under 189 flows
[Figure: four panels showing packet loss rate vs. time for PAQM, PID, PI-C, and RED under 189 flows]
Simulation I: Sensitivity to traffic load (5)
Packet loss dynamics under 378 flows
[Figure: four panels showing packet loss rate vs. time for PAQM, PID, PI-C, and RED under 378 flows]
Simulation I: Sensitivity to traffic load (6)
Average packet loss rate / Average link utilization
[Figure: average packet loss rate (left) and average link utilization (right) for PID, PAQM, PI-C, and RED at traffic loads of 189, 270, and 378 flows]
Simulation II: Sensitivity to RTT (1)
Queue length dynamics (under 378 flows)
RTT + 100 ms: from [93.3, 253.3] ms to [193.3, 353.3] ms
[Figure: four panels showing queue length vs. time with the longer RTT for PAQM, PID, PI-C, and RED]
Simulation II: Sensitivity to RTT (2)
Average packet loss rates under different traffic loads
[Figure: packet loss rate with the larger RTT (+100 ms) for PID, PAQM, PIC, and RED at 189, 270, and 378 flows]
V. Summary and Issues for Further
study
Internet Congestion Control
Active Queue Management (AQM)
Control-Theoretic design of AQM
Performance Evaluation
Summary and issues for Further study
References
Summary
AQM-based congestion control should be adaptive to dynamically changing traffic
TCP flow dynamics with TD show severe oscillatory behavior
Existing AQM proposals respond reactively, based on current congestion
Congestion should be detected and controlled proactively, using incipient as well as current congestion
We suggested two requirements for an AQM to be adaptive
The congestion indicator must be able to detect current and incipient congestion effectively
Then, the control function can control or avoid congestion proactively
Summary (2)
We designed two proactive queue management algorithms
Both take advantage of Proportional-Integral-Derivative (PID) feedback control: control based on the current (P), past (I), and future (D) error
The PID-controller was designed analytically based on the TCP flow dynamic model developed in [Hollot, 2001], and implemented by emulation of a continuous model
PAQM was developed using the direct digital design method and does not rely on a plant dynamic model
Control performance of the PID-controller and PAQM
Both improve transient (short-term) and steady-state (long-term) control performance at the same time
They outperform other AQMs such as RED and the PI-controller in terms of queue length dynamics, packet loss rate, and link utilization
They show robust control performance under various traffic environments, such as different traffic loads and RTTs
Alternative control-theoretic design methods
In this study, we used the time-domain response method
Root-locus method:
Used to analyze the influence of the pole locations of an open-loop transfer function on the response characteristics of the closed-loop transfer function
Used to identify the effect of gain variation on control performance
Can be used for steady-state performance and stability
Design by performance indices
For example, the integral of time multiplied by absolute error (ITAE):
    ITAE = ∫_0^∞ t |e(t)| dt
Frequency response method
Issues for Further Study
Design issues on improving the adaptability of AQM
Load-adaptive queue length sampling (T_S)
The queue length sampling interval should be adjusted adaptively to the dynamically changing traffic situation
For example: under light traffic, a longer T_S; under heavy traffic, a shorter T_S
Issues for Further Study (2)
Fair packet drop
Packets should be treated fairly and adaptively to the traffic situation
Issues for Further Study (3)
AQM needs to work with an intelligent source response for better performance
Enhanced-ECN (see the sketch below):
If ECN feedback was received in interval δ(t−1):
• If no ECN feedback in interval δt:
    If received ACKs > 0: W = W + M/W + M
    Else: W = W + M/W
• Else: continue the usual response to ECN feedback
Else: continue normal TCP congestion avoidance
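A literal reading of the Enhanced-ECN rule above as a small function; treating the "usual response to ECN feedback" as halving the window is an assumption here (it is the standard TCP/ECN reaction), and M denotes the segment size.

```python
def enhanced_ecn_window(W, M, ecn_prev_interval, ecn_this_interval, acks_received):
    # Literal transcription of the rule above; W is the congestion window and
    # M the segment size, in the units the slide intends.
    if ecn_prev_interval:                # ECN feedback was received in delta(t-1)
        if not ecn_this_interval:        # ...but no ECN feedback in delta(t)
            if acks_received > 0:
                return W + M / W + M     # grow faster to recover the sending rate
            return W + M / W
        return W / 2.0                   # usual ECN response (assumed: halve W)
    return W + M / W                     # otherwise: normal congestion avoidance
```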
VI. References
Internet Congestion Control
S. Ryu et al., "Advances in Internet congestion control," IEEE Communications Surveys & Tutorials, 2002
"Special Issue on Best-Effort Services," IEEE Network Magazine, May/June 2001
S. Shenker, "Fundamental Design Issues for the Future Internet," IEEE JSAC, vol. 13, no. 7, 1995, pp. 1176-1188
Kelly et al., "Charging and rate control for elastic traffic," European Transactions on Telecommunications, vol. 8, 1997, pp. 33-37
Bertsekas et al., Data Networks, 2nd ed., Prentice Hall, 1992
Mackie-Mason et al., "Pricing Congestible Network Resources," IEEE JSAC, vol. 13, no. 7, 1995, pp. 1141-1149
Low et al., "Optimization flow control I: Basic algorithm and convergence," IEEE/ACM ToN, vol. 7, no. 6, 1999, pp. 861-874
References (2): AQM / RED / ECN
S. Floyd et al., "Random early detection gateways for congestion avoidance," IEEE/ACM ToN, 1993
RED web page: http://www.aciri.org/floyd/red.html
RED for dummies: http://www.magma.ca/~terrim/RedLit.htm
B. Braden et al., "Recommendations on queue management and congestion avoidance in the Internet," IETF RFC 2309, 1998
M. Christiansen et al., "Tuning RED for web traffic," IEEE/ACM ToN, vol. 9, no. 3, 2001, pp. 249-264
S. Floyd, "TCP and Explicit Congestion Notification," ACM Computer Communication Review, vol. 24, no. 5, 1994, pp. 10-23
Ramakrishnan et al., "A Proposal to add Explicit Congestion Notification (ECN) to IP," IETF RFC 2481, January 1999 (old version)
Ramakrishnan et al., "The Addition of Explicit Congestion Notification (ECN) to IP," IETF RFC 3168, Proposed Standard, September 2001
ECN web page: http://www.icir.org/floyd/ecn.html
References (3): Control-Theoretic Approaches
Hollot et al., "A Control-Theoretic Analysis of RED," Proc. of IEEE INFOCOM 2001
Hollot et al., "On designing improved controllers for AQM routers supporting TCP flows," Proc. of IEEE INFOCOM 2001
Misra et al., "Fluid-based analysis of a network of AQM routers supporting TCP flows with an application to RED," Proc. of ACM SIGCOMM 2000
S. Ryu et al., "PAQM: Pro-Active Queue Management," Telecommunication Network Design and Economics, Kluwer, Boston, USA, 2002, pp. 247-271
Franklin et al., Feedback Control of Dynamic Systems, 3rd ed., Addison-Wesley, 1995
Kuo et al., Automatic Control Systems, 7th ed., John Wiley & Sons, 1995
K. Astrom et al., PID Controllers: Theory, Design, and Tuning, 2nd ed., Instrument Society of America, 1995