An Improved Hop-by-hop Interest
Shaper for Congestion Control in
Named Data Networking
Yaogong Wang, NCSU
Natalya Rozhnova, UPMC
Ashok Narayanan, Cisco
Dave Oran, Cisco
Injong Rhee, NCSU
© 2013 Cisco and/or its affiliates. All rights reserved.
• Two important factors to consider:
1. Receiver-driven: one interest generates one data packet
2. Symmetric: Content retrieved in response to an interest
traverses the same path in reverse
• Content load forwarded on a link is directly related to interests
previously received on that link
• Given these properties, shaping interests can serve to control
content load and therefore proactively avoid congestion.
• There are multiple schemes that rely on slowing down interests to
achieve congestion avoidance or resolution
• But detecting the congestion in question is not simple, because it
appears on the far side of the link where interests can be slowed
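The shaping idea above can be illustrated with a minimal token-bucket sketch (hypothetical names, not the scheme's actual implementation): releasing interests at a configured rate bounds the content load returning on the reverse path, since one interest generates one data packet.

```python
# Minimal token-bucket interest shaper (illustrative sketch only).
# Admitting interests at `rate` per second bounds the returning
# content load to roughly rate * r, because of the one-interest,
# one-data property of NDN.

class InterestShaper:
    def __init__(self, rate, burst):
        self.rate = rate      # interests per second allowed through
        self.burst = burst    # bucket depth (max burst of interests)
        self.tokens = burst
        self.last = 0.0

    def admit(self, now):
        """Return True if an interest may be forwarded at time `now`."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # would be queued (or NACKed) in the real scheme

# Offer one interest per millisecond for one second against a
# 100-interests/s shaper: only ~100 (plus the burst) get through.
shaper = InterestShaper(rate=100.0, burst=5)
admitted = sum(shaper.admit(now=t * 0.001) for t in range(1000))
```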
• Different schemes have been proposed
• HoBHIS
First successful scheme, demonstrated the feasibility of this method
Slows down interests on the hop after congestion
Relies on backpressure to alleviate congestion
• ICP/HR-ICP
Runs per-flow AIMD scheme to manage outstanding interests
Tracks estimated RTT as a mechanism to rapidly detect congestion & loss
Endpoints control flow requests by shaping interest issue rate
Main congestion control operates end-to-end, some hop-by-hop shaping for
special cases
• Assume constant ratio r of content-size/interest-size
• Simple unidirectional flow with link rate c
• Ingress interest rate of c/r causes egress content rate of c
• If we shape egress interest rate to c/r, remote content queue will
not be overloaded
• Issues with varying content size, size ratio, link rate, etc.
• But the biggest issue is…
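The c/r budgeting above can be checked with a quick numeric sketch (numbers chosen for illustration, matching the 25B-interest/1KB-data baseline used in the evaluation):

```python
# Unidirectional case: shape egress interests to c/r so that the
# content they pull back exactly fills the remote link.
c = 10_000_000          # link rate: 10 Mbps, in bits/s
interest_size = 25 * 8  # 25-byte interest, in bits
data_size = 1000 * 8    # 1 KB data packet, in bits

r = data_size / interest_size       # content/interest size ratio
interest_budget = c / r             # interest bits/s to admit
content_rate = interest_budget * r  # resulting content load equals c
```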
• Interests consume bandwidth
(specifically, c/r in the reverse direction)
• Bidirectional data flow also implies bidirectional interest flow
• Therefore, the reverse path is not available to carry c bandwidth
of data, it also needs to carry some interests
• And similarly, the interests carried in the reverse direction cannot
claim the entire forward path for the data they generate; space must
be left for forward interests as well
• Ordinarily there is no way to predict and therefore account for
interests coming in the other direction, but…
• There is a recursive dependence between the interest shaping
rate in the forward and reverse directions.
• We can formulate a mutual bidirectional optimization as follows:
maximize u(s1) + u(s2)
subject to s1 + r2·s2 ≤ c1 (forward link) and r1·s1 + s2 ≤ c2 (reverse link),
with 0 ≤ s1 ≤ i1 and 0 ≤ s2 ≤ i2
• u(.) is the link utility function
This must be proportionally fair in both directions, to avoid starvation
We propose log(s) as the utility function
• s1, s2 = interest shaping rates in the forward and reverse directions
• i1 = received forward interest load
• i2 = received reverse interest load
• c1 = forward link bandwidth
• c2 = reverse link bandwidth
• r1 = ratio of received content size to sent interest size
• r2 = ratio of sent content size to received interest size
• Feasible region is convex
• First solve for infinite load in both directions
• Optimal solutions at the Lagrange points marked with X
• If the Lagrange points do not lie within the feasible region (the
most common case), convert to equality constraints and solve
The resulting shaping rates, by case:

If c1/c2 < 2r2/(r1r2 + 1):                     s1 = c1/2,                    s2 = c1/(2r2)
If 2r2/(r1r2 + 1) ≤ c1/c2 ≤ (r1r2 + 1)/(2r1):  s1 = (r2c2 − c1)/(r1r2 − 1),  s2 = (r1c1 − c2)/(r1r2 − 1)
If c1/c2 > (r1r2 + 1)/(2r1):                   s1 = c2/(2r1),                s2 = c2/2
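The case analysis can be sketched directly in code (an illustrative transcription of the closed-form solution; the numbers in the usage example are hypothetical):

```python
def optimal_rates(c1, c2, r1, r2):
    """Optimal interest shaping rates (s1, s2) under unbounded load,
    maximizing log(s1) + log(s2) subject to the link constraints
    s1 + r2*s2 <= c1 and r1*s1 + s2 <= c2."""
    ratio = c1 / c2
    if ratio < 2 * r2 / (r1 * r2 + 1):
        # only the forward link constraint binds
        return c1 / 2, c1 / (2 * r2)
    if ratio > (r1 * r2 + 1) / (2 * r1):
        # only the reverse link constraint binds
        return c2 / (2 * r1), c2 / 2
    # both constraints bind (the most common case)
    return ((r2 * c2 - c1) / (r1 * r2 - 1),
            (r1 * c1 - c2) / (r1 * r2 - 1))

# Symmetric 10 Mbps links with a 40:1 content/interest size ratio
# land in the middle case, giving equal rates in both directions.
s1, s2 = optimal_rates(10e6, 10e6, 40, 40)
```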
• Optimal shaping rate assumes unbounded load in both directions
We can’t model instantaneously varying load in a closed-form solution
• If one direction is underloaded, fewer interests need to travel in the
reverse direction to generate the lower load
• As a result, the local shaping algorithm need not leave as much space
for interests in the reverse direction
Extreme case: unidirectional traffic flow
• Actual shaping rate needs to vary between two extremes depending on
actual load in the reverse path
max_s1 = c2/r1
min_s1 = (r2c2 − c1)/(r1r2 − 1)
• BUT, we don’t want to rely on signaling reverse path load
• We observe that each side can independently compute both
expected shaping rates
• Our algorithm observes the incoming interest rate, compares it to
the expected incoming interest shaping rate, and adjusts our
outgoing interest rate between these two extremes
s1 = min_s1 + (max_s1 − min_s1)·(1 − obs_s2/exp_min_s2)^2
where obs_s2 is the observed incoming interest rate and exp_min_s2 is
the expected incoming interest shaping rate under full reverse load
• On the router, interests and contents are separated in output
queues. Interests are shaped as per the equation above, and
contents flow directly to the output queue.
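Putting the pieces together, the local adjustment could be sketched as follows (a simplified illustration of the interpolation above, not the authors' actual router code):

```python
def shaping_rate(c1, c2, r1, r2, obs_s2):
    """Forward interest shaping rate, interpolated between the
    unidirectional maximum and the fully bidirectional minimum,
    driven only by the locally observed reverse interest rate."""
    max_s1 = c2 / r1                              # reverse path all data
    min_s1 = (r2 * c2 - c1) / (r1 * r2 - 1)       # full load both ways
    exp_min_s2 = (r1 * c1 - c2) / (r1 * r2 - 1)   # expected reverse rate at full load
    x = min(obs_s2 / exp_min_s2, 1.0)             # fraction of expected reverse load
    return min_s1 + (max_s1 - min_s1) * (1 - x) ** 2

# No reverse interests observed: the shaper opens up to c2/r1.
idle = shaping_rate(10e6, 10e6, 40, 40, obs_s2=0.0)
# Full expected reverse load: the shaper falls back to min_s1.
busy = shaping_rate(10e6, 10e6, 40, 40, obs_s2=10e6 / 41)
```

The quadratic term makes the shaper conservative: it gives back bandwidth slowly as the observed reverse load falls, rather than oscillating between the two extremes.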
• When an interest cannot be enqueued into the interest shaper
queue, it is rejected
• Instead of dropping it, we return it to the downstream hop in the
form of a “Congestion-NACK”
• This NACK is forwarded back towards the client in place of the
requested content
Consuming the PIT entries along the way
• Note that the bandwidth consumed by this NACK has already
been accounted for by the interest that caused it to be generated
Therefore, in our scheme Congestion-NACKs cannot exacerbate congestion
• Clients or other nodes can react to these signals
• In our current simulations, clients implement simple AIMD window
control, with the NACK used to cause decrease
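The client behavior described above might look like this in outline (a hypothetical sketch of AIMD window control driven by Congestion-NACKs; the deck does not show the simulations' actual client code):

```python
class AimdClient:
    """Interest window control: additive increase per Data packet,
    multiplicative decrease per Congestion-NACK (illustrative sketch)."""

    def __init__(self, cwnd=1.0, min_cwnd=1.0):
        self.cwnd = cwnd          # current interest window
        self.min_cwnd = min_cwnd  # never shrink below one interest

    def on_data(self):
        # additive increase: roughly one extra interest per window of Data
        self.cwnd += 1.0 / self.cwnd

    def on_congestion_nack(self):
        # multiplicative decrease, as on loss -- but the NACK arrives
        # reliably and early, instead of being inferred from a timeout
        self.cwnd = max(self.min_cwnd, self.cwnd / 2)

client = AimdClient(cwnd=8.0)
client.on_congestion_nack()   # window halves
for _ in range(4):
    client.on_data()          # then creeps back up additively
```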
Scenario                                 Data throughput (Mbps)       Data loss (%)   Interest rejection rate (%)
                                         Client 1      Client 2      R1   R2        Client 1       Client 2
Baseline (25B Interest, 1KB Data)        9.558±0.001   9.559±0.002    0    0        0.015±0.0006   0.015±0.0011
Varying pkt size (Data from 600-1400B)   9.432±0.005   9.434±0.008    0    0        0.018±0.0014   0.017±0.0015
Asymmetric data size (1000B/500B)        9.373±0.014   9.326±0.001    0    0        0.007±0.0006   0.016±0.0006
Asymmetric bandwidth (10 Mbps/1 Mbps)    9.774±0.001   0.719±0.001    0    0        0.012±0.0005   0.058±0.0000
Scenario                                   Data throughput (Mbps)      Data loss (%)   Interest rejection rate (%)
                                           C1-S3 flow    C2-S4 flow   R1   R2        C1-S3 flow     C2-S4 flow
Homogeneous RTT                            5.142±0.5     4.692±0.5     0    0        0.515±0.011    0.063±0.013
Heterogeneous RTT (R2-S4 link now 20ms)    5.209±0.38    4.624±0.38    0    0        0.513±0.009    0.042±0.007
Flipped data flows                         9.566±0.001   9.419±0.007   0    0        0.148±0.0004   0.012±0.0005
(Client 1-Server 3, Client 4-Server 2)
• Queue depth on bottleneck queues is small
1 packet for homogeneous RTT case
Varies slightly more in heterogeneous RTT case, but is quite low (<17 packets)
• Client window evolution is quite fair
• Optimally handles interest shaping for bidirectional traffic
• No signaling or message exchange required between routers
Corollary: no trust required between peers
• No requirement of flow identification by intermediaries
• Fair and effective bandwidth allocation on highly asymmetric links
• Congestion NACKs offer a timely and reliable congestion signal
• Congestion is detected downstream of the bottleneck link
• Use congestion detection and/or NACKs to offer dynamic reroute
and multi-path load balancing
• Use NACKs as a backpressure mechanism in the network to handle
uncooperative clients
• Investigate the shaper under different router AQM schemes (e.g.
RED, CoDel, PIE) and client implementations (e.g. CUBIC)