Caching in Information-Centric
Networking
Sen Wang, Jun Bi, Zhaogeng Li, Xu Yang, Jianping Wu
Network Research Center, Tsinghua University
AsiaFI 2011 Summer School
Aug 8th, 2011
Introduction
• ICN is becoming an important direction of Future Internet
Architecture research
– PSIRP, NetInf, PURSUIT, CCN, DONA and NDN
• In-network caching is considered one of the most significant
properties of ICN
• But so far, caching policies for ICN remain little explored
Basic Communication and Caching
Paradigm
• Requests are routed toward the original
content objects by name-based routing
• Any router along the routing path that has
the content object in its cache can respond
to the request
• While forwarding a content, an
intermediate router can choose to
cache the content according to its
own caching policy.
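The paradigm above can be sketched in a few lines of Python. This is an illustrative model, not code from the work: the `Router` class, its dict-based cache, and the cache-everything policy are all assumptions for the example.

```python
# Sketch of the basic ICN communication and caching paradigm (illustrative
# assumptions: dict-based cache, single next hop, cache-everything policy).
class Router:
    def __init__(self, name, next_hop=None):
        self.name = name
        self.cache = {}           # content name -> content object
        self.next_hop = next_hop  # next router on the name-routing path

    def handle_request(self, content_name):
        # A router holding the object in its cache responds directly.
        if content_name in self.cache:
            return self.cache[content_name], self.name
        # Otherwise forward the request toward the original content.
        data, responder = self.next_hop.handle_request(content_name)
        # While forwarding the content back, the router may cache it
        # according to its own caching policy (here: always cache).
        self.cache[content_name] = data
        return data, responder
```

For example, with a source router holding `"video"` and an edge router in front of it, the first request is answered by the source and the second by the edge cache.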
Problem Statement
• Consider the static case
• Given an ICN network
– the request rate from each router to each content
– the storage capacity of each router
– a set of initial contents and an assignment of the resident
routers of these contents
• Optimization objective
– find a feasible assignment of caching copies of each
content to routers in order to minimize the overall
resource consumption of the network.
MILP Formulation
• Objective function: minimizing the overall average hops
– \min \sum_{v \in V} \sum_{c \in C} q_{v,c} \, m_c \cdot \min\big(\{\, i : \delta_{c,\, g(p_{v,l_c},\, i)} \neq 0,\ 0 \le i < u(p_{v,l_c}) \,\} \cup \{ u(p_{v,l_c}) \}\big)   (1)
• Capacity constraints
– \sum_{c \in C} \delta_{c,v} \, m_c \le b_v, \quad \forall v \in V   (2)
• Introduce extra variables \mu_{v,c} and \sigma_{v,c,i} to linearize the objective function:
– Objective function: \min \sum_{v \in V} \sum_{c \in C} q_{v,c} \, m_c \, \mu_{v,c}   (1)
– \sum_{c \in C} \delta_{c,v} \, m_c \le b_v, \quad \forall v \in V   (2)
– \sum_{i=0}^{u(p_{v,l_c})} \sigma_{v,c,i} = 1, \quad \forall v \in V, \forall c \in C   (3)
– \delta_{c,\, g(p_{v,l_c},\, i)} \ge \sigma_{v,c,i}, \quad \forall v \in V, \forall c \in C,\ 0 \le i < u(p_{v,l_c})   (4)
– \mu_{v,c} \ge \delta_{c,\, g(p_{v,l_c},\, i)} \cdot i - \big(\max_{v \in V, c \in C} u(p_{v,l_c})\big)\,\big(1 - \sigma_{v,c,i}\big), \quad \forall v \in V, \forall c \in C,\ 0 \le i < u(p_{v,l_c})   (5)
– \mu_{v,c} \ge u(p_{v,l_c}) - \big(\max_{v \in V, c \in C} u(p_{v,l_c})\big)\,\big(1 - \sigma_{v,c,\, u(p_{v,l_c})}\big), \quad \forall v \in V, \forall c \in C   (6)
– \delta_{c,v},\ \sigma_{v,c,i} \in \{0, 1\}   (7)
• Here \delta_{c,v} indicates whether node v caches a copy of content c,
p_{v,l_c} is the routing path from v to the resident router l_c of
content c, u(p) is the hop length of path p, and g(p, i) is the i-th
node on p.
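The objective (request rate times content size times hops to the nearest copy) and the capacity constraint can be checked by brute force on a toy instance. This is an illustrative sketch, not the solver used in the work; the instance, the simplification that node ids on the path double as hop distances, and the one-copy-per-content assumption are all ours.

```python
# Brute-force the optimal cache assignment on a toy linear path 0-1-2-3:
# node 3 is the resident router of both contents, node 0 is the requester.
from itertools import product

contents = {"a": 10.0, "b": 1.0}   # request rate q_{0,c} from node 0
m = {"a": 1, "b": 1}               # content sizes m_c
cap = {0: 1, 1: 1, 2: 1}           # cache capacities b_v of routers 0..2

def cost(assign):
    # assign: content -> set of caching nodes (node 3 always has a copy);
    # on this path the node id equals the hop distance from node 0.
    total = 0.0
    for c, q in contents.items():
        holders = assign[c] | {3}
        total += q * m[c] * min(holders)  # hops to the nearest copy
    return total

best = None
for cached in product([frozenset(), {0}, {1}, {2}], repeat=2):
    assign = dict(zip(contents, map(set, cached)))
    # capacity constraint: each router caches at most cap[v] contents
    load = {v: sum(v in s for s in assign.values()) for v in cap}
    if all(load[v] <= cap[v] for v in cap):
        if best is None or cost(assign) < cost(best):
            best = assign
print(best, cost(best))
```

The hot content `a` ends up cached at the requester-side node 0 and the cold content `b` one hop behind it, matching the intuition that hotter contents sit closer to the requester.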
Theoretical Analysis
• Theorem 1: If an assignment A is among the
optimal cache assignments, then for any content
cached in a node n, its request rate is not
smaller than that of any content cached in the
nodes of the subtree rooted at n.
• Algorithm 1 greedily caches the contents
with the maximum request rates, from the root
node of the spanning tree down to the leaf nodes
– The resulting content assignment satisfies
Theorem 1
– But it is not optimal
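The greedy idea can be sketched as follows. The function name, data layout, and the rule "each node caches the highest-rate contents no ancestor already holds" are our interpretation of Algorithm 1, not a transcription of it.

```python
# Sketch of the greedy assignment (interpretation of Algorithm 1): walk the
# spanning tree from the root toward the leaves; each node caches the
# highest-rate contents that no ancestor on its path already holds.
def greedy_assign(tree, root, capacity, rates):
    """tree: node -> list of children; capacity: node -> cache slots;
    rates: content -> aggregate request rate."""
    order = sorted(rates, key=rates.get, reverse=True)
    assign = {}

    def walk(node, taken):
        # Ancestors hold the hotter contents, so the resulting assignment
        # satisfies the rate ordering stated in Theorem 1.
        mine = [c for c in order if c not in taken][:capacity[node]]
        assign[node] = mine
        for child in tree.get(node, []):
            walk(child, taken | set(mine))

    walk(root, set())
    return assign
```

On a three-node tree with one slot per node and rates a > b > c, the root caches `a` and each child caches `b`.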
[Figure: example spanning tree with nodes a–g]
Cache Policy
• Statistically, Perfect-LFU can result in the same
caching assignment as the greedy algorithm generates
• From Theorem 1, we therefore expect that in practice
Perfect-LFU is a relatively good sub-optimal caching
policy
• Improving Perfect-LFU by taking the distance into
account
– Adding an additional field, HopNum, to the ICN
protocol
– The resulting policy is named LB (Least Benefit)
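A minimal sketch of the LB idea follows. The exact benefit metric (request frequency weighted by HopNum) and the class layout are assumptions for illustration; the slides only state that distance is taken into account on top of Perfect-LFU statistics.

```python
# Sketch of an LB (Least Benefit) cache (assumed metric: request frequency
# times HopNum, i.e. hops saved by keeping the item; evict the least).
class LBCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.freq = {}    # content -> request count (Perfect-LFU statistics)
        self.hops = {}    # content -> HopNum carried in the ICN header
        self.store = set()

    def benefit(self, c):
        # approximate hops saved per unit time if c stays cached
        return self.freq.get(c, 0) * self.hops.get(c, 0)

    def insert(self, c, hop_num, count=1):
        self.freq[c] = self.freq.get(c, 0) + count
        self.hops[c] = hop_num
        self.store.add(c)
        if len(self.store) > self.capacity:
            # evict the cached content with the least benefit
            self.store.remove(min(self.store, key=self.benefit))
```

Unlike plain LFU, a frequently requested item that is only one hop away can be evicted in favor of a less popular item fetched from far away.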
Intelligent Forwarding
• With the ability to cache, ICN should be
endowed with more intelligent forwarding.
• A slightly more intelligent forwarding
scheme inspired by P2P forwarding
– Forwarding with Shallow Flooding (FSF
for short)
• When a node receives a request, it floods
the request to all its other interfaces up
to a specific flooding depth
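The shallow-flooding lookup can be sketched as a depth-bounded breadth-first search. This is our illustrative reading of FSF; the function name and the fallback-to-name-routing return value are assumptions.

```python
# Sketch of FSF (Forwarding with Shallow Flooding): flood a request to all
# neighbors up to a fixed flooding depth; the nearest cached copy replies.
from collections import deque

def fsf_lookup(graph, caches, start, content, depth):
    """graph: node -> neighbors; caches: node -> set of cached contents.
    Returns (node, hops) for the nearest copy within `depth`, else None."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if content in caches.get(node, set()):
            return node, d
        if d < depth:                 # shallow flooding: bounded depth
            for nb in graph[node]:
                if nb not in seen:
                    seen.add(nb)
                    frontier.append((nb, d + 1))
    return None                       # fall back to plain name routing
```

With depth 0 this degenerates to a local cache check, so name routing is the only fallback; increasing the depth trades extra request copies for a chance to hit a nearby off-path cache.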
Forwarding with One-step Flooding
Evaluation
• Traffic Model for Evaluation
– the Zipf–Mandelbrot (also known as the Pareto–Zipf)
distribution as the traffic model for simulation
– P(i) = Ω / (i + q)^α   (8)
– The two parameters α and q in the equation above are
the shape parameter and the shift parameter
respectively
– Ω is the normalizing constant.
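Equation (8) is straightforward to realize in code; a minimal sketch, assuming a finite catalogue of n contents over which Ω normalizes:

```python
# Zipf-Mandelbrot popularity model of Equation (8):
# P(i) = Omega / (i + q)**alpha, normalized over a catalogue of n contents.
def zipf_mandelbrot(n, alpha, q):
    weights = [1.0 / (i + q) ** alpha for i in range(1, n + 1)]
    omega = 1.0 / sum(weights)       # normalizing constant Omega
    return [omega * w for w in weights]

probs = zipf_mandelbrot(100, alpha=0.7, q=0.7)
```

The shift q flattens the head of the distribution relative to plain Zipf, while α controls how fast popularity decays with rank.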
Evaluation with Simple Linear Topology
• Scenario of single content source:
– One requester, namely node 0
– One content source, namely node 11
[Figure: proportion of all requests vs. hops to get content items, for
PLB, LRU, Random, ILFU, ILB, PLFU, and OPTIMAL, on the simple linear
evaluating topology (nodes 0–11)]
Evaluation with Simple Linear Topology
• We studied the effects of the following three factors on
the proportion of cache misses and the average hops:
– cache size
– the request-pattern parameters q and α
[Figures: effects of cache size, q, and α on the proportion of cache
misses and the average hops, for PLB, LRU, Random, ILFU, ILB, PLFU, and
OPTIMAL]
Evaluation with Simple Linear Topology
• Scenario of multiple requesters and content sources
– All the nodes generate requests at the same request rate
– The content objects are randomly distributed
• InCache-LB gains a 44.2% reduction in average hops compared with the
no-cache case, 2.9% more than InCache-LFU.
• The impact of topology scale, studied by increasing the node number
from 12 to 24:
– The reduction percentage of average hops rises by about 6.6%.
[Figures (a)–(c): average hops vs. values of α, cache sizes, and node
number (12–24), for LRU, Random, ILFU, ILB, and no cache — simulation
results for multiple requesters]
Evaluation with ISP Topology
• A series of simulations using a practical ISP topology is conducted to
evaluate cache policies
– the effects of parameter α and cache size
– With α and the cache size fixed to 0.7 and 55% respectively,
InCache-LB gains a 40.3% reduction in average hops compared with the
no-cache case
– Almost no difference from InCache-LFU
• PoP topology of the ISP with AS No. 1221
[Figures: effects of α and cache size — average hops vs. values of α
(0.5–0.9) and cache sizes (55–220), for LRU, Random, ILFU, ILB, and no
cache]
Evaluation with ISP Topology
• Study the effect of a different ISP topology
– the PoP topology of another ISP, with AS No. 1239
– 78 nodes and 84 edges, larger than AS 1221 with 44 nodes and 44 edges
• Different topologies result in almost no difference in average-hop
reduction: 40.3% and 41.7% for AS 1221 and AS 1239 respectively with
the InCache-LB cache policy.
[Figure: effects of topology — average hops for LRU, Random, ILFU, ILB,
and no cache on AS 1239 and AS 1221]
Evaluation with ISP Topology
• Study the effect of heterogeneous request rates among nodes.
– In the former simulations, every node uses the same mean request
interval
– In this simulation, the request rates of nodes range from 10 per
second down to 1 per second
• With heterogeneous request rates, a larger reduction in average hops
is achieved,
– rising from 49.9% to 57.3%.
[Figure: effects of request-rate distribution — average hops for LRU,
Random, ILFU, ILB, and no cache under homogeneous vs. heterogeneous
request rates]
Evaluation for Intelligent Forwarding
• Evaluate the proposed forwarding scheme, namely FSF, with different
cache policies
• Two series of simulations were conducted
– A 6⨉6 mesh topology
– The PoP topology of AS 1221
• FSF further decreases the average hops by 6.3% with 2-hop flooding
for InCache-LB
• InCache-LB outperforms InCache-LFU more and more clearly as the
flooding depth increases.
[Figures: average hops vs. flood hops (0–2) for NoCache, LRU, ILB,
ILFU, and Random, on the 6⨉6 mesh topology and the PoP topology of
AS 1221]
Conclusion
• The in-network caching problem of ICN can be formulated as a
Mixed-Integer Linear Programming (MILP) problem.
• By studying the properties of the optimal caching, we found that
LFU-like cache policies should perform well, which is confirmed by
our simulation results.
• The proposed cache policy LB (Least Benefit) performs better than
LFU when the proposed forwarding scheme FSF is also involved, and
further reduces the average hops by 6.3%.
• With in-network caching, the average hops of the ICN network can be
reduced significantly, by nearly 50%, and with simple improvements
such as LB and FSF the average hops can be reduced further.
Q&A
Thank you!