ANCS 2012

On Pending Interest Table in
Named Data Networking
Huichen Dai, Bin Liu, Yan Chen*, Yi Wang
Tsinghua University, China
*Northwestern University, USA
Outline
1. Background of Named Data Networking (NDN)
2. Pending Interest Table (PIT)
3. Problem Statement
4. Challenges
5. Measure PIT Size and Access Frequency
6. NCE to Accelerate PIT Access and Size Shrinking
7. Evaluation
8. Conclusion
2/33
Background of NDN (1/3)
• Newly proposed clean-slate network
architecture;
• Embraces Internet’s function transition from
host-to-host communication to content
dissemination;
• Routes and forwards packets by content names;
• Two kinds of packets:
– Interest packet and Data packet.
• Request-driven communication model (pull);
3/33
Background of NDN (2/3)
• Content Store (CS): the router’s buffer
memory that caches Data packets;
• Pending Interest Table (PIT): each entry
records the name of a pending Interest and the
interfaces on which that Interest arrived.
• Forwarding Information Base (FIB): NDN
forwarding table.
4/33
Background of NDN (3/3)
Figure: Interest packet lookup and forwarding process. An arriving Interest is first looked up in the Content Store (CS); if found, the cached Data is returned and the Interest is discarded. Otherwise the PIT is checked: if an entry exists, the incoming interface is added to it and the Interest is discarded; if not, a new PIT entry is created and the Interest is forwarded via the FIB.

Figure: Data packet lookup and forwarding process. An arriving Data packet is looked up in the PIT: if an entry is found, the Data is cached in the CS and sent out; if not found, the Data is discarded.
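A minimal sketch of the two processes above, assuming dict-based CS, PIT and FIB and placeholder send/forward helpers (all names are illustrative, not part of any NDN implementation):

```python
# Minimal sketch of the Interest/Data processing in the two figures above.
# CS, PIT and FIB are simplified to plain dicts; the helpers below are
# illustrative placeholders only.

CS = {}    # content name -> cached Data packet
PIT = {}   # content name -> set of incoming interfaces
FIB = {}   # name prefix  -> outgoing interface

def send_data(data, iface):
    print(f"send Data on interface {iface}: {data}")

def forward_interest(name, iface):
    print(f"forward Interest {name} on interface {iface}")

def fib_match(name):
    # stand-in for longest-prefix match: strip the last component
    return FIB.get(name.rsplit("/", 1)[0])

def on_interest(name, in_iface):
    if name in CS:                      # CS hit: answer directly, discard Interest
        send_data(CS[name], in_iface)
        return
    if name in PIT:                     # already pending: add interface, discard Interest
        PIT[name].add(in_iface)
        return
    PIT[name] = {in_iface}              # create a new PIT entry
    out_iface = fib_match(name)
    if out_iface is not None:           # forward via the FIB
        forward_interest(name, out_iface)

def on_data(name, data):
    if name not in PIT:                 # unsolicited Data: discard
        return
    CS[name] = data                     # cache the Data packet
    for iface in PIT.pop(name):         # satisfy all pending interfaces,
        send_data(data, iface)          # then remove the PIT entry
```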
5/33
Pending Interest Table (1/3)
• A table unique to NDN, with no equivalent in IP.
• Keeps track of Interest packets that have been received but not yet responded to.
• Gives NDN several significant features:
– communication without the knowledge of host
locations;
– loop and packet loss detection;
– multipath routing support; etc.
6/33
Pending Interest Table (2/3)
• Read and write on PIT.
Figure: reads and writes on the PIT. An incoming Interest (e.g., com/google/maps/data/1/) triggers a lookup and, if needed, an insert; the returning Data packet (e.g., com/google/scholar/data/1/) triggers a delete.

Content name          Interface list
com/google/maps       1    (inserted … deleted)
com/google/scholar    3    (inserted … deleted)
……                    ……

A dynamic process!
7/33
Problem Statement (1/2)
• A router inserts every incoming Interest into the PIT, and
removes the corresponding entry when the matching Data packet is received.
• High-speed packet arrival rate leads to a large PIT
size and extremely high access (lookup, insert and
delete) frequency.
• Two major questions about the PIT arise:
– How large does the PIT get?
– How high is its access (lookup, insert and delete) frequency?
8/33
Problem Statement (2/2)
• Senior researchers (Lixia Zhang, Van Jacobson, etc.) have witnessed the Internet's function transition;
• NDN was proposed empirically, based on their experience;
• A gap from experience to practice: can the PIT's demands be met by off-the-shelf technologies?
• No previous study has touched on this feasibility issue.
• No supporting statistics.
9/33
Challenges
1. No NDN deployment;
2. No data;
3. No PIT;
10/33
A fresh look at NDN (1/2)
-- from the application perspective
• NDN better satisfies the requirements of
users:
– browsing web pages,
– watching videos,
– sharing files, etc.
• Users demand the same network applications, but these applications require different implementations in NDN.
• The PIT entries are consumed by application-triggered requests.
11/33
A fresh look at NDN (2/2)
-- PIT entries filled by app-triggered Interests
• A sample PIT:

Content name                                         Interface list
com/google/maps                                      1     (HTTP Interest)
com/google/scholar                                   3     (HTTP Interest)
Avatar                                               2     (P2P Interest)
parc.com/email_service/van/example@gmail.com/123     5     (Email Interest)
……                                                   ……
The applications themselves do not change, so we solve the problems from the application perspective!
12/33
Apps from IP to NDN (1/5)
• Major Internet applications:
– HTTP, FTP, P2P, Email, Online games, Streaming
media, Instant messaging, etc.
• Implement these applications using NDN’s
request-driven communication model.
• Afterwards, examine how these applications
construct and destruct PIT entries.
• HTTP/1.1 as an example.
13/33
Apps from IP to NDN -- HTTP (2/5)
• A request/response protocol;
– A client sends a request over a connection to a server,
– The server responds with a message.
• HTTP/1.1 adopts Persistent Connections:
– HTTP requests and responses can be pipelined on a
connection.
• Simplest Case:
14/33
Apps from IP to NDN -- HTTP(3/5)
• However, the actual case is more complicated.
• For the HTTP Response:
– the HTTP response is divided into multiple Data packets due to the MTU;
– at t0 the corresponding PIT entry is created, and at tn the entry is removed;
– Entry Lifetime: tn − t0.
15/33
Apps from IP to NDN -- HTTP(4/5)
• For HTTP Request:
– HTTP requests can be divided into different
categories, including GET, HEAD, POST, PUT,
DELETE and TRACE;
– GET, HEAD – pull data from server;
– POST, PUT – push data to server;
– DELETE, TRACE – function calls on the server.
• NDN can only pull data from the server.
16/33
Apps from IP to NDN -- HTTP(5/5)
• GET, HEAD – implemented directly by an Interest packet;
• Extend the function of the Interest packet to accommodate the remaining methods;
• POST, PUT – add an additional data block to the Interest packet that contains the content to be pushed to the server;
• DELETE, TRACE – extend Interest packet to
enable function calls.
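A sketch of one way this method mapping could look, assuming a hypothetical Interest structure with a method field and an optional payload block; neither field is part of the Interest packet as defined on these slides:

```python
# Illustrative sketch only: a hypothetical extended Interest structure
# following the method mapping on this slide. Field names are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interest:
    name: str                         # content name being requested
    method: str = "GET"               # extended field for non-pull semantics
    payload: Optional[bytes] = None   # extra data block for POST / PUT

def http_to_interest(method, name, body=None):
    if method in ("GET", "HEAD"):       # pure pull: a plain Interest suffices
        return Interest(name=name, method=method)
    if method in ("POST", "PUT"):       # push: attach the body to the Interest
        return Interest(name=name, method=method, payload=body)
    if method in ("DELETE", "TRACE"):   # function call on the server
        return Interest(name=name, method=method)
    raise ValueError(f"unsupported HTTP method: {method}")

# e.g., http_to_interest("POST", "/com/example/form", b"a=1&b=2")
```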
17/33
Measure PIT Size and Access Freq. (1/3)
• So far, we have: Applications on IP → Applications on NDN.
• So we can: IP trace → (translate) → NDN trace.
• We get data: we quantify PIT size and access
frequency by analyzing the NDN trace.
18/33
Measure PIT Size and Access Freq. – Metrics (2/3)
• Size: the number of PIT entries at each snapshot, obtained by examining the lifetime of each PIT entry.
• Lookup frequency: the # of Data packets (excluding the last Data of a request/response pair) that arrive per second;
• Insert frequency: the # of emerging biflows (new Interests arriving) per second;
• Delete frequency: the # of disappearing biflows (the last Data packet of a request/response pair arriving) per second.
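A minimal sketch of how these per-second metrics could be computed from the translated trace, assuming the trace has already been reduced to biflow records (illustrative names, not the paper's tooling):

```python
# Sketch of the metric definitions above. A biflow is one request/response
# pair: we know when its PIT entry is created (first Interest) and removed
# (last Data). All names here are illustrative assumptions.
from collections import Counter

def pit_size_at(biflows, t):
    """Size metric: # of PIT entries alive at snapshot time t (seconds)."""
    return sum(1 for t_insert, t_delete in biflows if t_insert <= t < t_delete)

def access_frequencies(biflows, data_arrivals):
    """Per-second lookup / insert / delete counts.

    biflows:       list of (t_insert, t_delete) per request/response pair
    data_arrivals: list of (t, is_last) for every Data packet in the trace
    """
    insert = Counter(int(t0) for t0, _ in biflows)              # emerging biflows
    delete = Counter(int(tn) for _, tn in biflows)              # disappearing biflows
    lookup = Counter(int(t) for t, is_last in data_arrivals     # Data arrivals,
                     if not is_last)                            # excluding the last one
    return lookup, insert, delete
```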
19/33
Measure PIT Size and Access Freq. – Results (3/3)
• A one-hour IP trace from a 20 Gbps link in China Education
and Research Network (CERNET).
• PIT size: 1.5 million entries.
• Access frequencies over the first 10 seconds: (1.4 M/s, 0.9 M/s, 0.9 M/s).
• Methodology is what matters.
20/33
Data Structure for PIT (1/3)
• PIT size and access frequency demand a well-organized data structure to implement PIT.
• Inspired by the Trie structure in IP lookup, we
organize PIT by a Trie as well.
Figure: a sample bit-wise IP trie.
21/33
Data Structure for PIT (2/3)
• NDN name is different from IP address;
– IP addresses:
• Fixed length, short…
– NDN names:
• hierarchical, composed of components, enables aggregation;
• unbounded # of components, much longer than an IP address;
• E.g., /com/parc/bulletin/breakthroughs.html
• Therefore, we adopt the Name Prefix Trie (NPT), which operates at component granularity.
22/33
Data Structure for PIT (3/3)
An example of PIT:

Name                  Ports
/com/yahoo            1
/com/yahoo/news       1
/com/yahoo/maps/uk    2
/com/google           2
/com/google/maps      1, 2
/cn/google/maps       3
/cn/sina              2, 3
/cn/baidu             4
/cn/baidu/map         4

Figure: the corresponding Name Prefix Trie (NPT), organized in levels 1–5; each edge stands for a name component.
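A minimal sketch of such a component-granularity trie, assuming dict-based children and a per-node interface list (illustrative structure, not the paper's implementation):

```python
# Minimal sketch of a component-granularity Name Prefix Trie (NPT):
# each edge is a whole name component, and a node may hold a PIT entry
# (here just its interface list).
class NPTNode:
    def __init__(self):
        self.children = {}     # component string -> child NPTNode
        self.ports = None      # interface list if a PIT entry ends here

def npt_insert(root, name, ports):
    node = root
    for comp in name.strip("/").split("/"):   # component granularity, not bit-wise
        node = node.children.setdefault(comp, NPTNode())
    node.ports = ports

def npt_lookup(root, name):
    node = root
    for comp in name.strip("/").split("/"):   # one exact string match per level
        node = node.children.get(comp)
        if node is None:
            return None
    return node.ports

root = NPTNode()
npt_insert(root, "/com/yahoo/maps/uk", [2])
npt_insert(root, "/com/google/maps", [1, 2])
print(npt_lookup(root, "/com/google/maps"))   # -> [1, 2]
```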
23/33
Name Component Encoding (1/4)
• The Name Prefix Trie (NPT) keeps the PIT well organized, but matching each component is slow:
– the length of each component varies;
– matching requires an exact match on string patterns.
• We further propose to encode components as positive integers – Name Component Encoding (NCE).
– An integer occupies less memory than a component string;
– matching integers is much faster.

NCE framework: name → Encoding Module (encoding function f) → code series → Lookup, Insert and Delete Module.
24/33
Name Component Encoding (2/4)
Figure: the Name Prefix Trie (NPT) and its encoded counterpart. Within each Code Allocation Set (CAS, shown as dotted ellipses), edge components such as sina, maps, uk and map are replaced by small integer codes.

The Name Prefix Trie (NPT) is transformed into the Encoded Name Prefix Trie (ENPT).
25/33
Name Component Encoding (3/4)
• Preparation: define the edges in the NPT that leave the same node as a Code Allocation Set (CAS) (the dotted ellipses on the NPT);
• The rules for assigning codes to components:
– assign each name component in a CAS a unique code;
– the code should be as small as possible (this keeps the codes continuous).
• The ENPT is a logical structure and is eventually implemented by Simplified State Transition Arrays (S2TA).
• Benefit: a component match on the S2TA takes only a single memory access, using the codes as indexes!
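Below is a minimal sketch of the CAS allocation rule and the array-indexed transition it enables, assuming each node keeps its own code table and a dense child array; this is a simplification of the S2TA layout, and all names are illustrative:

```python
# Sketch of the encoding idea: within each node's Code Allocation Set (CAS),
# components get the smallest unused positive integers, so the node's children
# can sit in a dense array and a transition is one indexed access.
# This is a simplification of S2TA, not the paper's exact memory layout.
class ENPTNode:
    def __init__(self):
        self.codes = {}        # component -> small integer code (the CAS)
        self.children = []     # children indexed by code - 1 (dense array)
        self.ports = None      # PIT entry payload, if any

def encode_component(node, comp):
    code = node.codes.get(comp)
    if code is None:                     # allocate the smallest free code
        code = len(node.codes) + 1
        node.codes[comp] = code
        node.children.append(ENPTNode())
    return code

def enpt_insert(root, name, ports):
    node = root
    for comp in name.strip("/").split("/"):
        code = encode_component(node, comp)
        node = node.children[code - 1]   # array index, not a string compare
    node.ports = ports

def enpt_lookup(root, name):
    node = root
    for comp in name.strip("/").split("/"):
        code = node.codes.get(comp)      # encoding step: component -> code
        if code is None:
            return None
        node = node.children[code - 1]   # one indexed access per component
    return node.ports
```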
26/33
Name Component Encoding (4/4)
Figure: the Simplified State Transition Arrays (S2TA) implementing the ENPT. Each transition array stands for a node and its outgoing edges; it stores the # of edges leaving the state, a pointer to the PIT entry, and (index, next state) pairs, with the codes acting as the indexes of the outgoing edges.

Name                  Codes       Ports
/com/yahoo            /1/1        1
/com/yahoo/news       /1/1/1      1
/com/yahoo/maps/uk    /1/1/2/1    2
/com/google           /1/2        2
/com/google/maps      /1/2/1      1, 2
/cn/google/maps       /2/3/1      3
/cn/sina              /2/2        2, 3
/cn/baidu             /2/1        4
/cn/baidu/map         /2/1/1      4
27/33
Evaluation
• Two name sets to compose two PITs:
– Domain names collected from ALEXA, DMOZ and our web
crawler, around 10 Million;
– URLs extracted from the HTTP requests in the trace,
around 8 Million.
• Evaluation goals:
– memory consumption of the S2TA,
– access frequency of the S2TA,
– comparison with other methods.
28/33
Evaluation Results (1/3)
• Memory Consumption
Figure: memory consumption for the 10M Name Set (original size, NPT size, ENPT size).
Compression ratio (10M Name Set): 63.66%
Figure: memory consumption for the 8M Name Set (original size, NPT size, ENPT size).
Compression ratio (8M Name Set): 12.56%
29/33
Evaluation Results (2/3)
• Access frequency
Figure: Lookup, insert and delete
performance for the 10M Name Set
Average: 3.27 M/s, 2.93 M/s and 2.69 M/s
Figure: Lookup, insert and delete
performance for the 8M Name Set
Average: 2.51 M/s, 1.81 M/s and 2.18 M/s
30/33
Evaluation Results (3/3)
Figure: PIT access frequency speedup based on S2TA.
All these results reveal that PIT can be
implemented by off-the-shelf technologies.
31/33
Conclusion
• Measure the size and access frequency of a
specific PIT;
• Prove the feasibility of PIT with off-the-shelf
technologies;
• Propose NCE to accelerate PIT access
operations while reducing PIT size.
32/33
THANKS!
QUESTIONS PLEASE ^_^
33/33