Push Technology

Humie Leung 981777020
University of Toronto
humiel@ca.ibm.com

Liangjie Huo
University of Toronto
Liangjie.huo@utoronto.ca

ABSTRACT

Push technology has emerged from the need to simplify and tame content delivery in the Internet. In this paper, we focus on IP multicast-based Push, presenting the concept of Push from both an end-to-end perspective and the point of view of the network. We investigate Continuous Multicast Push and Asynchronous Multicast Push, and present Content-Based Multicast and Reliable Multicast Transport Protocols to address issues that arise from using raw IP multicast for Push.
Multicast generalizes the traditional client/server communication scheme by allowing multipoint-to-multipoint data exchange in an optimized fashion. By multicasting, data traverses the network exactly once to reach a large number of users. [1] Multicast offers three important advantages for group communications. First, it considerably reduces the network bandwidth an application needs to transfer a given volume of data. Second, it allows passive and blind communication: a receiver may not know who the server is, and the server may not know who is receiving the information. Lastly, multicast scales much better to large groups of users. [2]
1. INTRODUCTION
Push technology stems from a very simple idea: rather than
requiring users to explicitly request the information that they
need, data can be sent to users without having them specifically
ask for it. The advantages of push technology over the conventional pull technology are straightforward. The traditional pull approach requires that users know a priori where
and when to look for data. Users might also spend an inordinate
amount of time polling known sites for updates or hunting on
the network for relevant sites. [1] Typical examples of pull
technology include ftp, gopher, and WWW. While this model is
appropriate for some applications, it also has some important
limitations. To initiate the connection to a server, a user has to
know a server's identifier, usually in the form of a site/port pair.
A user also has to check the server periodically to get new or
modified information. [2] Push technology can relieve the user
of these burdens. By using the emerging push technology model,
applications bring the information of interest directly to mass
consumers rather than requiring them to fetch it themselves. It is
also attractive because it allows users to receive information as
soon as it becomes available. Users may not have any
knowledge about virtual information servers. Typical
applications include news release, press distribution, software
distribution, and interactive games.
However, there are problems with push technology. Push technology transfers
control from the users to the data providers, raising the potential
that users receive irrelevant data while not receiving the
information they need. These potential problems can arise due
to issues ranging from poor prediction of user interests to
outright abuse of the mechanism, such as “spamming”. The
very nature of push technology brings both benefits and
disadvantages. [1]
2. CMP AND AMP

Asynchronous multicast push (AMP) [10] and continuous multicast push (CMP) [10] [11] [12] are two communication paradigms exploited to push data from a sender to several receivers using IP multicast delivery. In this section, we consider the signaling between clients and servers from an end-to-end point of view. Sections 4 and 5 address protocols developed to deal with network-side issues.
2.1 Asynchronous Multicast Push [10]
Clark and Ammar [13] have proposed a model where several
requests for the same Web document are grouped during a
certain time and answered via multicast transmission.
Nonnenmacher and Biersack [10] have referred to this mode of
distribution as asynchronous multicast push (AMP), which is
valid for the kind of information that is not delay sensitive.
AMP is based on a reliable multicast transport protocol using
parity transmissions and TCP-like congestion control over
several multicast layers. All mechanisms of AMP
work on a pure end-to-end basis between sender and receiver.
As mentioned before, this section focuses on the signaling
between AMP server and AMP client.
2.1.1 AMP Architecture

AMP consists of three parts: a client module, a server module, and a scheduling module, as shown in Figure 2.1.
The focus of this paper is on multicast push technology. Multicasting is a 1-to-n form of communication for transmitting packets from one host to a set of member hosts in the same group. [3] Multicast is a 1990s technology that has given users a way to go beyond the traditional client/server model. More modern network communications rely on connectionless transport protocols, and multicast generalizes this model to group communication.
[Figure 2.1 here: an AMP Client (user interface, ARD reception daemon) and an AMP Server (user interface, DTP send process, scheduling module, database), each running a reliable multicast transport protocol (multicast loss repair and congestion control) over UDP/IP multicast.]
Figure 2.1: AMP Modules

AMP works as follows:

(1) Data Requests

A TCP/IP connection is established between a client and the AMP server when the client requests data. The client calls the ARD reception daemon to handle the connection establishment. The scheduling module at the server end determines the time of the next transmission of the data, which depends on an access statistic. The server returns the multicast address, the data volume, the relative time of the transmission, and an internal registration number to the client. The registration number serves identification purposes and allows the client to cancel a scheduled data transmission. The client reserves the space needed for the data, registers the data with its reception time in an internal reception table, and hands all needed data over to the daemon, which creates a new entry in the list containing the information for all requested data.

(2) Data Transmission

The client's ARD reception daemon forks a thread for reception on the given multicast channel at the announced time and receives data on the previously reserved port. After the data transfer from the DTP send process, the server updates the transfer statistic to further optimize the scheduling of subsequent transmissions.

In AMP, considerable bandwidth gains are achieved by multicast and by accumulating requests for highly requested data. However, grouping requests can lead to unacceptably high response times. The framework above also shows that AMP requires clients to open TCP connections with the server: when a connection request arrives at the server end, a new process has to be established to handle the request associated with that connection. Even though this approach makes the data access rate known to the server before transmission, it still does not fully take advantage of the nature of multicast.

2.2 Continuous Multicast Push [12]

CMP is used to deliver a site's most popular, frequently changing, and heavily requested web pages on the same multicast address. Reliability is achieved through repetitive, cyclic transmission of a requested page and is ensured by the end systems (clients and servers). CMP also requires that the network connecting a server with its clients be multicast capable.

2.2.1 CMP Framework

(1) Server End

The server monitors the number of requests for a document to decide which documents to multicast. It takes a popular document, calculates some parity packets, and cyclically sends them to a multicast address along the multicast tree. Where the multicast tree forks off, the multicast router replicates the packets and sends a copy on every outgoing branch of the tree. The server continues cyclic transmission as long as it believes there is at least one requester still trying to receive the page.

How to monitor the number of requests for a document at any moment:

Since clients simply join the multicast tree and stop dealing with the server whenever they wish, the current number of clients receiving the data is not known to the server. CMP needs to know the number of requests per second for a document that is being multicast, in order to decide which data is popular and when to stop multicasting the document.

One solution suggested in [12] is for clients to explicitly notify the server after joining the multicast tree; the server then estimates how many cycles it must keep sending in order to satisfy all receivers given a certainty threshold. Another solution mentioned in [12] is for the server to use an estimation mechanism based on feedback from receivers, such as the mechanism proposed in [14] based on distributed timers.
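The server-end behavior described above — cyclic transmission of data plus parity packets for as long as requesters remain — can be sketched as follows. The packetizing and parity helpers are our own toy stand-ins; [12] does not specify a packet format or erasure code at this level.

```python
import itertools

PACKET_SIZE = 1024  # bytes per data packet (an assumed value)

def packetize(document: bytes, size: int = PACKET_SIZE) -> list[bytes]:
    """Split a document into fixed-size packets (the last may be shorter)."""
    return [document[i:i + size] for i in range(0, len(document), size)]

def parity_packets(packets: list[bytes], k: int = 2) -> list[bytes]:
    """Toy stand-in for FEC parity: the XOR of all packets, repeated k
    times. A real CMP server would use a proper erasure code."""
    parity = bytearray(PACKET_SIZE)
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return [bytes(parity)] * k

def cmp_cycle(document: bytes, send, still_requested) -> None:
    """Cyclically send data + parity packets while the server believes at
    least one requester is still trying to receive the page."""
    data = packetize(document)
    packets = data + parity_packets(data)
    for cycle in itertools.count():
        if not still_requested(cycle):
            break
        for seq, pkt in enumerate(packets):
            send(seq, pkt)  # multicast to the group address
```

`still_requested` stands in for whichever request-monitoring mechanism from [12] the server uses.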
(2) Client End

Clients join the appropriate multicast group and remain members.

How to join a multicast group:

The issue is really how a client finds the available multicast data, namely how it obtains the corresponding multicast IP address when it requires a document. Currently, unicast mapping is done through domain name servers (DNS), which provide the unicast address of the final Web server. For the multicast case, something similar could be done: session servers could map the name of the requested document to the multicast address where it is being distributed. One example is the MBone, where the session directory is frequently used to allocate addresses and ports for scheduled sessions in such a way that they do not collide. Periodically, multicast sessions are announced to a well-known multicast address.

[12] also suggests obtaining the multicast address through a TCP connection, in the same way as AMP does.
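The session-server idea can be pictured as a simple name-to-group mapping, analogous to a DNS lookup. All document names and group addresses below are invented for illustration.

```python
# Hypothetical session directory: maps a document name to the multicast
# group and port on which it is currently being cyclically transmitted.
SESSION_DIRECTORY = {
    "news/frontpage.html": ("224.2.127.254", 9875),
    "sports/scores.html": ("224.2.127.255", 9876),
}

def resolve_multicast(document: str):
    """Return (group, port) for a multicast-pushed document, or None when
    the document is not being multicast (fall back to unicast pull)."""
    return SESSION_DIRECTORY.get(document)
```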
How to deal with mixed new content and rebroadcast content:

When IP multicasting is used, the content is repeatedly delivered to all the clients that are current members of the group, so new content and rebroadcast content are mixed together. The duplicated reception increases the client-end reception processing load and the network load. When a large number of clients repeatedly receive duplicated content, the use of multicasting may even have an adverse effect.

One solution is to use multiple groups to deliver content according to the capabilities of each client, the network bandwidth, and so on. Clients can selectively join the multicast group best suited to their bandwidth, loss rate, and the like. One such mechanism is Layered Multicast Repeat Delivery (LMRD) [15]: one or more groups for rebroadcast content are prepared separately from the single group used for new content, and each client leaves the groups for rebroadcast content depending on how much of the rebroadcast content it has received.
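The client side of the LMRD idea can be sketched as follows; the group names are our own, and `join`/`leave` stand in for IGMP group membership operations.

```python
class LMRDClient:
    """Sketch of Layered Multicast Repeat Delivery [15] from the client
    side: stay in the new-content group, and leave the rebroadcast layer
    once every missing packet has been recovered."""

    def __init__(self, join, leave, missing: set[int]):
        self.join, self.leave = join, leave
        self.missing = set(missing)    # sequence numbers still needed
        self.join("new-content")       # always receive new content
        if self.missing:
            self.join("rebroadcast-1")  # one (or more) repeat layers

    def on_packet(self, group: str, seq: int) -> None:
        self.missing.discard(seq)
        # Leave the rebroadcast layer as soon as nothing is missing, so
        # duplicated receptions stop loading the client and the network.
        if not self.missing and group == "rebroadcast-1":
            self.leave("rebroadcast-1")
```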
In short, the benefits of CMP for such documents are a very efficient use of network resources, a reduction of the load on the server, lower response times, and scalability to an increasing number of receivers. But from a network point of view, an obvious shortfall is that CMP, like AMP, requires that the network connecting a server with its clients be multicast capable. We address the network-related issues in Section 3.
3. ISSUES ON IP MULTICAST-BASED
PUSH
As we have already noted, multicast-based push first of all needs routers in the network to support multicast routing and to maintain state for each active multicast group. In certain cases, a source or destination may not be attached to a multicast-enabled network. One approach is to use tunneling, which encapsulates multicast IP packets in unicast IP packets. The MBone, the multicast backbone, is spanned in this way, connecting islands with native multicast support.
Developing multicast-enabled applications can be quite simple. Having datagram access allows any application to send to a multicast address. A multicast application need only increase the Internet Protocol (IP) time-to-live (TTL) value to more than 1 (the default value) to allow outgoing datagrams to traverse routers. To receive a multicast datagram, an application joins the multicast group, which transparently generates an IGMP [IGMPv2, IGMPv3] group membership report.
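With the standard socket API, the datagram-level operations just described look as follows; the group address and port are arbitrary examples.

```python
import socket
import struct

GROUP, PORT = "224.1.1.1", 5007  # example multicast group and port

def make_sender(ttl: int = 32) -> socket.socket:
    """Sender side: raise the IP TTL above the default of 1 so that
    outgoing multicast datagrams can traverse routers."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return s

def make_receiver(group: str = GROUP, port: int = PORT) -> socket.socket:
    """Receiver side: bind to the port and join the group; the join
    transparently generates an IGMP group membership report."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return s

# make_sender().sendto(b"payload", (GROUP, PORT)) multicasts a datagram;
# make_receiver().recvfrom(1500) blocks until one arrives.
```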
However, enabling multicast support in applications and
protocols that can scale well on a heterogeneous network is a
significant challenge. Specifically, sending constant bit rate
datastreams, reliable data delivery, security, and managing
many-to-many communications all require special consideration.
[16] provides a survey of the various multicast service requirements, which include:

Address Management - Selection and coordination of address allocation. The need is to provide assurances against "address collision" and to provide address ownership.
Session Management - Multicast is simply a transport
mechanism that provides end-to-end delivery. All of the other
services are application-layer services that must be provided by
each particular application. Session management performs
application-layer services on top of multicast transport. These
services depend heavily on the application but include functions
like session advertisement, billing, group member monitoring,
key distribution, etc.
Heterogeneous Receiver Support - Host membership for multicast is dynamic, meaning that hosts can enter and leave a group whenever they wish. The number of hosts in a multicast group is not limited, and a host can send multicast datagrams to a group without being a member of it. Servers therefore need feedback to monitor receiver performance and to accommodate receivers with a wide variety of bandwidth capacities, latency characteristics, and levels of network congestion.
Reliable Data Delivery - Ensuring that all data sent is received
by all receivers
Security - Ensuring content privacy among dynamic multicast
group memberships, and limiting senders.
Synchronized Play-Out - Allowing multiple receivers to "replay" received data in a synchronized fashion.
Concern for the Content or Structure of the Information:

In the context of Push, we are also concerned with the content or structure of the information being delivered over the network when using multicast. Content-Based Multicast [17] is a good solution that performs extra content filtering at interior nodes of the IP multicast tree.
4. CBM [17]

Content-based multicast (CBM) is proposed to deliver personalized information to users by taking into account the content and structure of the information being delivered. It aims at reducing network bandwidth usage and delivery delay, as well as the computation required at the sources and sinks.

The CBM system model consists of a subscription and matching algorithm and a filter placement algorithm. From the network perspective, the benefits of CBM depend critically upon how well filters are placed at interior nodes of the multicast tree, and the costs depend upon those introduced by the filters themselves. In [17], under the assumption that the IP multicast tree has already been set up by an appropriate protocol, two objectives are considered for developing filter placements: minimizing total network bandwidth utilization and minimizing mean information delivery delay. In this paper, we analyze, as an example, the algorithm of [17] for calculating filter placements along the multicast tree that minimize total traffic.
First, the algorithm calculates the amount of information flow f(v) required at each vertex, based on the set of subscriptions. For each leaf v, the source calculates f(v) as the size of the subscription request from that user; the source then repeats this process recursively up the tree. This information can be obtained by calculating successive unions while subscriptions are being processed.

Then, at each node v in the multicast tree, the algorithm calculates the minimum total traffic in Tree(v) given that up to i filters can be placed in Tree(v). When a filter is placed at v, it ensures that only the flows strictly required by the left child tree and the right child tree are forwarded. If no filter is placed at v, the incoming information flow through Tree(v) is restricted by p, the lowest ancestor of v whose parent has a filter, because the parent of p ensures that only the flow required by Tree(p) is sent to p. Finally, the algorithm for finding a filter placement repeats recursively up to the root of the tree, minimizing the total bandwidth consumption for a given set of subscriptions.
Figure 4.1 shows an example on a binary tree.

Figure 4.1: Filter placement to minimize total traffic (Extracted from [17])
This algorithm runs in time proportional to the square of the number of vertices in the multicast tree. The paper [17] also develops a heuristic that executes faster than the optimal algorithm mentioned above, and evaluates both algorithms by simulation. Their conclusion is that “filters can be very effective in reducing total traffic as well as mean delay, and that a relative small number of filters can be very effective” [17].
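The placement computation can be sketched compactly for a binary tree. The tree representation, names, and example flows below are our own; [17] states the recurrence abstractly, and we assume the flows f(v) have already been computed from the subscriptions.

```python
class Node:
    """A vertex of the multicast tree; `flow` is f(v), the information
    flow required by the subscriptions in Tree(v)."""
    def __init__(self, flow, left=None, right=None):
        self.flow, self.left, self.right = flow, left, right

def min_traffic(root: Node, filters: int) -> int:
    """Minimum total traffic inside Tree(root) when up to `filters`
    filters may be placed, following the T(v, i, p) recurrence; fp is
    f(p) for the lowest tight ancestor p of v."""
    def T(v, i, fp):
        if v.left is None and v.right is None:
            return 0  # a leaf forwards nothing further
        l, r = v.left, v.right
        # No filter at v: both children receive the unfiltered flow f(p),
        # and the i filters are split between the two subtrees.
        best = min(2 * fp + T(l, j, fp) + T(r, i - j, fp)
                   for j in range(i + 1))
        # Filter at v (consuming one filter): each child receives exactly
        # the flow it needs and becomes its own tight ancestor.
        if i >= 1:
            best = min(best,
                       min(l.flow + r.flow
                           + T(l, j, l.flow) + T(r, i - 1 - j, r.flow)
                           for j in range(i)))
        return best
    return T(root, filters, root.flow)

# Example: a root needing flow 10, with two leaves needing 4 and 6.
tree = Node(10, Node(4), Node(6))
```

With no filters, both edges out of the root carry the full flow of 10; a single filter at the root trims each edge down to what its leaf actually needs.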
5. RMTP
Reliable Multicast Transport Protocol (RMTP) is a protocol designed to make efficient use of the underlying IP multicast routing technology while providing reliable delivery semantics to end-user applications. It is a transport control mechanism that enables reliable data transfer from a sender to a large number (on the order of 5,000 to 10,000) of receivers on a TCP/IP network. Using the IP multicast function as its delivery system, it realizes data transport reliability through retransmission and handshake-based session control. [4]
5.1 History
Reliable Multicast Transport Protocol (RMTP) is a multimedia information distribution technology that resulted from a joint study program, begun in 1994, between the IBM Tokyo Research Laboratory and the Nippon Telegraph and Telephone Corporation Information and Communication Systems Laboratory. The first
version of the RMTP protocol was specified in May 1995. Two
sets of server and client prototypes were independently
developed by IBM and NTT based on the protocol specification.
In August 1995, the two sets were tested for interoperability
using two configurations, NTT server with IBM client, and IBM
server with NTT client. The result was successful. The main
purpose of RMTP was to distribute multimedia documents such
as electronic newspapers and marketing information to a large
number of users simultaneously in an effective manner. [5]
For reference, the filter-placement recurrence of Section 4 is as follows. If v is a leaf, then T(v, i, p) = 0 for all i and p. Otherwise, with l and r the children of v and j ranging over 0 <= j < i:

T(v, i, p) = min{ f(l) + f(r) + min_j [T(l, j, l) + T(r, i-j-1, r)],
                  2 f(p) + min_j [T(l, j, p) + T(r, i-j, p)] }

where
f(v): the information flow into vertex v;
T(v, i, p): the minimum total traffic in Tree(v), given that up to i filters can be placed in Tree(v) and the Lowest Tight Ancestor of v is p;
Lowest Tight Ancestor of v: the lowest ancestor of v whose parent has a filter.

5.2 Assumptions

Three assumptions are made in this discussion of RMTP:

1. The receivers are grouped into local regions based on their proximity in the network. [6]

2. A multicast tree is set up in the network layer with the sender as the root node and the receivers as the leaf nodes. Data generated by the sender flows through the multicast tree, traversing each tree edge exactly once. [6]

3. RMTP is described as a protocol for point-to-multipoint reliable multicast. Multipoint-to-multipoint reliable multicast is possible if multicast trees are set up for each sender. [6]
5.3 Features
There are three main features in RMTP:
1. Reliability: RMTP uses IP multicasting for reliable delivery of data to thousands of clients at the same time. Since IP multicasting is a form of connectionless communication, it is prone to lost packets, data errors, and inconsistent arrival sequences. RMTP compensates for these reliability shortcomings through periodic transmission of status by receivers and a selective-repeat retransmission mechanism: it monitors ACK (acknowledgment) responses from each receiver and resends data when an error occurs in transmission. [4]
2. Scalability: Three design features make RMTP scalable. First, the state information maintained at each multicast participant is independent of the number of participants; when a receiver joins or leaves a multicast group, it does not affect the state information of the sender or of the other receivers. Second, RMTP uses a receiver-driven approach: it places the responsibility of ensuring sequenced, lossless data reception on each individual receiver, relieving the sender of the burden of tracking the status of each receiver. Third, RMTP groups receivers into local regions and uses a Designated Receiver (DR) in each local region; the responsibilities of processing ACKs and performing retransmissions are distributed among the DRs and the sender. [7]
3. Heterogeneity: RMTP is able to handle receivers in
heterogeneous network environments in an efficient manner. In
particular, receivers in a relatively lossy network can be made
into a local region with a Designated Receiver responsible for
handling ACKs and retransmitting lost packets to the receivers
in the region. Therefore, the effect of a lossy network can be
confined to a small region without affecting other receivers of
the same multicast session. [6]
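RMTP's receiver status reports (described in Section 5.4) carry a sequence number L and a bitmap V. Building such a report can be sketched as follows; the window width is an arbitrary choice here, not a value from the RMTP specification.

```python
def make_ack(received: set[int], window: int = 8) -> tuple[int, list[int]]:
    """Build an RMTP-style ACK: L is the lowest sequence number not yet
    received (every packet below L has arrived), and entry k of V reports
    packet L + k -- 1 for received, 0 for missing."""
    L = 0
    while L in received:
        L += 1
    V = [1 if (L + k) in received else 0 for k in range(window)]
    return L, V
```

A receiver that has packets 0-2 and 4 would report L = 3 with a bitmap whose first bit is 0 (packet 3 missing); the sender or DR then selectively retransmits only the 0 positions.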
5.4 Protocol Description

RMTP provides sequenced, lossless delivery of data from one sender to a group of receivers. The sender divides the data to be transmitted into fixed-size data packets, with the exception of the last one. A data packet is identified by packet type DATA, and type DATA EOF identifies the last data packet. The sender assigns each data packet a sequence number, starting from 0. A receiver periodically informs the sender about the packets it has correctly received by sending ACKs. An ACK includes a sequence number L and a bitmap V. Sequence number L indicates that the receiver has correctly received all the packets with sequence numbers less than L. A 0 in the bitmap corresponds to a missed or incorrectly received packet, while a 1 corresponds to a correctly received packet. [7]

5.4.1 Designated Receivers

RMTP is based on a multi-level hierarchical approach in which the receivers are grouped into a hierarchy of local regions, with a Designated Receiver (DR) in each local region. Receivers in each local region periodically send acknowledgments (ACKs) to their corresponding DR, DRs send ACKs to the higher-level DRs, and the DRs at the highest level send ACKs to the sender. DRs cache received data and respond to retransmission requests from the receivers in their corresponding local regions, thereby decreasing end-to-end latency. RMTP uses a packet-based selective repeat retransmission scheme for higher throughput. [8]

ACKs are needed from receivers in order to determine which packets need to be retransmitted. However, ACKs from a large number of receivers may overwhelm the sender; this is known as the ACK implosion problem. In addition, the large number of ACK packets destined for the sender, and the retransmitted data packets generated by the sender, may congest the sender's neighboring routers and local networks. To avoid this situation, RMTP uses Designated Receivers (DRs) to assist the sender in processing ACKs and in retransmitting data. [7]

The concept of using DRs is best illustrated by Figures 5.4.1a and 5.4.1b. Figure 5.4.1a shows a multicast tree with 14 receivers and a sender: in the absence of DRs, the sender processes ACKs from all 14 receivers. Figure 5.4.1b shows the same multicast tree with DRs: the tree is partitioned into three subtrees, and the sender processes ACKs from only 2 receivers. A DR is a special receiver; it caches received data, emits ACKs, and processes ACKs from the receivers in its subtree. Conceptually, a DR hides a subtree of receivers from its up-tree DR or the sender. Using DRs therefore reduces the number of ACKs the sender has to process and improves the usage of network resources. [7]

Figure 5.4.1a: Illustration of a group of receivers sending ACKs to a sender. (Extracted from [7])

Figure 5.4.1b: Illustration of Designated Receivers used to assist the sender in processing ACKs. (Extracted from [7])

5.4.2 Connection and Connection Parameters

An RMTP connection is identified by a pair of endpoints: a source endpoint and a destination endpoint. The source endpoint consists of the sender's network address and a port number; the destination endpoint consists of the multicast group address and a port number. Each RMTP connection has a set of associated connection parameters. [7]

Figure 5.4.2: RMTP connection parameters (Extracted from [6])

5.4.3 Connection Establishment and Termination

When the sender and the receivers receive the session information, the receivers initialize their connection control blocks and stay in an unconnected state, and the sender starts transmitting data. On receiving a data packet from the sender, a receiver goes from the unconnected state to the connected state and starts emitting ACKs at intervals of Tack. Connection termination is timer based. After it transmits the last data packet, the sender starts a timer that expires after Tdally seconds; a DR also starts the timer when it has correctly received all data packets. When the timer expires, the sender deletes all state information associated with the connection. The time interval Tdally is at least twice the lifetime of a packet in an internet, and any ACK from a receiver resets the timer to its initial value. A normal receiver deletes its connection control block and stops emitting ACKs when it has correctly received all data packets; a DR behaves like a normal receiver except that it deletes its connection control block only after the Tdally timer expires. Because receivers periodically emit ACKs after connection establishment, and the time interval between consecutive ACKs is much smaller than Tdally, the sender uses ACK reception during Tdally to deduce whether all receivers have received all data. If it does not receive any ACK during Tdally, the sender assumes either that all receivers have received every packet or that something exceptional has happened that prevents the receivers from sending ACKs. Possible "exceptional" situations include network partition and receivers voluntarily or involuntarily leaving the multicast group. [7]

5.4.4 Late Joining Receivers

Since RMTP allows receivers to join at any time during an ongoing session, a receiver joining late needs to catch up with the rest. Two features in RMTP together provide this functionality: [6]

(1) Immediate Transmission Request

When a receiver joins late, it receives the packets being multicast by the sender at that time, and by looking at their sequence numbers it can immediately determine that it has missed earlier packets. At that instant, it uses an ACK_TXNOW packet to request immediate transmission of the earlier packets. When the DR/sender receives an ACK_TXNOW packet from a receiver, it checks the bit vector V and immediately transmits the missed packet(s) to the receiver. [6]

(2) Data Cache

RMTP allows receivers to join an ongoing session at any time and still receive the entire data reliably. However, in order to provide this flexibility, the sender and the DRs need to buffer the entire file during the session. This allows receivers to request the retransmission of any transmitted data from the corresponding DR/sender. [6]

5.4.5 Flow Control
A simple window-based flow control mechanism is not adequate for a reliable multicast transport protocol in the Internet environment. The main reason is that in the Internet multicast model, receivers can join or leave a multicast session without informing the sender, so a sender may not know who the receivers are at any instant during the lifetime of a session. Therefore, to design a transport-level protocol that ensures guaranteed delivery of data packets to all the current members of a multicast session without explicitly knowing the members, RMTP needs a different flow-control technique, known as rate-based windowed flow control. [6]

Because the orientation of the multicast tree changes as DRs join or leave, the sender operates in cycles. It transmits a window full of new packets in the first cycle; at the beginning of the next cycle, it updates the send window and transmits as many new packets as there is room for in its send window. During window updates, the sender makes sure that all the DRs that have sent status messages within a given interval of time have successfully received the relevant packets before it advances the lower end of its send window. Note that advancing the send window does not mean that the sender discards the packets outside the window: the packets are kept in a cache to respond to retransmission requests. In addition, the sender never transmits more than a full window of packets during a fixed interval, thereby limiting the maximum transmission rate. This scheme of flow control can thus be referred to as rate-based windowed flow control. [7]
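One window update in the rate-based windowed scheme might be sketched as follows; representing each DR's status as the lowest sequence number it has not yet received is our simplification.

```python
def advance_window(lower: int, dr_status: list[int]) -> int:
    """Advance the lower edge of the send window only as far as every DR
    heard from in the last interval has acknowledged; `dr_status` holds,
    per DR, the lowest sequence number that DR has not yet received."""
    if not dr_status:
        return lower                   # no status heard: do not advance
    return max(lower, min(dr_status))  # never move the window backwards

def next_batch(lower: int, window: int) -> range:
    """At most one full window of packets is sent per fixed interval,
    which bounds the maximum transmission rate."""
    return range(lower, lower + window)
```

Packets that fall below the advanced lower edge are not discarded; they stay cached to answer retransmission requests.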
5.4.6 Congestion Avoidance

RMTP provides mechanisms to avoid flooding a congested network with new packets, using a scheme known as slow-start to react to congestion. Retransmission requests from receivers are taken as indications of possible network congestion, and the sender uses a congestion window, cong_win, to reduce the data transmission rate when it experiences congestion. During Tsend, a connection parameter giving the time interval for sending data packets, the sender counts the number of ACKs, N, that carry retransmission requests. If N exceeds a threshold, CONGthresh, it sets cong_win to 1, which reduces the data transmission rate to at most one data packet per Tsend. If N does not exceed CONGthresh during Tsend, the sender increases cong_win by 1. This procedure of collapsing cong_win to 1 or increasing it linearly is referred to as slow-start and is also used in TCP implementations. The sender begins with a slow start to wait for the ACKs from far-away receivers to arrive. [6]
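The per-Tsend window update just described can be sketched as follows; capping cong_win at the flow-control window size is our assumption, not something [6] states.

```python
def update_cong_win(cong_win: int, n_retx_acks: int,
                    cong_thresh: int, max_win: int) -> int:
    """One Tsend interval of RMTP congestion avoidance: collapse the
    congestion window to 1 when the number of ACKs carrying
    retransmission requests, N, exceeds CONGthresh; otherwise grow
    linearly, capped (our assumption) at the flow-control window."""
    if n_retx_acks > cong_thresh:
        return 1                       # congestion: one packet per Tsend
    return min(cong_win + 1, max_win)  # linear increase otherwise
```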
The following two graphs are extracted from [3]:
5.5 An Implementation
RMTP uses MBone technologies to deliver multicast packets.
MBone consists of a network of multicast capable routers and
hosts. MBone routers use IP tunnels to forward multicast
packets to IP routers that cannot handle multicast packets. A
MBone router consists of two functional parts: a user-level
process called mrouted and a multicast kernel. A mrouted
exchanges routing information with neighboring mrouteds to
establish a routing data structure in the multicast kernel. The
multicast kernel then uses the routing data structure to forward
multicast packets. To deliver multicast packets to receivers on a
local subnet, an MBone router uses data-link layer multicasting
such as Ethernet multicasting. [6]
Figure 5.6.1: Processing load (CPU time) vs. No. of clients
(Extracted from [3])
This paper focuses on a specific user-level implementation of RMTP by J. C. Lin [7]. Multicast packet forwarding and RMTP protocol processing were implemented at user level to make prototyping faster and debugging easier. The mrouted process was modified to incorporate the routing functions of a multicast kernel; the result is called umrouted. umrouteds communicate with one another via the User Datagram Protocol (UDP), so multicast packets travel over UDP tunnels between umrouteds. By executing umrouted, a host with a unicast kernel becomes a user-level multicast router. A separate user-level protocol process implements the RMTP protocol, and application-level senders and receivers use UDP to communicate with it. To deliver multicast packets to protocol processes on a local subnet, an umrouted uses UDP unicast instead of data-link multicast. A protocol process learns from a configuration file the location of the umrouted that handles its multicasting requests. When a protocol process wishes to join a multicast group, it sends an Internet Group Management Protocol (IGMP) Host Membership Report message to its umrouted. Each umrouted builds a list of the host addresses of the protocol processes it handles and periodically sends an IGMP Host Membership Query message to each of them using UDP unicast.
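The membership handling just described can be sketched as a toy model. The class and method names are invented for illustration; a real umrouted speaks IGMP over UDP sockets rather than taking method calls.

```python
class Umrouted:
    """Toy model of the user-level multicast router's membership state."""

    def __init__(self):
        # group address -> set of protocol-process host addresses
        self.members = {}

    def join(self, group, host):
        """Record a Host Membership Report from a protocol process."""
        self.members.setdefault(group, set()).add(host)

    def query_targets(self):
        """Hosts to which periodic Host Membership Queries are unicast."""
        return sorted({h for hosts in self.members.values() for h in hosts})

    def deliver(self, group, packet):
        """Fan a multicast packet out as UDP unicasts (returned as pairs),
        since the umrouted uses UDP unicast instead of data-link multicast."""
        return [(host, packet) for host in sorted(self.members.get(group, ()))]
```

Each joined process receives its own unicast copy of every packet sent to the group, which is exactly the trade-off of moving multicast delivery to user level.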
Figure 5.6.2: Required delivery time vs. number of clients (extracted from [3])
A single Sun SS220 system was used as the server/sender machine, and 12 Sun SS20s were used as the client/receiver machines. The network was a 10 Mbit/s Ethernet LAN comprising three subnetworks linked together with routers. For the evaluation of 100 clients, the 12 client machines emulated them by each running about 10 sets of client software. For evaluations of more than 100 clients, a client simulation was used in which a single client machine could emulate up to 1000 clients. The amount of data delivered was assumed to be 2 MB, equivalent to 3 pages of an average daily newspaper. Data delivery time and server processing load were the evaluation criteria. Figures 5.6.1 and 5.6.2, extracted from [3], compare the results for RMTP against a conventional pull delivery method implemented with FTP. The CPU load on the server in Figure 5.6.1 is reduced roughly a hundredfold at 5000 clients, and the delivery time in Figure 5.6.2 drops from 3 hours with the conventional pull method to 3 minutes with RMTP. [3]
6. CONCLUSION
This paper has focused on four push technologies: Continuous Multicast Push (CMP), Asynchronous Multicast Push (AMP), Content-Based Multicast (CBM), and the Reliable Multicast Transport Protocol (RMTP). In summary, push technology is one of many choices for data delivery in distributed information systems. It has been studied for years in the field of communications, but it remains an active topic of research today.

7. REFERENCES
[1] M. Franklin, S. Zdonik, "Data in Your Face: Push Technology in Perspective", ACM Press, 1998
[2] GIB Webcanal, "Push Technology Architecture", http://webcanal.inria.fr/arch/push.html
[3] S. Kinoshita, T. Shiroshita, T. Nagata, "The RealPush Network: A New Push-Type Content Delivery System Using Reliable Multicasting", IEEE Transactions on Consumer Electronics, Vol. 44, No. 4, November 1998
[4] T. Shiroshita, T. Sano, O. Takahashi, "Reliable Multicast Push of Web Documents over the Internet", IEEE Network, pp. 18-31, March/April 1998
[5] Nippon Telegraph and Telephone Corporation, "RMTP History", http://info.isl.ntt.co.jp/rmtp/rmtphste.htm
[6] S. Paul, K. K. Sabnani, J. C. Lin, "Reliable Multicast Transport Protocol", IEEE Journal on Selected Areas in Communications, 1997
[7] J. C. Lin, "RMTP: A Reliable Multicast Transport Protocol", IEEE INFOCOM '96, pp. 1414-1424, March 1996
[8] Lucent Technologies, "RMTP: A Reliable Multicast Transport Protocol", http://www.bell-labs.com/project/rmtp/rmtp.html
[9] G. Taskale, P. Stripe, "Performance Analysis of Reliable Multicast Transport Protocol", Reuters, 1999
[10] J. Nonnenmacher, E. W. Biersack, "Asynchronous Multicast Push: AMP", In Proceedings of ICCC '97, pp. 419-430, Cannes, France, November 1997
[11] K. V. Almeroth, M. H. Ammar, A. Fei, "Scalable Delivery of Web Pages Using Cyclic Best-Effort (UDP) Multicast", IEEE Communications Magazine, pp. 170-178, June 1997
[12] P. R. Rodriguez, E. W. Biersack, "Continuous Multicast Transport Protocol", Nagatsugu Yamanouchi Research Laboratory, December 1996
[13] R. J. Clark, M. H. Ammar, "Providing Scalable Web Services Using Multicast Delivery", In Proceedings of the IEEE Workshop on Services in Distributed and Networked Environments, Whistler, Canada, June 1995
[14] J. Nonnenmacher, E. W. Biersack, "Optimal Multicast Feedback", In Proceedings of IEEE INFOCOM, San Francisco, CA, USA, March 1998
[15] S. Kinoshita, T. Nagata, T. Shiroshita, "Efficient Repeat Delivery of Push Content Using Layered Multicast and its Application to a News Distribution System"
[16] B. Quinn, K. Almeroth, "IP Multicast Applications: Challenges and Solutions", RFC 3170, September 2001
[17] R. Shah, R. Jain, F. Anjum, "Efficient Dissemination of Personalized Information Using Content-Based Multicast", IEEE INFOCOM 2002