Bandwidth Delay Quality Parameter Based Multicast Congestion Control
Manisha Manjul1, Renu Dhir2 and Karan Singh3
1,2Department of Computer Science and Engineering, National Institute of Technology, Jalandhar
3Department of Computer Science and Engineering, Motilal Nehru National Institute of Technology, Allahabad
E-mail: 1manishamanjul@gmail.com, 2dhirr@nitj.ac.in, 3rcs0602@mnnit.ac.in
ABSTRACT: Computer networks are the backbone of the digital information world. Growing demand exposes them to several problems such as congestion, security, reliability, scalability and fairness. Multicast service lets information flow from one end of the network to a group of receivers, but it too suffers from a major problem, congestion, when receivers demand more data than the network can carry. Various multicast congestion control schemes exist, such as RLM, PLM, RLC, FLID-DL and BDP. In this paper we present an idea for delivering better data quality to receivers using a modified congestion control scheme, Bandwidth Delay Quality Parameter (BDQP), built on the Based on Delay Parameter (BDP) scheme. The BDQP congestion control scheme minimizes congestion and improves network performance so that higher-quality data can be delivered.
Keywords—Congestion Control, Multicast, Routing, Quality, Queue Delay.
INTRODUCTION
Network congestion is a fundamental problem in packet-switched [5] computer networks that arises when the number of transmitted packets exceeds the capacity of the network. In other words, congestion occurs when an increase in network load leads only to a small increase in network throughput, or even to a reduction in throughput. Congestion is thus a state of excessive accumulation, or overcrowding, of the network. Limited memory space, buffers, channel bandwidth and processor capacity, the number of users (network load), link failures, etc. are the main causes of network flooding and consequently of congestion. Schemes that minimize the effect of congestion include load reduction at a node, de-routing, and load balancing, which shifts packets from highly loaded nodes to lightly loaded ones. Such congestion control schemes detect congestion and recover the congested network using appropriate feedback mechanisms. The working of the congestion control [16] mechanism is described below.
Congestion Control
The key actions in congestion control are reducing the load at a node, de-routing traffic, and balancing the load when congestion occurs. Congestion avoidance [27] schemes, in contrast, provision for the worst-case requirements of links, buffers, memory, etc., so that resources remain under-utilized. For example, to tolerate a single link failure we must provision another link; that is, in congestion avoidance two links are needed where only one is used when there is no failure.
In contrast to such open-loop (congestion avoidance) techniques, which aim to prevent congestion from occurring, closed-loop (congestion control) techniques deal with congestion after it occurs. The basic idea behind closed-loop techniques is to monitor the network and, when congestion is detected, perform remedial actions such as reducing the load or de-routing traffic.
Fig. 1: Congestion Control
In Figure 1, data is transmitted from sender S to destination D through path 1, where routers 1, 2 and 3 have low capacity compared to routers 4 and 6. If the arrival data rate at router 5 is sufficiently high, congestion occurs. The solution is either to reduce the data rate or to de-route the packets for destination D through paths 2 and 3. The next sub-section deals with the causes of congestion.
Factors Responsible for the Occurrence of Congestion
Limited memory space, channel bandwidth and router capacity, network load, link failures, and heterogeneous channel bandwidths are the key factors that force the network into congestion. These factors are discussed in detail below.
Effect of Buffer Space
Fig. 2: Example for Congestion
The buffer space available at a node is limited, and so is the amount of information that can be stored there. When sufficient buffer space is available, more and more packets received at the node accumulate and are transmitted later; these packets may suffer very large delays, which eventually leads to congestion. With a small buffer, on the other hand, packets are dropped frequently as the load increases and throughput drops, because packets have little room to wait for their turn. Thus neither a very large nor a very small buffer reduces congestion in all cases: for applications with a random load a larger buffer may be beneficial, whereas a medium-sized buffer is preferable for a constant load. The next factor is the effect of channel bandwidth.
Effect of Channel Bandwidth
Channel bandwidth is the amount of data a channel can transmit in a given time. With low channel bandwidth and a large buffer, packets may suffer long delays, whereas with a small buffer and high load more and more packets are dropped. If the channel capacity is high, the channel is starved at low load. As the load increases, performance in terms of both delay and throughput improves, and this trend continues up to the saturation point (where channel capacity equals network load). Beyond saturation, performance degrades with every further increase in load and becomes worst at a certain load, at which point the network sinks into congestion. The effect of congestion can be reduced by enhancing the channel capacity, but that requires more cost, power, weight, etc.
Effect of Link Failure
When a link fails, all packets in transit on that link are lost and must be retransmitted. The number of retransmissions depends on the failure rate and the channel bandwidth. Thus the effective load of the network is the sum of the newly arrived packets (the actual load in the fault-free case) and the packets that need to be retransmitted. For a high failure rate the effective load is much higher than the actual load, which leads to overcrowding of the network. The effect of these factors is explained with the example given in Figure 2.

Consider the network illustrated in Figure 2, where sources 1 and 2 send traffic [18] to destination nodes D1 and D2 respectively. There are five links, labeled 1 through 5, with different channel capacities as shown in the figure. If the link between x and y fails, the packets from x to y must be retransmitted, so packets accumulate at x and congestion develops at node x. This paper addresses the problem of improving congestion control [6] through adaptive adjustment (decrease) of the load.
The key parameter for measuring congestion is the queue delay suffered by a packet at each node: when this delay grows large, we say there is congestion. The goal of congestion control is therefore to minimize the time a packet spends buffered in the queue at each node and consequently to achieve better network utilization and throughput.
The rest of the paper is organized as follows. Section 2 deals with related work. Section 3 describes the BDP congestion control scheme and Section 4 presents the BDQP congestion control approach along with its system model. Finally, Section 5 gives concluding remarks.
RELATED WORK
Many applications need broadcasting of packets, in which a single device transmits a packet to all other devices in a given address range. In broadcasting, a packet sent by a node is received by all nodes in the network, whereas in multicasting only a group of nodes receives the packet. Multicast, supported by most modern routers, enables a single device to communicate with a specific set of hosts that is not defined by any standard IP address and mask combination; a special set of addresses is used for multicast communication. This work focuses on multicasting; however, it also applies to unicast (by setting the group size to one node) and to broadcasting (by forming a group consisting of every node in the network). The related congestion control approaches are discussed below.
Smooth Multi-rate Multicast Congestion Control (SMCC) [15] is a remarkable attempt to merge multi-rate and single-rate approaches. Although SMCC is a multi-rate protocol, it is sender adaptive and runs TCP-Friendly Multicast Congestion Control (TFMCC) [16] on each layer, requiring complex mechanisms to control congestion. A receiver willing to join the next layer i must first join all binary counting layers BCLj with j <= i at the prescribed join times; fitting the single-rate BCL join mechanism into a multi-rate scheme is itself a complex problem. Moreover, the loss rate measured by non-CLR receivers does not provide accurate information about the bandwidth bottleneck.
The authors note this shortcoming in their results, where receivers perform failed join-experiments due to inaccurate rate estimates. SMCC also poses a challenge to the multicast routing protocol because of its highly dynamic additional BCL layers: for a large session there is an additional BCL for each layer above the base layer, which leads to frequent changes in the multicast routing trees of the additional layers. The finer the layer granularity, the more frequently receivers change layers and the harder it is for the multicast routers.
Receiver-Driven Layered Congestion Control (RLC) [1] uses distributed congestion control, which may produce more accurate information about the bottleneck. RLC requires no communication among receivers and no feedback from receivers to the sender; this lack of feedback gives RLC virtually unlimited scalability. Shared learning in RLC is implemented using special synchronization packets (SP). Each receiver receives the SP at almost the same time and performs its join-experiment simultaneously, which is crucial in scenarios where multiple receivers are behind the same bottleneck link: using SPs and shared learning they all experiment and learn about a failed join-experiment at the same time. The sender periodically emits bursts of packets to one layer at a time such that the bandwidth of the layer is doubled. This increases queue occupancy and causes more packet losses, which signal the receivers not to increase their subscription level. Receiver-Driven Layered Multicast (RLM) [19, 24] is another mechanism that provides better interplay between receiver and source; it splits a multicast session into multiple layers and has a passive sender that takes no active role in congestion control and merely emits data to the layers. Shared learning is also crucial to the scalability of RLM.
FLID-DL [11] is a protocol for improving RLC, which is presented in [19]. The protocol uses a Digital Fountain encoding, which allows a receiver to recover the original data upon reception of a fixed number of distinct packets, regardless of which specific packets are lost. FLID-DL introduces the concept of Dynamic Layering (DL) to reduce IGMP leave latencies: with dynamic layering, a receiver can reduce its reception rate simply by not joining any additional layer. FLID-DL thus offers a few improvements over RLC, but it still does not behave very fairly towards TCP sessions.
Legout et al. proposed the PLM congestion control protocol [2]. Compared with the above protocols, the difference in this approach is the use of two key mechanisms: Fair Queuing (FQ) at the routers and receiver-side Packet-Pair (PP) probing. Networks with FQ have many characteristics that greatly facilitate the design of congestion control protocols and also improve TCP-friendliness. In an FQ network environment, the bandwidth available to a flow can be determined using the PP method. PP can sense bandwidth changes in the network before congestion happens, unlike the join-experiment and equation-based approaches used by other protocols. Through the use of FQ, PLM becomes both intra-protocol and inter-protocol fair. However, it is unlikely that the FQ scheduler can be implemented or configured in the same way in all routers across the whole Internet in the foreseeable future. For wireless systems, on the other hand, it is possible for FQ to be implemented at all Base Stations (BS), which makes PLM a possible ideal congestion control protocol for future hybrid networks.
However, through shared learning other receivers can only learn what does not work: unless shared learning reveals the failure of a fellow group member's join-experiment, the only way to learn is to perform a join-experiment oneself. RLM only uncovered the tip of the iceberg and lacks some desired properties. There are scenarios where RLM is unfair both to other kinds of flows and to other RLM flows, and network issues such as receiver consensus, group maintenance and fairness affect its performance. Despite these shortcomings, RLM inspired the community to study the receiver-driven approach, producing significant advances in the field.
Sender-Adaptive Receiver-Driven Layered Multicast (SARLM) [17, 23] is similar to RLM, but here the source takes the decisions: the source controls congestion by having receivers join or leave layers based on the network bandwidth. In this approach the sender transmits multiple data streams and sends feedback [12] requests to the receivers; all receivers accept the request and respond quickly to the sender. The sender classifies the receivers into different classes according to their feedback. Each receiver then calculates its available bandwidth and its current bandwidth and, according to SARLM, performs a join/leave experiment based on the comparison of the two. The next section describes the BDP congestion control scheme.
BDP CONGESTION CONTROL SCHEME
BDP (Based on Delay Parameters) is a receiver-driven cumulative layered scheme. BDP uses delay parameters to estimate the path's available bandwidth; if the available bandwidth on a path can sustain one more layer, a BDP receiver may join one more layer. Congestion control in BDP is based on the packet arrival delay interval and on packet-loss events. The working of BDP is governed by two major points:
(i) the arrival delay interval for network congestion control, and
(ii) one-way delay and bandwidth estimation.
Both of these points fall under delay properties. The main parameters of the BDP algorithm are described below.
A network path consists of several links, and the one-way delay of a packet is the sum of all delays (transmission, stabilization, queue and error) at each link along the path. If the transmission, stabilization and error delays are constant, then the difference between the delays of two successive packets i and i+1 is due to the queueing delay.

A packet is sent at a time called the sending time and received at the destination at a time called the receive time. For an end-to-end path, the one-way delay is the difference between the receive time and the sending time. The inter-arrival time is the sending interval plus the change in one-way delay, or equivalently the difference between the receive times of two successive packets.

From the one-way delay (OWD) the receiver calculates the minimum, maximum and average delay. The average OWD is the sum of all delays divided by the total number of packets. Using the average delay the receiver calculates two other factors, the increase delay (SRdeg) and the decrease delay (SRdet) [4], which help it determine the available bandwidth and decide whether to join or leave a layer (a small sketch of these computations follows).
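The following is a minimal sketch (not the authors' code) of the delay bookkeeping described above, assuming each packet carries its sending timestamp and the receiver records the corresponding receive timestamp; the function names are illustrative.

```python
def owd_stats(send_times, recv_times):
    """Return (min, max, average) one-way delay for a packet trace."""
    owd = [r - s for s, r in zip(send_times, recv_times)]  # OWD per packet
    avg = sum(owd) / len(owd)                              # average OWD
    return min(owd), max(owd), avg

def inter_arrival_times(recv_times):
    """Inter-arrival time = difference of successive receive times."""
    return [recv_times[i + 1] - recv_times[i] for i in range(len(recv_times) - 1)]

# Example: four packets sent 10 ms apart with growing queueing delay.
send = [0.000, 0.010, 0.020, 0.030]
recv = [0.005, 0.016, 0.028, 0.041]
print(owd_stats(send, recv))          # approx (0.005, 0.011, 0.0075)
print(inter_arrival_times(recv))      # approx [0.011, 0.012, 0.013]
```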
The main steps of BDP are the following:
1. The BDP sender encodes the multimedia signal into layers and transmits each layer as a separate multicast group with its own rate.
2. The sender periodically sends a bandwidth-probing stream for the layers.
3. A BDP receiver establishes a unicast connection to the sender to obtain the session information: the rate of each multicast group and the description of each layer.
4. The receiver subscribes to the correct number of layers according to its estimate of the path's available bandwidth, obtained by measuring the parameters SRdeg and SRdet.
5. If packet loss occurs and the inter-arrival delay is greater than a specific threshold, the congestion mechanism is performed.
The bandwidth-calculation parameters, where k is the number of packets, are given in Equations (1) and (2); a sketch of the resulting receiver loop is given below.
(1)
(2)
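As a minimal sketch (not the authors' code), the receiver behaviour of steps 1 to 5 can be summarised as follows; the estimate of available bandwidth from SRdeg/SRdet (Equations (1) and (2)) is assumed to be computed elsewhere, and all names are illustrative.

```python
def bdp_receiver_step(layer_rates, subscribed, available_bw,
                      loss, inter_arrival_delay, delay_threshold):
    """Return the layer index to keep subscribed after one measurement period.

    layer_rates  -- cumulative rates of layers 0..n (bits/s)
    subscribed   -- index of the highest layer currently joined
    available_bw -- path bandwidth estimated from SRdeg/SRdet
    """
    if loss and inter_arrival_delay > delay_threshold:
        return max(subscribed - 1, 0)          # congestion mechanism: drop a layer
    if subscribed + 1 < len(layer_rates) and available_bw >= layer_rates[subscribed + 1]:
        return subscribed + 1                  # path can sustain one more layer
    return subscribed                          # otherwise keep the current subscription
```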
Better network utilization can be obtained using the modified multicast congestion control scheme, which is described in the next section.
BDQP MULTICAST CONGESTION CONTROL
In the modified approach, Bandwidth Delay Quality Parameter (BDQP), the sender adapts [14] to conditions such as the source rate, network bandwidth, buffer space at a node, a node's packet-processing capacity, queue delay, etc., and takes decisions over IP multicast to obtain the best data quality [22, 26]. Beyond BDP, the following modification is made in the proposed scheme to enhance the quality of the received data: when a receiver returns feedback, the source takes its decision on the basis of the queue-delay parameter, which yields lower packet loss as well as better network utilization.
Architecture for BDQP
The architecture used for BDQP is shown in Figure 3. It consists of one sender, also known as the server, and n receivers. The server sends packet streams through IP multicast to the receivers. If a receiver has the capability to receive all of the data packet streams, the quality of the received stream is high; if it cannot accept all the streams sent by the server, the quality of the received stream is degraded. Every client sends feedback through a unicast connection to the feedback analyzer at the sender. According to the feedback received, the server tells each client whether it should join or leave a layer.
Fig. 3: BDQP Architecture
The sender's role is to dynamically generate multiple data streams and, in turn, multiple multicast groups. Each receiver monitors its network condition and sends sparse feedback packets containing statistical information about that condition back to the analyzer; the receivers also make their own subscription decisions.

As an important system component, the feedback analyzer collects the feedback packets. The sender first classifies the receivers into groups dynamically based on their reported network parameters, and then adjusts the layered multicast scheme so that each group of receivers is expected to subscribe to a particular set of layers. This jointly provides the best stream quality for the receivers in each group under their given network conditions. The sender leaves the choice of which multicast groups to join to the receivers, so the approach remains receiver-driven.
System Model for BDQP
In this paper, the BDQP approach uses IP-based multicast to send data from the sender to the destinations through the Distance Vector Multicast Routing Protocol. A multi-layer management scheme is used in the modified model to organize the various layers, together with group management to manage the groups of receivers. Drop-Tail queue management is used for the different types of queues in the model. The following subsections deal with IP-based multicasting, routing and group management, layer management, and queue management.
IP Multicast: IP Multicast is an advanced group
communication mechanism designed to operate on the
Internet Protocol (IP) [21] and User Datagram Protocol
(UDP). Normal IP packets contain a source and a destination address, and routers use the destination address to forward the packets in the right direction. In multicasting the addresses are interpreted differently: by definition, multicast packets may have multiple recipients, which cannot all be addressed individually in a single IP packet. Instead of stamping packets with specific recipients, the packets are addressed to a multicast group, independent of the recipients.
Theoretically, any host on the Internet may be a sender, and any host on the Internet may subscribe to a multicast group. When a packet is sent to the group, all members of the group receive it, independent of their physical location (compare with LAN broadcast) or logical location (compare with IP broadcast). The multicast [9] routers replicate the packet only once per link: if there are N > 1 recipients behind a link, unicast communication would require N packets with the same content to be sent explicitly, one to each of the N recipients, whereas multicast requires the packet to be forwarded only once.
In the BDQP architecture, the server generates multiple packet streams and the receivers (clients) accept them. Every client has a unique IP (logical) address, and the sender sends the packet stream via IP multicasting, where one copy of a packet is sent to a group address. Feedback, however, is sent by unicast: every client fills in its own feedback and sends it to the server.
Routing and Group Management: When a packet destined to a multicast group g is received on interface i_src, the router forwards the packet to all interfaces i ≠ i_src that have subscribed members (a small sketch of this rule is given at the end of this subsection). The challenge of multicast routing lies in group management: multicast routers must keep track of memberships so that at any time they can decide whether to forward packets of group g to an interface i. The more dynamic a group is, the more the routers must interact to share information on group memberships. A host joins a multicast group via the Distance Vector Multicast Routing Protocol (DVMRP) [10].
DVMRP, originally defined in RFC 1075, is derived from the Routing Information Protocol (RIP); the difference is that RIP forwards unicast packets based on information about the next hop toward a destination, while DVMRP constructs delivery trees based on information about the previous hop back to the source. The earlier version of this distance-vector routing algorithm constructs delivery trees using the Truncated Reverse Path Broadcasting (TRPB) algorithm; later, DVMRP was enhanced to use Reverse Path Multicasting (RPM). Standardization of the latest version of DVMRP is being conducted within the Internet Engineering Task Force (IETF), in the Inter-Domain [25] Multicast Routing (IDMR) working group. As shown in Figure 3, the sender sends a packet stream to a group of receivers. These packets are flooded towards the receivers on a hop-by-hop basis (IP multicasting). When the sender forwards a packet stream to the next hop using the DVMRP algorithm, each host determines the minimum-cost next hop among its neighbouring hosts, where cost is based on delay and bandwidth.
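Below is a minimal sketch (not from the paper) of the membership-based forwarding rule described at the beginning of this subsection: a multicast router replicates a packet of group g onto every interface, other than the one it arrived on, that has at least one subscribed member. The class and method names are illustrative.

```python
from collections import defaultdict

class MulticastRouter:
    def __init__(self):
        # memberships[group] = set of interfaces with subscribed receivers
        self.memberships = defaultdict(set)

    def join(self, group, interface):
        self.memberships[group].add(interface)

    def leave(self, group, interface):
        self.memberships[group].discard(interface)

    def forward(self, group, incoming_interface):
        """Return the interfaces the packet should be replicated onto."""
        return [i for i in self.memberships[group] if i != incoming_interface]

# Example: receivers of group "g" behind interfaces 1 and 2; a packet arrives on 0.
r = MulticastRouter()
r.join("g", 1)
r.join("g", 2)
print(r.forward("g", 0))   # [1, 2] (order may vary)
```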
Layered Management: In a layered multicast session a source emits data to a set of cumulative multicast groups called layers. Each layer i carries data with bandwidth Li, so that the cumulative data rate up to layer i is

Bi = Σ (j = 0 to i) Lj,  for i = 1, 2, 3, …, n.

We call this the cumulative layered data organization. In this organization the sender must emit data such that the cumulative data carried up to layer i is a subset of the cumulative data carried up to layer j for all i < j. That is, the base layer carries the minimum set of data to deliver; the second layer delivers additional data that may be decoded in conjunction with the base layer; the third layer provides further additional data, and so on. As shown in Figure 3, the packets are delivered by the server to every node. Each client checks the available layer bandwidth and its current bandwidth: if the available bandwidth is more than the current bandwidth, the client joins the layer; otherwise it leaves the layer (see the sketch below).
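The following minimal sketch (not from the paper) shows the cumulative-rate bookkeeping and the join/leave check just described; the layer rates are illustrative values, not ones used by the authors.

```python
layer_rates = [64_000, 64_000, 128_000, 256_000]   # L0..L3 in bits/s (illustrative)

def cumulative_rate(i):
    """B_i = sum of L_0 .. L_i."""
    return sum(layer_rates[:i + 1])

def next_subscription(current_layer, available_bw):
    """Join the next layer if the available bandwidth can carry its cumulative
    rate; drop a layer if even the current cumulative rate does not fit."""
    if current_layer + 1 < len(layer_rates) and available_bw >= cumulative_rate(current_layer + 1):
        return current_layer + 1
    if available_bw < cumulative_rate(current_layer) and current_layer > 0:
        return current_layer - 1
    return current_layer

print(cumulative_rate(2))               # 256000 bit/s carried by layers 0-2
print(next_subscription(1, 300_000))    # 2: the 256 kbit/s cumulative rate fits
```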
Active Queue Management: With Active Queue Management [3, 13], congestion control is performed at the end systems by adjusting the data sending rate to match the congestion state of the network. An explicit congestion signal can be a packet loss (a packet explicitly dropped by a router) or a marked packet (marked by a router instead of being dropped). The simplest queue management algorithm is Drop-Tail: it simply drops packets when they can no longer fit in the output queue of a router interface. Drop-Tail is easy to implement and is inherently present in all algorithms in the form of finite buffer space. Its unfortunate property is that it signals congestion only after congestion has occurred.
As shown in the BDQP architecture, the sender sends the packet stream to the clients. The packet stream is stored and forwarded towards the receiver hop by hop, and each network node has a queue in which this store-and-forward takes place. In this architecture the queue is FIFO-based and is called a drop-tail queue. A queue is attached to every node: server node, router nodes and client node. All queue properties, such as queue delay, dropped packets, enqueued packets and dequeued packets, can be obtained through the queue functions. If the buffer is full, packets are dropped; hence, if the server sends too many data streams, the queue buffer overflows and more packets are dropped at the node. A sketch of such a drop-tail queue follows.
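A minimal sketch (not from the paper) of such a drop-tail FIFO queue is given below: packets are enqueued until the finite buffer is full, after which new arrivals are simply dropped, and the queue delay of a packet is its dequeue time minus its enqueue time, as in Equation (3) in the next section.

```python
from collections import deque

class DropTailQueue:
    def __init__(self, limit):
        self.limit = limit          # maximum number of packets (queue limit)
        self.buffer = deque()       # FIFO buffer of (packet, enqueue_time)
        self.dropped = 0            # packets dropped at the tail

    def enqueue(self, packet, now):
        if len(self.buffer) >= self.limit:
            self.dropped += 1       # buffer full: drop the arriving packet
            return False
        self.buffer.append((packet, now))
        return True

    def dequeue(self, now):
        packet, enqueue_time = self.buffer.popleft()
        queue_delay = now - enqueue_time     # QD(i) = DT(i) - ET(i)
        return packet, queue_delay
```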
BDQP PROPOSED APPROACH
We propose a modified congestion control scheme, based on the Based on Delay Parameter (BDP) scheme, called Bandwidth Delay Quality Parameter (BDQP). As described above, the BDP algorithm adapts on the basis of the network inter-arrival delay and controls congestion by using delay parameters to calculate the bandwidth. In the proposed scheme, adaptive decisions are taken at the source node on the basis of the feedback it receives from the receiver nodes. When receivers return feedback, the source takes its decision on the basis of the queue-delay parameter so as to control congestion and provide better data quality. A receiver joins or leaves a group according to the queue delay: joining a group means the receiver accepts all packets of the layer, just as the other group members (with the same group address) do; otherwise it leaves the group. The main steps are the following:
1. We measure the queue delay of packets stored in the queue. The packet stream is stored and forwarded from one hop to the next and held in a queue; if there is a problem in the network, such as a link failure or a congested route, packets wait in the queue.
2. If the network is congested, the receiver leaves a group. As discussed above, the packet stream follows the store-and-forward scheme and the packet queue delay may decrease or increase; if it exceeds a threshold value, we say that the network is congested, the receiver leaves a layer, and the quality of the received data is degraded.
3. If the network is under-utilized, the receiver joins a layer to obtain higher-quality data. It may happen that a packet stream arrives and is forwarded to the next hop after waiting in the queue for very little time; in that case we say the network is under-utilized and the receiver may join a layer.
The sender sends a feedback request to the receivers and receives their feedback responses. From this feedback the sender determines whether or not congestion has occurred. In our approach, packets sent by the source are stored at intermediate nodes (each node has a buffer); if the amount of stored data grows beyond the buffer size, packet loss occurs. The waiting time in the buffer is defined as the queue delay. A packet is sent by the source and received by a node (a router or a receiver). If the source rate exceeds the link capacity, packets are stored in a queue, which is bounded by a maximum size known as the queue limit. The time at which the ith packet arrives in the queue is called its enqueue time, ET(i), and the time at which it departs from the queue is called its dequeue time, DT(i). If a link is a bottleneck, packets wait in the queue. The queue delay of the ith packet, QD(i), is

QD(i) = DT(i) – ET(i)                (3)
In the BDQP approach, the sender sends multicast packets to the receivers and the queue delay is calculated at the receiver node as in Equation (3). This queue delay is compared with a threshold value QTh. If the queue delay of a packet is greater than the threshold, we say that congestion has occurred: the receiver leaves a layer and sends a feedback reply to the sender, and the sender acknowledges the congestion and decreases the traffic rate. If the queue delay is less than the threshold, the network is under-utilized: the receiver joins a layer and sends feedback to the sender, and when the sender sees this under-utilization report it increases the traffic rate to improve the quality of the data. A sketch of this decision logic is given below.
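The decision logic just described can be summarised by the following minimal sketch (not the authors' implementation). The threshold q_th and the rate-adjustment step are illustrative assumptions, not values taken from the paper.

```python
def receiver_decision(queue_delay, q_th):
    """Receiver side: compare the measured queue delay against the threshold QTh."""
    if queue_delay > q_th:
        return "CONGESTED"        # leave a layer and report congestion
    return "UNDER_UTILIZED"       # join a layer and report spare capacity

def sender_adjust_rate(current_rate, feedback, step=0.1):
    """Sender side: decrease the rate on congestion feedback, otherwise
    increase it to deliver better data quality."""
    if feedback == "CONGESTED":
        return current_rate * (1 - step)
    return current_rate * (1 + step)

# Example: a 50 ms queue delay against a 20 ms threshold signals congestion.
fb = receiver_decision(queue_delay=0.050, q_th=0.020)
print(fb)                                  # CONGESTED
print(sender_adjust_rate(1_000_000, fb))   # 900000.0 bit/s
```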
CONCLUSION
We have proposed a new mechanism, based on queue delay, to minimize congestion. In the proposed approach the source adapts to the network feedback sent by the receivers and increases or decreases the traffic rate according to the queue delay. When the packet queue delay exceeds the threshold queue delay, congestion has occurred and the source decreases the traffic rate to minimize it; otherwise the source increases the traffic rate to utilize the network and provide better data quality. We plan to simulate, analyse and compare BDP and BDQP in NS-2 [20].
REFERENCES
[1] Legout, A. and Biersack, E.W., "Pathological behaviors for RLM and RLC", in Proc. International Conference on Network and Operating System Support for Digital Audio and Video, Chapel Hill, NC, USA, pp. 164–172, June 2000.
[2] Legout, A. and Biersack, E.W., "PLM: fast convergence for cumulative layered multicast transmission schemes", in Proc. ACM SIGMETRICS 2000, Santa Clara, CA, USA, pp. 113–122, June 2000.
[3] Athuraliya, S., Li, V.H., Low, S.H. and Yin, Q., "REM: Active Queue Management", IEEE Network, vol. 15, pp. 48–53, October 2002.
[4] Ze-qiang, B.J.W., Hao-he, K. and Guang Zhao, Z., "Congestion Control Protocol Based on Delay Parameters for Layered Multicast Communication", IEEE Computer Society of India, 2007.
[5] Yang, C. and Reddy, A.V.S., "A Taxonomy for Congestion Control Algorithms in Packet Switching Networks", IEEE Network, vol. 9, no. 4, pp. 34–45, 1995.
[6] "Congestion Control in Computer Networks: Issues and Trends", IEEE Network Magazine, pp. 24–30, May 1990.
[7] Sisalem, D. and Wolisz, A., "MLDA: A TCP-friendly congestion control framework for heterogeneous multicast environments", in Proc. International Workshop on Quality of Service, Pittsburgh, PA, USA, June 2000.
[8] Yin, D.S., Liu, Y.H., et al., "A new TCP-friendly congestion control protocol for layered multicast", in Proc. IASTED Conference on Internet and Multimedia Systems and Applications, Innsbruck, Austria, Feb. 2006.
[9] Deering, S., "Multicast Routing in Internetworks and Extended LANs", in Proc. ACM SIGCOMM, Aug. 1988.
[10] "Distance Vector Multicast Routing Protocol", Request for Comments 1075, November 1988.
[11] Byers, J., Frumin, M., et al., "FLID-DL: congestion control for layered multicast", in Proc. NGC 2000, Palo Alto, CA, USA, pp. 71–81, Nov. 2000.
[12] Bolot, J.C., Turletti, T. and Wakeman, I., "Scalable feedback control for multicast video distribution in the Internet", in Proc. ACM SIGCOMM, pp. 58–67, 1994.
[13] Bennett, J.C. and Zhang, H., "Hierarchical packet fair queueing algorithms", IEEE/ACM Transactions on Networking, vol. 5, no. 5, pp. 675–689, 1997.
[14] Singh, K., Yadav, R.S., Yadav, R. and Siva Kumaran, A.R., "Adaptive Multicast Congestion Control", in Proc. International Conference on Information Technology, Haldia Institute of Technology, Haldia, 19–21 March 2007, pp. 299–306, ISBN 81-8976674-0.
[15] Kwon, G.I. and Byers, J., "Smooth multi-rate multicast congestion control", in Proc. IEEE INFOCOM, vol. 2, pp. 1022–1032, March 2003.
[16] Vicisano, L., Rizzo, L. and Crowcroft, J., "TCP-like congestion control for layered multicast data transfer", in Proc. Conference on Computer Communications (IEEE INFOCOM), pp. 996–1003, March 1998.
[17] Jain, M. and Dovrolis, C., "End-to-end available bandwidth: measurement methodology, dynamics, and relation with TCP throughput", IEEE/ACM Transactions on Networking, vol. 11, no. 4, pp. 537–549, 2003.
[18] Welzl, M., Network Congestion Control: Managing Internet Traffic, Wiley, India, pp. 7–15, 69–77, 93–96, 2005.
[19] McCanne, S., Jacobson, V. and Vetterli, M., "Receiver-driven layered multicast", in Proc. ACM SIGCOMM, New York, USA, pp. 117–130, August 1996.
[20] The Network Simulator ns-2: http://www.isi.edu/nsnam/ns/
[21] Postel, J., "Internet Protocol", Request for Comments 791, September 1981.
[22] Su, P. and Gellman, M., "Using Adaptive Routing to Achieve Quality of Service", Performance Evaluation, vol. 57, ScienceDirect, 2004.
[23] Zhang, Q., Guo, Q., Ni, Q., Zhu, W. and Zhang, Y.-Q., "Source-adaptive multi-layered multicast algorithms for real-time video distribution", IEEE/ACM Transactions on Networking, vol. 8, no. 6, pp. 720–733, 2006.
[24] McCanne, S., Vetterli, M. and Jacobson, V., "Low-complexity video coding for receiver-driven layered multicast", IEEE Journal on Selected Areas in Communications, vol. 15, no. 6, pp. 982–1001, 1997.
[25] Kumar, S., Radoslavov, P., Thaler, D., Alaettinoglu, C., Estrin, D. and Handley, M., "The MASC/BGMP Architecture for Inter-domain Multicast Routing", in Proc. ACM SIGCOMM, pp. 93–104, April 1998.
[26] Johansen, S., Kim, A.N. and Perkis, A., "Quality Incentive Assisted Congestion Control for Receiver-Driven Multicast", in Proc. IEEE ICC, 2007.
[27] Jacobson, V., "Congestion Avoidance and Control", in Proc. ACM SIGCOMM, 1988.