CS 218 Final Project
Fall 2003
Evaluation of Explicit Control Protocol (XCP) and
Comparison with TCP Westwood
Shiva Navab shiva_n@ee.ucla.edu
Jinsong Yang jinsung@ucla.edu
Professor Mario Gerla
Tutor: Ren Wang
Abstract
Many congestion control protocols exist today, aimed at different network problems. In this
paper, we evaluate XCP, a protocol proposed in [3], by comparing its performance with the TCP
Westwood protocol, which is proposed in [1].
In our comparison, we model XCP and TCP Westwood using a control theory framework.
Our work focuses on demonstrating the performance (stability, efficiency) of the protocols with
respect to the link capacity, the round trip delay, and the number of sources.
1. Introduction
1.1. XCP
As the per-flow product of bandwidth and latency grows, TCP becomes inefficient and prone to instability, regardless of the queuing scheme [3]. XCP
(Explicit Control Protocol) is a new congestion control protocol designed to address this
problem. XCP generalizes the Explicit Congestion Notification proposal (ECN). In addition,
XCP introduces the new concept of decoupling utilization control from fairness control.
1.1.1. Congestion Header
Every XCP packet carries a congestion header, which has three fields: H_cwnd, H_rtt, and
H_feedback (Figure 1.1.1-1). XCP uses the congestion header to communicate a flow’s state to
the routers and to carry feedback from the routers back to the receivers. The field H_cwnd is the
sender’s current congestion window, whereas H_rtt is the sender’s current RTT estimate. These
are filled in by the sender and never modified in transit. The remaining field, H_feedback, takes
positive or negative values and is initialized by the sender according to its requirements. Routers
along the path modify this field to directly control the congestion windows of the sources.
Figure 1.1.1-1 XCP congestion header
1.1.2. Sender Algorithm
An XCP sender maintains a congestion window of the outgoing packets, cwnd, and an estimated
round trip time, rtt. When a packet leaves the source, the sender attaches a congestion header to
the packet and sets the H_cwnd field to its current window size cwnd, and H_rtt to its current
estimated rtt. If the packet is the first packet of a flow, the H_rtt field is set to zero to tell the
routers that the sender does not have a valid estimated rtt yet.
The sender sets H_feedback to its desired window increase, which it calculates using the
following equation:

H_feedback = (r * rtt - cwnd) / cwnd    (1)

where r is the desired application rate and cwnd is the number of packets in the current
congestion window.
Upon the arrival of an ACK, the sender examines the H_feedback field of the ACK header. If
H_feedback contains a positive number, the sender increases its window size. Otherwise, it
decreases the window size. The formula used to calculate the new cwnd is:

cwnd = max(cwnd + H_feedback, s)    (2)

where s is the packet size.
In case of packet loss, XCP reacts in a manner similar to TCP [3].
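To make the sender side concrete, the following Python sketch applies equations (1) and (2). The CongestionHeader structure, the desired rate r, and the event hooks are illustrative assumptions, not the ns-2 implementation used later in this report.

from dataclasses import dataclass

@dataclass
class CongestionHeader:
    H_cwnd: float      # sender's current congestion window
    H_rtt: float       # sender's current RTT estimate (0 means "no estimate yet")
    H_feedback: float  # desired window change; routers may overwrite this

class XcpSender:
    def __init__(self, desired_rate_r, packet_size_s=1.0):
        self.cwnd = 1.0          # congestion window, in packets
        self.rtt = 0.0           # stays 0 until the first RTT sample
        self.r = desired_rate_r  # desired application rate (packets per second)
        self.s = packet_size_s   # s in equation (2); one packet, since cwnd is in packets here

    def on_send(self):
        """Attach a congestion header when a packet leaves the source."""
        if self.rtt > 0:
            desired = (self.r * self.rtt - self.cwnd) / self.cwnd  # equation (1)
        else:
            desired = 0.0  # first packet of the flow: no valid rtt yet
        return CongestionHeader(self.cwnd, self.rtt, desired)

    def on_ack(self, echoed: CongestionHeader):
        """Apply the (possibly router-modified) feedback echoed in the ACK."""
        self.cwnd = max(self.cwnd + echoed.H_feedback, self.s)  # equation (2)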
1.1.3. Receiver Algorithm
Similar to a TCP receiver, an XCP receiver sends back an ACK upon the arrival of a packet. In
addition, the receiver copies the congestion header from the packet and attaches it to the ACK.
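A minimal receiver sketch, under the same illustrative assumptions as above: the only XCP-specific step is echoing the congestion header of the data packet into the ACK.

from dataclasses import dataclass, replace

@dataclass
class CongestionHeader:  # same three fields as in the sender sketch above
    H_cwnd: float
    H_rtt: float
    H_feedback: float

def build_ack_header(data_header: CongestionHeader) -> CongestionHeader:
    """Copy the congestion header from the received packet into the ACK.

    Routers along the path may already have reduced H_feedback; the receiver
    echoes the header unchanged so the sender sees the routers' final value.
    """
    return replace(data_header)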
1.1.4. Router Algorithm
XCP’s queuing scheme is an extension of RED or Drop-Tail. The queuing scheme has two
parts: the efficiency controller (EC) and fairness controller (FC). The router maintains an
average RTT of the flows traversing the link. Both EC and FC compute estimates over this
average RTT.
The Efficiency Controller
The purpose of the efficiency controller (EC) is to maximize the link utilization while
minimizing the drop rate and the persistent queue [3]. It deals only with the aggregate traffic and
does not care about fairness among flows, which is the job of the fairness controller (FC).
The EC uses the following equation to compute φ, the desired increase or decrease in the
aggregate traffic transmitted over an average RTT:

φ = α * d * S - β * Q    (3)

where α and β are constants, set to 0.4 and 0.226 respectively [3], d is the average RTT, S is the
spare bandwidth, defined as the difference between the link capacity and the incoming traffic
rate, and Q is the persistent queue size.
Equation (3) makes the feedback proportional to the spare bandwidth because, when S ≥ 0, the
link is underutilized and we want to send positive feedback, while when S < 0, the link is
congested and we want to send negative feedback. To drain the persistent queue, XCP also makes
the aggregate feedback proportional to the persistent queue. Finally, since the feedback is in
bits, the spare bandwidth S is multiplied by the average RTT.
The feedback is divided into small chunks that will be assigned to different flows, which is the
job of the FC.
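To make the efficiency controller concrete, the following Python sketch shows one way equation (3) could be evaluated once per control interval (an average RTT). The variable names and the bookkeeping of the input rate and queue are illustrative assumptions, not the ns-2 XCP code we used.

ALPHA = 0.4   # alpha in equation (3), value from [3]
BETA = 0.226  # beta in equation (3), value from [3]

def aggregate_feedback(avg_rtt_sec, capacity_bps, input_rate_bps, persistent_queue_bits):
    """Equation (3): phi = alpha * d * S - beta * Q, with everything in bits.

    S is the spare bandwidth (capacity minus incoming traffic rate), so phi is
    positive when the link is underutilized and negative when the input traffic
    exceeds the capacity or a persistent queue has built up.
    """
    spare_bw = capacity_bps - input_rate_bps          # S (can be negative)
    return ALPHA * avg_rtt_sec * spare_bw - BETA * persistent_queue_bits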
The Fairness Controller
The job of the FC is to apportion the feedback to individual packets to achieve fairness [3].
The FC relies on the same principle TCP uses to converge to fairness, namely Additive-Increase
Multiplicative-Decrease (AIMD). The per-packet feedback is computed according to the
following policy:
• If φ > 0, allocate it so that the increase in throughput of all flows is the same.
• If φ < 0, allocate it so that the decrease in the throughput of a flow is proportional to its
current throughput.
• If φ = 0, convergence is achieved by employing a technique called bandwidth shuffling.
This is the simultaneous allocation and de-allocation of bandwidth such that the total
traffic rate does not change, yet the throughput of each individual flow changes gradually
to approach the flow’s fair share.
The amount of traffic to shuffle, h, is computed as:

h = max(0, 0.1 * y - |φ|)    (4)

where y is the input traffic in an average RTT and 0.1 is the shuffled fraction of traffic [3].
To enforce the above policies, XCP computes the per-packet feedback assigned to packet i as the
combination of a positive feedback p_i and a negative feedback n_i:

H_feedback_i = p_i - n_i    (5)
The positive feedback p_i is given by:

p_i = ξ_p * rtt_i^2 / cwnd_i    (6)

where ξ_p is a constant. The total increase in the aggregate traffic rate is (h + max(φ, 0)) / d,
where max(φ, 0) ensures that we are computing the positive feedback. ξ_p can then be derived as:

ξ_p = (h + max(φ, 0)) / (d * Σ_i (rtt_i / cwnd_i))    (7)
Similarly, we compute the negative feedback as:

n_i = ξ_n * rtt_i    (8)

and ξ_n is computed as:

ξ_n = (h + max(-φ, 0)) / (d * L)    (9)

where L is the number of packets seen by the router in an average RTT.
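Putting equations (4) through (9) together, the following Python sketch computes the per-packet feedback for one control interval. The list of (rtt_i, cwnd_i) pairs collected from the congestion headers is an illustrative assumption; a real router, and the ns-2 module, maintain these sums incrementally.

def per_packet_feedback(phi, avg_rtt, input_traffic, headers):
    """Compute H_feedback for every packet seen during one average RTT.

    phi           -- aggregate feedback from equation (3)
    avg_rtt       -- d, the average RTT of the flows traversing the link
    input_traffic -- y, the input traffic in an average RTT
    headers       -- list of (rtt_i, cwnd_i) pairs from the congestion headers
    """
    # Equation (4): amount of traffic to shuffle.
    h = max(0.0, 0.1 * input_traffic - abs(phi))

    # Equation (7): xi_p spreads the total increase (h + max(phi, 0)) / d over the packets seen.
    denom_p = avg_rtt * sum(rtt_i / cwnd_i for rtt_i, cwnd_i in headers)
    xi_p = (h + max(phi, 0.0)) / denom_p if denom_p > 0 else 0.0

    # Equation (9): xi_n spreads the total decrease (h + max(-phi, 0)) / d over the L packets seen.
    L = len(headers)
    xi_n = (h + max(-phi, 0.0)) / (avg_rtt * L) if L > 0 else 0.0

    feedback = []
    for rtt_i, cwnd_i in headers:
        p_i = xi_p * rtt_i ** 2 / cwnd_i   # equation (6): positive feedback
        n_i = xi_n * rtt_i                 # equation (8): negative feedback
        feedback.append(p_i - n_i)         # equation (5): H_feedback_i
    return feedback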
1.2. TCP Westwood
TCP Westwood is a sender-side modification of the TCP congestion window algorithm that
improves on the performance of TCP Reno in wired as well as wireless networks. The
improvement, however, is more significant in lossy wireless networks. TCP Westwood employs
end-to-end bandwidth estimation to discriminate the cause of packet loss. TCP Westwood fully
complies with the end-to-end TCP design principle and does not require inspection and/or
interception of TCP packets at intermediate nodes [2]. In this report we do not explain the TCP
Westwood scheme in detail; several papers on TCP Westwood are listed in our References
section.
2. Comparison of XCP and TCP Westwood
2.1. Common Design Features between Both Protocols
2.1.1. Window-based congestion control protocol
Both TCP Westwood and XCP use a window-based congestion control mechanism. Both
protocols distinguish error losses from congestion losses. XCP uses precise congestion signaling,
while TCP Westwood enhances congestion control via an Eligible Rate Estimate (ERE). The
estimate is computed at the sender by sampling and exponential filtering methods.
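As a rough illustration of the sampling-and-exponential-filtering idea (not the actual TCP Westwood NR code, whose sample definition and filter gain differ), a bandwidth sample can be taken on each ACK and low-pass filtered as follows; the gain value here is an assumption.

class BandwidthEstimator:
    """Hedged sketch: ACK-based bandwidth sampling with an exponential filter."""

    def __init__(self, gain=0.9):
        self.gain = gain          # filter coefficient (assumed value)
        self.estimate = 0.0       # smoothed bandwidth estimate (bytes/sec)
        self.last_ack_time = None

    def on_ack(self, now, bytes_acked):
        """Update the estimate with the sample carried by this ACK."""
        if self.last_ack_time is not None and now > self.last_ack_time:
            sample = bytes_acked / (now - self.last_ack_time)
            # Exponential (low-pass) filter of the raw samples.
            self.estimate = self.gain * self.estimate + (1.0 - self.gain) * sample
        self.last_ack_time = now
        return self.estimate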
2.1.2. Same way to handle ACK losses
In case of random loss, both protocols react the same way: they cut the window size by half.
2.2. Differences
2.2.1. XCP decouples the utilization control and fairness control
The XCP queuing scheme has two parts: the efficiency controller (EC) and fairness controller
(FC). The router maintains an average RTT of the flows traversing the link. Both EC and FC
compute estimates over this average RTT.
2.2.2. XCP puts control state in the packets
For TCP Westwood, the rate estimation is done based on the ACK rate. To deploy TCP
Westwood, only a sender-side modification is required. XCP gathers network state from the
data in each packet header. To deploy XCP, modifications to senders, receivers and routers are
required.
3. Simulation
3.1. Simulation Set Up
3.1.1. Ns-2: The Simulator
We use the packet-level network simulator ns-2 to conduct the performance simulations. We
extended ns-2 with an XCP module based on [3]. The TCP Westwood NR Adapt module is from
the TCP Westwood website. Minor modifications were made to suit the needs of this project.
3.1.2. Queuing Schemes
We compare XCP with TCP Westwood NR. TCP Westwood NR is equipped with the
following queuing schemes: Random Early Discard (RED) [9] and Drop-Tail.
For XCP, only RED/XCP is considered because there is no difference between RED and
Drop-Tail under XCP; XCP almost never drops packets [3].
3.1.3. Topologies
We use the topology shown in Figure 3.1.3-1 for our simulation.
Figure 3.1.3-1 Bottleneck topology (a long flow crossing the bottleneck link).
In most of our simulations, the measurements are taken at the bottleneck link. We want to monitor
the impact of a set of parameters, such as bandwidth, delay and number of flows.
3.2. Simulation Results
In this section, we first summarize the results with respect to link capacity, RTT, number of
flows, loss conditions and time. After that, we consider some more specific cases with varied
topology, loss conditions, etc.
3.2.1. Impact of Capacity
The purpose of this simulation is to determine the impact of bandwidth on the performance of the
protocols. The bottleneck bandwidth varies from 1 Mbps to 500 Mbps.
First we show the impact of bandwidth on a short-lasting flow. The flows last 10 sec. Figure
3.2.1-1 shows the result. We observe the following facts:
• XCP and TCP Westwood perform equally well when the bandwidth is low.
• XCP outperforms TCP Westwood slightly when the bandwidth is high. This is due to the
fact that XCP needs less time than TCP Westwood to reach the full link capacity.
Figure 3.2.1-1 Average throughput (Mbps) of TCPW and XCP when the link capacities are
different (1 to 100 Mbps). The delay is 10 ms.
Next, we show how XCP flows and TCP Westwood flows change over a period of time.
Figure 3.2.1-2 shows the result. As we can see, XCP reaches the full bandwidth very quickly and
stays stably at full speed. TCP Westwood takes a longer time to reach the full capacity and, while
it runs at full speed, it shows small oscillations.
Figure 3.2.1-2 Throughput (Mbps) of XCP and TCPW over time (sec). Bandwidth = 20 Mbps and
delay = 10 ms.
3.2.2. Impact of Link Delay
The purpose of this simulation is to examine the impact of link delay on the performance of the
protocols. The delay varies from 10 to 1000 ms.
The results are shown in the following figure. TCP Westwood outperforms XCP, regardless of
the queuing scheme. The sharp drop of TCP Westwood is due to the fact that we run the
simulation for a short time and are seeing how TCP works while it is still in its slow-start stage.
Figure 3.2.2-1 Average throughput (Mbps) of TCPW and XCP for different delays (0 to 1000 ms).
Bandwidth = 20 Mbps.
3.2.3. Impact of Number of Flows
The purpose of this simulation is to examine the impact of the number of FTP flows on the
performance of the protocols. We use the following parameters for this simulation. The number
of flows varies from 1 to 200.
Parameter                   Value
Bottleneck Bandwidth        10 Mb
Bottleneck Delay            20 ms
Simulation Running Time     25 sec
Bottleneck Loss Rate        0.0
The link utilization is the total over all flows. See Figure 3.2.3-1 and Figure 3.2.3-2. The
following facts are observed:
• XCP has better performance for small numbers of flows as well as large numbers of flows.
TCP Westwood can’t fully utilize the link capacity when the number of flows is small
and the running time is short.
The fact that TCPW can’t fully utilize the bandwidth is due to the slow-start nature of TCP.
When the number of flows increases, the bandwidth share of each flow decreases. From an
individual flow’s point of view, this is essentially the same as decreasing the available bandwidth.
An individual flow can quickly take all of its share of the bandwidth when the share is small.
From the overall network’s point of view, however, this improves the overall bandwidth utilization.
Figure 3.2.3-1 Number of flows vs. link utilization (throughput, Mbps) for XCP, RED and
DropTail. Bandwidth = 10 Mbps, delay = 20 ms.
Figure 3.2.3-2 Number of flows vs. packet drops for XCP, RED and DropTail. Bandwidth =
10 Mbps, delay = 20 ms.
To show this, another simulation is conducted with all parameters unchanged, except that this
time one TCP flow is allowed to run for a period long enough to reach its peak performance. In
this simulation, we set the running time to about 300 times the flow’s RTT, which gives us 30 sec.
The result is shown in Figure 3.2.3-3.
3.2.4. Impact of Loss Rate
Link errors cause packet loss. Loss studies are more important in wireless networks, where the
link error rate is high. We use the ns-2 default loss model to simulate link errors. The loss
happens on the bottleneck link.
The following are the parameters we use for this simulation.
Parameter                   Value
Number of FTP Flows         1
Bottleneck Bandwidth        20 Mb
Bottleneck Delay            10 ms
Simulation Running Time     15 sec
Figure 3.2.4-1 and Figure 3.2.4-2 Throughput (Mbps) of TCPW and XCP over time (sec) at loss
rates of 0.1% and 0.5%. Bandwidth = 20 Mbps and delay = 10 ms.
Figure 3.2.4-3 Average throughput (Mbps) of TCPW and XCP for different loss rates (0 to 0.25).
Bandwidth = 20 Mbps, delay = 10 ms.
As shown in Figures 3.2.4-1 and 3.2.4-2, TCP Westwood behaves better than XCP; the reason is
that XCP cuts its window in half on each loss (which is why its throughput drops to near zero at
some points). XCP then recovers quickly, so it shows a lot of oscillation. As we see in Figure
3.2.4-3, after some point (loss rate = 0.025) XCP outperforms TCPW, because TCPW does not
get a chance to grow its window when several losses happen.
3.2.5 Fairness Study
Figures 3.2.5-1 and 3.2.5-2 show three flows coming in at different times. We see that both
XCP and TCPW converge to their fair share, but XCP converges faster. The link capacity is
100 Mbps and the delay is 20 ms.
Figure 3.2.5-1 Three flows running on TCP Westwood converge to their fair share (throughput
vs. time).
Figure 3.2.5-2 Three flows running on XCP converge to their fair share (throughput vs. time).
Different RTT
Now we want to see how flows compete for their share of the bandwidth when they have different
RTTs. We set up the simulation so that three flows come into action at different times. The flows
have RTTs of 10, 50 and 100 ms respectively. The link capacity is 20 Mbps.
Figure 3.2.5-3 Three flows with different RTTs running on TCP Westwood converge to different
shares (throughput vs. time).
Figure 3.2.5-4 Three flows with different RTTs running on XCP converge to their fair share
(throughput vs. time).
3.2.6. Impact of Web-like Traffic
In Figure 3.2.6-1 we have 500 short flows whose start times are randomly distributed over the
30 sec simulation time, so on average we expect 500/30 ≈ 17 short flows to be active at any
moment, plus the long-lived flow. The link capacity is 10 Mbps, so we expect the long flow to
occupy about 10/18 ≈ 0.55 Mbps. We see that under TCP Westwood the long flow gets almost
0.5 Mbps, but under XCP it gets about 2 Mbps, so we conclude that XCP is not friendly to short
flows.
Parameter                       Value
Number of Web-like FTP Flows    500
Bottleneck Bandwidth            10 Mb
Bottleneck Delay                50 ms
Simulation Running Time         30 sec
Loss Rate                       0.0

Figure 3.2.6-1 Packet drops and long-flow throughput (Mbps) over time (sec) for XCP, RED and
DropTail. BW = 10 Mb, 500 short flows starting at random times, each running for 1 sec, link
delay 50 ms.
4. Conclusion
In this project we compared the behavior of TCP Westwood and XCP from different aspects.
After explaining the basic idea of each protocol, we presented our simulation results to support
the comparison. We observed that XCP behaves closer to the ideal behavior than TCP Westwood
in most environments, but in most cases the difference is not significant, and considering the
high cost of XCP (deployment at routers), it is not fair to conclude that XCP is better than TCP
Westwood.
5. Acknowledgment
We would like to thank Ren Wang, our tutor who guided us during this project.
6. References
[1] M. Gerla, M. Y. Sanadidi, R. Wang, A. Zanella, C. Casetti, S. Mascolo, "TCP Westwood:
Congestion Window Control Using Bandwidth Estimation," In Proceedings of IEEE
Globecom 2001, Volume 3, pp. 1698-1702, San Antonio, Texas, USA, November 25-29,
2001.
[2] S. Mascolo, C. Casetti, M. Gerla, M. Y. Sanadidi, R. Wang, "TCP Westwood: Bandwidth
Estimation for Enhanced Transport over Wireless Links."
[3] D. Katabi, M. Handley, C. Rohrs, "Internet Congestion Control for Future High
Bandwidth-Delay Product Environments."
[4] Ren Wang, Massimo Valla, M. Y. Sanadidi, and Mario Gerla, "Adaptive Bandwidth Share
Estimation in TCP Westwood," In Proceedings of IEEE Globecom 2002, Taipei, Taiwan,
R.O.C., November 17-21, 2002.
[5] Claudio Casetti, Mario Gerla, Saverio Mascolo, M. Y. Sanadidi, and Ren Wang, "TCP
Westwood: End-to-End Congestion Control for Wired/Wireless Networks," Wireless
Networks Journal 8, pp. 467-479, 2002.
[6] TCP Westwood papers at http://www.cs.ucla.edu/NRL/hpi/tcpw/
[7] Network simulator ns-2. http://www.isi.edu/nsnam/ns
[8] RED parameters. http://www.icir.org/floyd/red.html
[9] TCP Westwood slides by Professor Mario Gerla, CS 218 Fall 2003 slides.