CS 218 – Fall 2003
TCP over Multipath Routing in Ad Hoc Networks
Jiwei Chen, Matt Azuma
Advisor: Kaixin Xu, Dr. Mario Gerla
Computer Science Department
University of California, Los Angeles
ABSTRACT:
We investigate TCP performance when two paths are used concurrently. Multipath TCP uses two paths at the transport layer and controls each path's congestion window individually. Split Multipath Routing (SMR) is adopted to discover two maximally disjoint paths. Previous research shows that TCP performance degrades under multipath routing because out-of-order packet delivery over different paths generates frequent duplicate ACKs. Our scheme uses two paths concurrently while avoiding the out-of-order problem. During the project, we found several interesting problems: SMR is not scalable, and TCP multi-congestion-window control is not very successful. The routing overhead of SMR is clearly much higher than that of DSR, even though it can find maximally disjoint paths, and the TCP controls for the two paths are coupled, so a problem on one path can cause the other to stop functioning. As a result, our scheme does not show a clear advantage over standard TCP. We propose several interesting issues for further investigation.
I. INTRODUCTION
TCP is not as successful in ad hoc networks as it is in wired networks. In mobile networks, TCP faces various challenges, such as node mobility, hidden/exposed terminal problems, contention between data packets and ACKs on the same channel, an unpredictable radio medium, and external interference or jamming. Most of these problems are still under active research aimed at improving TCP performance in wireless networks.
It is well known that TCP achieves very low throughput in ad hoc networks, for two main reasons. First, standard TCP sends all packets over a single default route. This is suitable for wired networks but not for wireless ones, because data packets and ACKs contend for the same channel and the hidden terminal problem arises. Second, links break frequently due to mobility. When a link breaks, TCP cuts its window and its retransmission timeout (RTO) grows progressively larger, leading to unacceptably low throughput.
Several TCP modifications have been proposed to tackle these two problems. A typical solution is to detect link failures and freeze the TCP state (including the congestion window and RTO) until a new path is re-established [1][2]. But if link failures happen frequently, TCP still suffers significant performance degradation, because it enters the frozen state repeatedly and simply waits for a new path to be discovered without sending any new data. To overcome this problem, multipath routing has been used to improve path availability [3][4][5][6][7]. A multipath routing protocol maintains several paths to the destination simultaneously, which improves the probability of delivering packets to the destination. Multipath routing also alleviates the hidden terminal problem by spreading packets over the available paths. However, most performance evaluations are based on UDP traffic, because running TCP over multipath routing is not straightforward. One problem is that round trip time (RTT) estimation is never accurate under multipath routing: different delays on different paths confuse the RTT computation and the RTO. If the maximum RTT on the longest path is much larger than the minimum RTT on another path, TCP can prematurely time out packets on the longest path due to the incorrect RTT estimate. Moreover, packets going through different paths are likely to arrive at the destination out of order. Out-of-order packets trigger duplicate ACKs, which in turn can trigger an unnecessary TCP congestion reaction, namely fast retransmit/recovery. A preliminary evaluation of TCP over multipath routing [8] pointed out that using multiple paths simultaneously actually degrades TCP performance in most cases because of these duplicate ACKs. Unfortunately, that paper took no measures to address the out-of-order problem and did not address how packets should be spread over the paths.
In this paper, we study TCP performance over an on-demand multipath ad hoc routing protocol similar to Split Multipath Routing (SMR) [3], which is built on top of the Dynamic Source Routing (DSR) [9] protocol. Unlike the work in [8], we use multiple paths concurrently for TCP, and we propose simple, intuitive methods for the out-of-order and packet-spreading problems.
The rest of the paper is organized as follows. We briefly review related work in section II. The details of the multipath TCP and SMR implementations are presented in section III, along with our simulation results: SMR compared with DSR, and TCP over concurrent multiple paths compared with standard TCP. Several important issues related to multipath TCP are discussed in section IV. Section V concludes our work.
II. RELATED WORK
TCP performance in ad hoc networks remains poor and is still an active research topic. Wireless networks pose many new challenges, such as the hidden/exposed terminal problem, link failure due to mobility, interference, and fading. Two of the major factors degrading TCP performance are link failure and interference between data packets and ACKs; one fatal consequence is that link failure may cause unnecessary exponential backoff of the retransmission timeout (RTO). Backup Routing (BR) is proposed in [8]; it uses only one path at any time. Backup path multipath routing maintains two paths from a source to a destination: TCP uses one path at a time and keeps the other as a backup route, so that when the current path breaks it can quickly switch to the alternative. With this backup path routing, the authors tried several ways of choosing the path pair, such as Shortest-Hop Path / Shortest-Delay Path and Shortest-Delay Path / Maximally Disjoint Path. They pointed out that some of the negative results for TCP traffic stem from TCP's high sensitivity to RTT and other network parameters. When multiple paths are used concurrently, the average RTT measured by the TCP sender is inaccurate, so more premature timeouts may occur. A more serious problem is out-of-order packet delivery via different paths, which triggers TCP's congestion control scheme, i.e., fast retransmit/fast recovery. In their simulations, the packet retransmission ratio of TCP over SMR was much higher than over DSR, and this high retransmission ratio prevented the TCP congestion window from growing large enough to achieve high throughput. Although they observed that multiple paths can alleviate route failures, their throughput results consistently imply that the simple use of multiple paths provides no prominent benefit to TCP performance, and in fact degrades it to some degree. Their paper did not address a solution to the out-of-order problem or how to schedule packets on different paths. In this paper, we try to address these issues.
An end-to-end transport layer approach called pTCP (parallel TCP) is proposed in [10]; it effectively performs bandwidth aggregation on multi-homed mobile hosts. That paper studies the problems involved in achieving bandwidth aggregation when an application on a mobile host uses multiple interfaces simultaneously, and addresses them with a purely end-to-end transport layer approach. pTCP is presented as a wrapper around a slightly modified version of TCP referred to as TCP-v (TCP-virtual). For each pTCP socket opened by an application, pTCP opens and maintains one TCP-v connection for every interface over which the connection is to be striped. pTCP manages the send buffer across all the TCP-v connections, decouples loss recovery from congestion control, performs intelligent striping of data across the TCP-v connections, reallocates data to handle variance in the bandwidth-delay products of the individual connections, redundantly stripes data during catastrophic periods (such as blackouts or resets), and has a well-defined interface with TCP-v that allows different congestion control schemes to be used by the different TCP-v connections. Their simulations show that pTCP outperforms both simple and sophisticated schemes employed at the application layer. Our proposal for per-path congestion control is different from theirs and more complex.
III. MULTIPLE TCP OVER AD HOC
In our project, we let TCP use multiple paths simultaneously over a multipath routing protocol (SMR), scattering TCP packets according to a spreading function. TCP keeps two sets of per-path information, such as RTT, scatter ratio, and timers. For the out-of-order problem, we propose a simple and effective way to prevent TCP from entering fast retransmit/fast recovery. We compare our multipath performance with standard TCP over DSR. Our initial results show that it is possible to achieve higher throughput, but several issues remain unaddressed and more improvement is expected.
A. Split Multipath Routing
In our study, we chose the Split Multipath Routing (SMR) [3] protocol as the underlying multipath routing protocol. It is basically an extension to DSR [9], and its basic route discovery procedure is quite similar to DSR's. When the source needs a route to the destination but has no route information in hand, it floods a RREQ message to the entire network. Only the destination node is allowed to send back RREP packets; no cached entries are exploited. To discover multiple paths, the destination returns several RREP packets, one for each different path. The criterion for selecting multiple paths is to build maximally disjoint paths, minimizing contention and local congestion. In detail, the destination returns a RREP packet for the first RREQ immediately. For later RREQs arriving over different paths, it selects the maximally disjoint route and sends a RREP back to the source via the chosen route. If more than one route is maximally disjoint with the first route, the one with the shortest hop distance is chosen; if multiple routes still meet the condition, the one that delivered its RREQ to the destination quickest is selected.
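The selection rule above can be sketched as follows. This is our illustrative reconstruction, not the authors' code, and it measures disjointness by shared nodes (the protocol could equally count shared links); each candidate is a (route, RREQ arrival time) pair.

```python
def pick_disjoint_route(first_route, candidates):
    """Choose the second route per the SMR criteria: maximize
    disjointness with the first route, break ties by hop count,
    then by earliest RREQ arrival at the destination."""
    first_nodes = set(first_route)

    def rank(candidate):
        route, arrival_time = candidate
        overlap = len(set(route) & first_nodes)  # fewer shared nodes = more disjoint
        hops = len(route) - 1
        return (overlap, hops, arrival_time)

    best_route, _ = min(candidates, key=rank)
    return best_route

first = ["S", "1", "2", "D"]
candidates = [(["S", "1", "5", "D"], 0.10), (["S", "3", "4", "D"], 0.20)]
chosen = pick_disjoint_route(first, candidates)
# chosen shares only the endpoints S and D with the first route
```

Note that the arrival-time tie-break matters only among equally disjoint, equally short candidates, which is why it comes last in the sort key.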
SMR has two ways to respond to route failures. In the first scheme, it performs route discovery whenever any route to the destination is invalidated. The alternative is to perform route discovery only when all routes to the destination are invalidated. According to the simulation experiments in [3], the second scheme consistently outperforms the first, so in this paper we use the second scheme. However, in our simulations we found that the routing overhead problem still exists.
Fig.1 Delivery Ratio for 5 connections
B. Routing Performance Evaluation
To compare the performance of SMR and DSR, we implemented SMR within QualNet. Our simulation modeled a network of 50 mobile hosts placed randomly within a 1500 m x 1500 m area. Each node has a radio propagation range of 375 m, and the channel capacity was 2 Mb/s. Each run executed for 10 seconds of simulation time. There are two sets of traffic sessions: one with 5 CBR connections and one with 10 CBR connections. The sources and destinations are selected randomly with uniform probability. The data payload size was 512 bytes. We generated various degrees of mobility by setting the node speed to 0, 10, 20, or 30 m/s with pause time 0, i.e., continuous movement at constant speed. The results shown are the averages of 5 runs for each mobility speed.
Figures 1 and 2 show the packet delivery ratio, obtained by dividing the number of data packets correctly received by the destinations by the number of data packets originated by the sources. The delivery ratio is similar for DSR and SMR with 5 connections, but SMR is much worse than DSR with 10 connections.
Fig.2. Delivery Ratio for 10 connections
The most probable reason for this performance degradation is that SMR sends many more duplicate RREQs than DSR. Figures 3 and 4 illustrate the control overhead as normalized routing load: the ratio of the number of RREQ packets forwarded by all nodes in the network to the number of data packets received by the destination nodes. This value represents the protocol's efficiency and indicates how well SMR scales.
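The two metrics are straightforward to compute from the simulation counters; a minimal sketch follows (function names and the sample counter values are ours, not QualNet's):

```python
def packet_delivery_ratio(data_received, data_originated):
    """Data packets correctly received by the destinations divided by
    data packets originated by the sources."""
    return data_received / data_originated

def normalized_routing_load(rreq_forwarded, data_received):
    """RREQ packets forwarded by all nodes per delivered data packet;
    lower values mean a more efficient, more scalable protocol."""
    return rreq_forwarded / data_received

# Hypothetical counters from one run:
pdr = packet_delivery_ratio(data_received=4250, data_originated=5000)
nrl = normalized_routing_load(rreq_forwarded=12750, data_received=4250)
```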
SMR does filter RREQs: intermediate nodes forward duplicate RREQs only if they traversed a different incoming link than the link on which the first RREQ was received and their hop count is no larger than that of the first received RREQ. Even so, SMR has a scalability problem in the face of a large number of connections and mobility. Mobility and traffic interference cause route failures with high probability, so SMR frequently sends RREQs to discover new paths, and it generates far more duplicate RREQs in dense networks or when there are many connections. We will address this problem in the future; for example, since DSR learns new routes by snooping passing packets, it is possible to exploit the route cache to find fresh disjoint paths without sending out new RREQs.
Fig.3 Routing Overhead for 5 connections
Fig.4. Routing Overhead for 10 connections
C. Multipath TCP
Multipath TCP operates in much the same way as conventional TCP New Reno without timestamps, delayed ACKs, or SACK. Each connection has a single congestion window, control block, and port, and uses up to N paths; each path has its own retransmission timer, congestion threshold, and state information. To manage the paths, two new components were introduced: a spreading function and a sent-packet queue. The spreading function determines the path each data packet will take, and each packet's sequence number and path are recorded in the sent-packet queue to counter the out-of-order problem. In addition to these new components, some existing TCP New Reno conventions were modified to account for multiple paths.
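The per-connection and per-path state described above can be sketched as the following data structures (field names and defaults are illustrative assumptions, not taken from the implementation):

```python
from dataclasses import dataclass, field

@dataclass
class PathState:
    """State kept for each of the up-to-N paths of one connection."""
    srtt: float = 0.0              # smoothed RTT measured on this path
    rto: float = 3.0               # per-path retransmission timer interval (s)
    ssthresh: int = 65535          # per-path congestion threshold
    phase: str = "slow_start"      # slow_start / congestion_avoidance / fast_recovery
    probability: float = 0.0       # share assigned by the spreading function

@dataclass
class MultipathConnection:
    """One connection: a single congestion window shared by all paths."""
    cwnd: int = 1460
    paths: dict = field(default_factory=dict)       # path id -> PathState
    sent_queue: dict = field(default_factory=dict)  # seq -> path id it was sent on
```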
1. Sent-Packet Queue
Much of the previous work on multipath transmission was based on connectionless protocols such as UDP. One reason is the difficulty of coordinating the use of two paths in a reliable connection; the out-of-order problem is one example. When data is sent over multiple paths from the same TCP connection with a single sequence-number space, packets have a greater chance of arriving at the destination out of order. Because TCP uses cumulative ACKs, out-of-order packets cause the receiver to send back many duplicate ACKs. Suppose a multipath TCP connection uses two paths, A and B. The first packet is sent on path A, which is very slow, and the next three packets are sent on path B, which is much faster. The three packets on path B arrive at the destination before the packet on path A, so the receiver returns three duplicate ACKs for the first data packet. These duplicate ACKs fool the sender into thinking there is congestion, causing it to fall into fast retransmit and recovery, penalizing the connection merely because the packets arrived at the receiver out of order.
Our solution requires two changes to TCP. First, the receiver must return each ACK on the same route on which the data packet arrived; this lets TCP compute a more reliable RTT estimate. Second, the sender records each packet's sequence number and the path on which it was sent in a sent-packet queue. Duplicate ACKs received on a path other than the one the data was sent on do not count towards fast retransmit. In the example above, the duplicate ACKs would be received on path B, and the sender knows from the sent-packet queue that the data was sent on path A, so they would not be counted. Duplicate ACKs received on the path the data was sent on are counted as normal. To bound the size of the queue, entries for packets that have already been ACKed are deleted.
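A minimal sketch of the sent-packet queue and its duplicate-ACK filter (our reconstruction; the real logic lives inside the TCP stack):

```python
class SentPacketQueue:
    """Maps each unacknowledged sequence number to the path it was sent on."""

    def __init__(self):
        self.sent = {}  # seq -> path id

    def record_send(self, seq, path):
        self.sent[seq] = path

    def dup_ack_counts(self, seq, ack_path):
        """A duplicate ACK counts toward fast retransmit only if it
        arrives on the same path the outstanding segment was sent on."""
        return self.sent.get(seq) == ack_path

    def on_cumulative_ack(self, ack_seq):
        """Drop entries the cumulative ACK covers, keeping the queue small."""
        self.sent = {s: p for s, p in self.sent.items() if s >= ack_seq}

# The example from the text: seq 1 goes on slow path A, seqs 2-4 on fast path B.
q = SentPacketQueue()
q.record_send(1, "A")
for s in (2, 3, 4):
    q.record_send(s, "B")
# The three duplicate ACKs for seq 1 come back on path B and are ignored:
ignored = not q.dup_ack_counts(1, "B")
```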
2. Spreading Function
Although ACKs are always sent on the path on which the data arrived, the spreading function can use any algorithm to determine the path for each data packet. Our goal was to maximize throughput without sacrificing congestion control, which we attempted by emulating a separate TCP connection on each path. Our current spreading function is probabilistic, with two stages. When all paths are in congestion avoidance, the probability of a path being picked is based on its RTT; when one or more paths are in slow start or fast recovery, the probabilities are based on the congestion window. In the RTT stage, the minimum-RTT path has the highest probability of being chosen, and every other path's probability reflects the difference between its RTT and the minimum RTT. The probabilities are recalculated whenever new RTT information is received.
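The RTT stage can be sketched as follows, directly implementing the probability formulas given below (a hypothetical helper; `rtts` maps a path id to its latest smoothed RTT):

```python
def rtt_probabilities(rtts):
    """Spread probability over paths by RTT: the minimum-RTT path is
    favored, and each path i gets a share scaled by RTT_min / RTT_i.
    The shares sum to 1 by construction."""
    rtt_min = min(rtts.values())
    p_min = 1.0 / sum(rtt_min / r for r in rtts.values())
    return {path: (rtt_min / r) * p_min for path, r in rtts.items()}

# Path A (RTT 50 ms) is picked twice as often as path B (RTT 100 ms).
probs = rtt_probabilities({"A": 0.05, "B": 0.10})
```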
In the RTT stage, the probabilities are:

Probability of minimum-RTT path A:
  Pa = ((RTTa / RTT1) + ... + (RTTa / RTTn))^(-1)
Probability of other path i:
  Pi = (RTTa / RTTi) * Pa

3. Congestion Avoidance
When one or more paths enter slow start or fast retransmit/recovery, the congestion window is conceptually divided among the paths according to their current probabilities. If we can send more packets on a path, we want to increase that path's probability: when an ACK is received, the path's probability increases in proportion to that path's increased "share" of the congestion window. The path's state also determines how much we open the congestion window. When we receive an ACK for a path in fast recovery, however, we only slide that path's share of the window forward temporarily, one packet at a time, until recovery completes; the congestion window is effectively one packet for that path. Instead of reducing the path's probability, we keep it the same, to speed the exit from fast recovery.
Entering slow start
  Probability of slow-start path A: Pa = MSS / (cwnd * (1 - Pa) + MSS)
  Probability of other path i: Pi = Pi * cwnd / (cwnd * (1 - Pa) + MSS)

Entering fast retransmit and recovery
  Probability of fast-retransmit path A:
    Pa = ((flight / 2) + (dup_acks * MSS)) / (cwnd * (1 - Pa) + (flight / 2) + (dup_acks * MSS))
  Probability of other path i:
    Pi = Pi * cwnd / (cwnd * (1 - Pa) + (flight / 2) + (dup_acks * MSS))

Received an ACK for a path in slow start
  Probability of slow-start path A: Pa = ((cwnd * Pa) + MSS) / (cwnd + MSS)
  Probability of other path i: Pi = (cwnd * Pi) / (cwnd + MSS)

Received an ACK for a path in fast recovery
  Do not change the probabilities.

Received an ACK for a path in congestion avoidance
  Probability of path A: Pa = ((cwnd * Pa) + (MSS * MSS / cwnd)) / (cwnd + (MSS * MSS / cwnd))
  Probability of other path i: Pi = (cwnd * Pi) / (cwnd + (MSS * MSS / cwnd))

Path deletion
  Probability of deleted path A: Pa = 0
  Probability of other path i: Pi = Pi / (1 - Pa), where Pa is the deleted path's probability before deletion.

In the above formulas, RTT is the round-trip time, MSS is the maximum segment size, cwnd is the congestion window, and Pa and Pi are the path probabilities; on the right-hand sides, Pa and Pi denote the values before the update.
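As an illustration, the "entering slow start" update can be implemented directly from its formula; note that the update is self-normalizing, so the new probabilities still sum to 1. This is our sketch, with hypothetical names:

```python
def enter_slow_start(probs, path, cwnd, mss):
    """Reassign path probabilities when `path` enters slow start:
    it gets a share worth one MSS of the effective window, and the
    other paths keep their relative proportions."""
    pa = probs[path]
    denom = cwnd * (1.0 - pa) + mss
    updated = {p: v * cwnd / denom for p, v in probs.items() if p != path}
    updated[path] = mss / denom
    return updated

# Path A enters slow start: its share shrinks to one MSS of the window.
new_probs = enter_slow_start({"A": 0.5, "B": 0.5}, "A", cwnd=10000, mss=1000)
```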
4. Other Changes / Fine Tuning
While building multipath TCP, we discovered some unexpected challenges, the first of which was path-probability starvation. When a multipath TCP connection starts, the routing layer may provide fewer than the requested number of paths. If a new path is discovered after the connection has been active for a while, the congestion window may already be large, so the new path's share of the window (and thus its probability) will be small. Because a path's probability can only increase if the path is used, in our early tests one path would tend to starve the other even when multiple paths were available and the topology was symmetric. Our early fix was to impose a low initial congestion threshold (about eight times the maximum segment size), forcing the first path into congestion avoidance early and giving the other path a chance to catch up; this, however, limited the growth rate of our congestion window. Currently, we force a data packet out on a route that has just entered slow start to give its probability an initial boost, but this does not prevent starvation. A more aggressive method is required.
Our early tests also revealed a large number of timeouts, especially apparent when one path was starving another. In normal TCP, the retransmission timer is turned off when all sent data has been ACKed. A starved route in multipath TCP may have no packets in flight, but its retransmission timer stays on because data is still being sent on other routes. Our solution was to tie each path's retransmission-timer reset to the number of that path's packets in flight, which is stored in the sent-packet queue. Similarly, connection-wide timers, such as the persistence timer, must account for all paths instead of any one particular path.
The out-of-order problem causes multipath TCP to see many more duplicate ACKs than normal TCP. In our early implementation, we simply ignored these ACKs and used only the in-order ACKs to increase the congestion window. Each duplicate ACK, however, indicates a packet leaving the network; in a normal TCP connection these ACKs would not be out of order, and would serve to increase the congestion window and update the RTT. Therefore, we now use duplicate ACKs that acknowledge a packet sent on another active path to open the congestion window. We cannot use them to update the path's RTT, however, since the round trip covered different paths.
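This duplicate-ACK handling can be sketched as follows (a simplified model with hypothetical field names; the window increment mimics congestion-avoidance growth, which is our assumption, as the paper does not specify the amount):

```python
class Conn:
    def __init__(self):
        self.mss = 1460
        self.cwnd = 4 * 1460
        self.active_paths = {"A", "B"}
        self.sent_queue = {}   # seq -> path id
        self.dup_acks = {}     # seq -> dup-ACK count on the sending path

def on_duplicate_ack(conn, seq, ack_path):
    sent_path = conn.sent_queue.get(seq)
    if sent_path == ack_path:
        # Same path: counts toward the fast-retransmit threshold.
        conn.dup_acks[seq] = conn.dup_acks.get(seq, 0) + 1
    elif sent_path in conn.active_paths:
        # Other active path: a packet left the network, so open the
        # window, but discard the RTT sample (it spanned two paths).
        conn.cwnd += conn.mss * conn.mss // conn.cwnd

conn = Conn()
conn.sent_queue[1] = "A"
on_duplicate_ack(conn, 1, "B")  # opens the window; no dup-ACK counted
```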
Another particularly obstinate challenge was the deletion of a path. If a path is no longer available, it is deleted from the control block. If a data packet sent on that path was lost (which is likely), the window is unable to slide forward, and eventually one of the remaining paths times out. Our current solution is, during path deletion, to flag all packets in the sent-packet queue that were "orphaned" by the deleted path, marking them as "anyone's property". The remaining paths can then count duplicate ACKs for these orphaned packets, causing fast retransmit and recovery rather than a timeout. An alternative is to simply assume a data packet was dropped and proactively resend all the data packets in flight on the deleted path. When one of these packets is resent, it becomes the property of the path it was resent on, and is marked accordingly in the sent-packet queue. This makes timeouts and fast retransmits very unlikely, at the cost of sending duplicate data packets, possibly for no reason. This method has not yet been implemented and tested.
Not all of the orphaned ACKs come back on a path in the sender's control block; in fact, in most cases not enough of them come back on existing paths to trigger fast recovery. To compensate, if an orphaned ACK comes back on an unrecognized path, it is assigned a parent path probabilistically, and we proceed as if the ACK was received on that path.
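The probabilistic reassignment can be sketched as follows (an illustrative helper; `probs` maps surviving path ids to their current spreading probabilities):

```python
import random

def assign_orphan_ack(probs, rng=random):
    """Adopt an orphaned ACK that arrived on an unrecognized path:
    pick a surviving parent path in proportion to the current path
    probabilities, then process the ACK as if received on it."""
    paths = list(probs)
    return rng.choices(paths, weights=[probs[p] for p in paths], k=1)[0]

parent = assign_orphan_ack({"A": 0.7, "B": 0.3})
```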
Fig.5. Two Path Topology
There are times when no path is available at all. In these cases, data packets are sent to the network layer with a special path ID, "any path". While no paths are available, the congestion window is limited to one MSS, and variables such as RTT are not recorded.
5. Simulation
We implemented multipath TCP with a two-path maximum and a modified DSR in the QualNet network simulator. To compare it to normal TCP, we created three static networks. The first is a "two path" network of 16 nodes, with a source, a sink, and two disjoint paths of seven nodes each, outside each other's interference range. The second is a square grid of 25 nodes spaced 300 meters apart, with the source and sink on opposite corners. The third is the same as the second, except that a constant-bit-rate source and sink, placed on opposite corners of the grid, send 1460 bytes every 0.1 seconds to introduce cross traffic. The MAC protocol was 802.11b, with a maximum transmission range of about 375 m. The TCP maximum segment size was 1460 bytes. The traffic was FTP data, sent as sets of consecutive, randomly sized files. For each set, the averages of 30 runs using different random seeds were recorded. Three sets (20, 30, and 40 files) were run.
Fig. 6. Grid Topology without Cross Traffic
Fig.7. Grid Topology with Cross Traffic
6. Analysis
Multipath TCP throughput was slightly worse than New Reno's. For the two-path network and the 25-node grid network with cross traffic, multipath TCP throughput ranged from 10 to 20 kbps below New Reno, at roughly 280 to 290 kbps. Multipath TCP had consistently more timeouts and retransmits in all networks, except in the two-path network, where New Reno had more retransmits. We do not know why the performance of multipath TCP does not exceed that of normal TCP New Reno. In theory, we believe that for a connection with a two-path maximum, throughput should be upper-bounded by slightly less than twice the throughput of a normal New Reno connection, and lower-bounded by a single New Reno connection. Simulations showed, however, that our implementation achieved at best the same performance as New Reno.
Fig. 8. Multipath TCP over Two Path
One possibility for the poor performance is inter-path congestion and/or interference, which could result in worse performance than a single-path connection. The likelihood of this "self-congestion" in multipath TCP depends on the number of paths used and the physical topology of the network. The two-path topology (two disjoint, widely spaced paths between source and sink) was designed to minimize self-congestion and give multipath TCP the greatest advantage over normal TCP. Raw packet traces revealed that multipath TCP's poor performance in the two-path topology was most likely due to the large number of path changes at the routing level. It is not clear why paths became unavailable and available again in this static network, but because multipath TCP's congestion window, timeout, and fast retransmit/fast recovery logic are tied to paths, it performed a lot of collapsing and re-ramping as paths disappeared and reappeared. Further, if all paths disappear, multipath TCP is reduced to a single-MSS congestion window that does not open, because we assume there are actually no physical paths available between the source and sink. These path changes are invisible to normal TCP and thus have much less effect on it.
Fig. 9. Multipath TCP over Grid Topology without Cross Traffic

Fig. 10. Multipath TCP over Grid Topology with Cross Traffic
Packet traces also revealed a large amount of starvation in the 25-node grid network with cross traffic. Starvation occurs when the probabilities of the paths become so lopsided that packets are effectively sent on only a small subset (or just one) of the paths. It is not yet clear what causes the starvation, but it is possible that the cross traffic pushed one of the paths into fast recovery or a timeout, unbalancing the probabilities in a way that never recovers. The two-path network and the grid network without cross traffic did not exhibit much starvation.
Although the simulation results for multipath TCP are disappointing, we believe we have not reached the upper bound of its potential performance. Preventing starvation would probably yield some improvement: since our simulations used a two-route maximum, the starvation of one route upper-bounds our performance by that of a normal TCP connection. The choice of spreading function is directly responsible for starvation, so a new function should be determined. A baseline function for comparison could be a random function that spreads packets evenly. Another function might assign probabilities based solely on a route's state: a route in slow start would receive more packets than one in congestion avoidance, because it is still ramping up to its bandwidth limit, while a path in fast recovery would receive fewer packets, since packet loss is assumed to be due to congestion (although in wireless networks, random loss is also a possible cause).
IV. FUTURE WORK
A. Routing
Early versions of DSR did not have a route cache, so a new path had to be discovered whenever the current path failed. DSR now has a powerful route cache that learns all the routes carried in packets passing through a node, so SMR itself is less attractive than before, and it faces a routing scalability problem. It is possible to exploit the route cache to improve multipath routing and to find new mechanisms that prevent routing-message explosion in the face of mobility and high-traffic scenarios.
Another important issue is that a selected path does not last long under interference or high traffic; it is very difficult to keep the same path available all the time, and frequent path failures hurt TCP's performance. How to find stable paths and mask path failures from TCP is our future work.
B. TCP
Our current scheme and implementation did not show an advantage over standard TCP. First, we suspect there are errors remaining in the multipath implementation of timeouts and fast recovery: excessive timeouts and overly long periods in fast recovery are the primary cause of multipath TCP's poor performance in the two-path network. More thorough checking is needed to ensure these two elements work as expected in this multipath TCP implementation. Second, the congestion control of multipath TCP is currently coupled across paths and needs further investigation; a simpler mechanism is desirable. Our approach is only an experiment that exposes the problems in this field, and we will search for new, simpler mechanisms to complete it.
V. CONCLUSION
In this paper, we presented multipath TCP over ad hoc networks. Instead of using one path as standard TCP does, we use two paths concurrently at the transport layer to improve TCP performance over an ad hoc network. Multipath TCP runs on top of Split Multipath Routing (SMR), an on-demand multipath routing protocol that builds maximally disjoint routes. Our scheme uses two routes for each session: the shortest-delay route and the route that is maximally disjoint with it. We build maximally disjoint routes to keep particular links from becoming congested and to utilize the available network resources efficiently. Providing multiple paths is useful in ad hoc networks, because when one route is disconnected, the source can simply use another available route without performing the route recovery process.
Multipath TCP is similar to TCP New Reno, except that we use multiple paths. We introduce two new components: a spreading function and a sent-packet queue. The spreading function determines the load-balance ratio between paths, and the sent-packet queue is used to address the out-of-order problem. We also experimented with congestion-window control measures for multiple paths that emulate standard TCP operation.
Our results show that it is possible to get better throughput by carefully scheduling the packets and preventing the fast retransmit/fast recovery caused by duplicate ACKs. However, if the management of the two paths is coupled (for example, one problematic path can cause the other to wait until eventually both time out), the performance is not guaranteed and can even be worse than standard TCP's. Clearly, our scheme is just an experiment, and it provides guidance for future research.
VI. REFERENCE
[1] G. Holland and N. H. Vaidya, “Analysis of TCP performance over mobile ad hoc networks,” Proceedings of ACM MobiCom’99, Aug. 1999.
[2] K. Chandran, S. Raghunathan, S. Venkatesan, and R. Prakash, “A feedback-based scheme for improving TCP performance in ad hoc wireless networks,” IEEE Personal Communications Magazine, vol. 8, no. 1, Feb. 2001.
[3] S. J. Lee and M. Gerla, “Split multipath routing with maximally disjoint paths in ad hoc networks,” Proceedings of IEEE ICC’01, June 2001.
[4] A. Nasipuri, R. Castaneda, and S. R. Das, “Performance of multipath routing for on-demand protocols in mobile ad hoc networks,” Mobile Networks and Applications, vol. 6, pp. 339-349, 2001.
[5] M. Marina and S. Das, “On-demand multipath distance
vector routing in ad hoc networks,” Proceedings of IEEE
International Conference on Network Protocols (ICNP)’01,
Nov. 2001.
[6] L. Zhang, Z. Zhao, Y. Shu, L. Wang, and O. W. Yang,
“Load balancing of multipath source routing in ad hoc
networks,” Proceedings of IEEE ICC’02, Apr. 2002.
[7] S. Vutukury and J. J. Garcia-Luna-Aceves, “MDVA: A
distance-vector multipath routing protocol,” Proceedings of
IEEE INFOCOM’01, Apr. 2001.
[8] Haejung Lim, Kaixin Xu, and Mario Gerla, " TCP
Performance over Multipath Routing in Mobile Ad Hoc
Networks, " In Proceedings of IEEE ICC 2003.
[9] D. B. Johnson and D. A. Maltz, “Dynamic Source Routing in Ad Hoc Wireless Networks,” in Mobile Computing, edited by Tomasz Imielinski and Hank Korth, Chapter 5, Kluwer Academic Publishers, 1996, pp. 153-181.
[10] H.Y. Hsieh, R. Sivakumar, “A Transport Layer Approach
for Achieving Aggregate Bandwidths on Multi-homed Mobile
Hosts”, ACM MobiCom’02, 2002, Atlanta, Georgia.