Implementing distributive congestion control in a delay tolerant network on different types of traffic

As the volume of traffic grows, congestion is bound to occur more frequently. Thus there is a need to investigate techniques and schemes for reducing congestion as traffic grows. In this paper, we propose a technique for controlling congestion by distributing traffic into different levels according to priority, thereby decreasing congestion. This ensures a minimal bounded delay for high priority traffic (e.g. traffic of a time sensitive nature) at the cost of some increased delay for time insensitive data. Such distributive congestion control can also be implemented on delay tolerant networks carrying traffic with different requirements. The investigation is supported by experimental results, and the network model is tested with Weka (an open source data mining tool).

Keywords: Congestion Control, Bounded Delay, Traffic, Delay Tolerant Networks, DTN, Network.

Introduction

Congestion in a network arises from the inability of the network to handle the traffic which user(s) are sending over it. If the data arrival rate at any node is higher than the data processing rate, data or packets get queued up in the buffer, and this gradually leads to congestion. When congestion occurs, the node must either reject further incoming packets or free the buffer to make way for incoming packets.

Either way, packets are discarded. Alternatively, flow control may be imposed on the source, which means controlling the number of packets the source sends. Thus, based on current congestion levels, a source may increase or decrease the amount of data sent. But this technique imposes a limit on the amount of data a sender can send.

In our approach, we suggest a congestion control technique which removes any such restriction and decreases congestion in the network, thus ensuring speedy delivery of packets of utmost importance. In our technique, congestion control is based on dividing packets into different levels according to their priority, or the delay they can withstand in the network. The arriving packets are divided into 4 categories. Packets having higher priority are ensured a minimum bounded delay, whereas those with the least priority are discarded at higher congestion levels. This is quite useful in today's scenario, where there is a lot of traffic involving time sensitive multimedia packets such as voice, video, or images.

Parameters used to describe congestion in the network:

We have assumed a network model having mainly three parameters related to congestion:

1. Change in input rate

2. Available buffer size

3. Time to fill the buffer

The parameters are explained as follows:

Change in input rate

Congestion mainly depends on how fast data is arriving at the node. That is, if data or packets are arriving very fast at the node, the buffer fills up quickly, and so there is a chance of congestion occurring in the near future. Thus, even though the input rate may be low, data arriving in burst form may suddenly increase the congestion level. The change in input rate is therefore an appropriate parameter for measuring congestion levels. When the change in input rate is high on the positive side, it can be stated that congestion will most likely occur soon. Note that here the change in input rate is the current input rate minus the previous input rate.

Available Buffer Size

The available buffer size is an indicator of the congestion level. Packets arriving at a node are stored in the buffer until they are processed further and dispatched. This is because some routing delay is associated with the nodes. Whenever a packet arrives at a node, the node has to decide, based on the current congestion level and routing information, which one of the many paths to forward that packet on. Thus, a queuing delay as well as node processing time is associated with any arriving packet. Until the packets are forwarded, they are stored in temporary locations which we call buffers.

The available buffer size increases or decreases depending on factors such as the number of packets arrived/forwarded, the condition of the network at that instant, the bandwidth of the network, etc. The available buffer size is the buffer space that is meant for newly arrived packets. If it becomes too small, the buffer may fill up soon, after which all arriving packets must be discarded. Thus, choosing a high buffer size is quite necessary in any network. But at the same time, a buffer cannot be of infinite size, and a limited buffer size can lead to congestion. The lower the available buffer size, the more likely congestion is to occur. If this factor comes out negative, it implies the buffer is freeing up and congestion is decreasing.

Time to fill the buffer

This factor indicates how fast the buffer of a particular node is filling. Although it may appear that this factor depends on the above two, it can be shown that these three factors are orthogonal:

Time to fill the Buffer = Buffer Size / (Change in Input Rate - Change in Output Rate)

This factor comes out negative when the Change in Output Rate is higher than the Change in Input Rate, which means that instead of filling, the available buffer is increasing, indicating congestion is decreasing.
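As a sketch, the three parameters can be computed as follows in C++ (the struct and function names here are illustrative, not taken from the paper's actual code):

```cpp
#include <limits>

// Illustrative state of a node; names are assumptions, not the paper's code.
struct NodeState {
    double bufferSize;         // total buffer capacity at the node
    double changeInInputRate;  // current input rate minus previous input rate
    double changeInOutputRate; // current output rate minus previous output rate
};

// Time to fill the Buffer = Buffer Size / (Change in Input Rate - Change in Output Rate).
// A negative result means the available buffer is growing, i.e. congestion is easing.
double timeToFillBuffer(const NodeState& s) {
    double net = s.changeInInputRate - s.changeInOutputRate;
    if (net == 0.0) // rates balanced: the buffer never fills
        return std::numeric_limits<double>::infinity();
    return s.bufferSize / net;
}
```

A positive value means the buffer will fill in that many time units at the current trend; a negative value corresponds to the decreasing-congestion case described above.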

Concept:

We have used a network model where each node may be in one of 5 levels of congestion, depending on the traffic at that instant. The levels are numbered 1-5 according to the numerical values of the three parameters defined before. It can be seen that these three parameters are sufficient to calculate the congestion level of a node at any instant.

Packets arriving at a node are divided into 4 priority levels.

Header   Priority
00       Least priority
01       Less priority
10       High priority
11       Highest priority

This is similar to traffic in real network systems, where higher priority is associated with time sensitive packets having multimedia content such as voice or video. However, a user can decide for oneself the basis for dividing packets into these categories.
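A minimal C++ sketch of the two-bit priority header (the enum names, and the assumption that the priority occupies the top two bits of a one-byte header, are illustrative):

```cpp
// Two-bit priority header carried by each packet, as in the table above.
enum class Priority : unsigned {
    Least   = 0b00, // first to be discarded/stored under congestion
    Less    = 0b01,
    High    = 0b10,
    Highest = 0b11  // always forwarded
};

// Extract the priority from the top two bits of a one-byte packet header.
// (The bit position is an illustrative assumption, not from the paper.)
Priority priorityFromHeader(unsigned char header) {
    return static_cast<Priority>((header >> 6) & 0b11u);
}
```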

Implementation:

For evaluating the technique proposed above, a C++ program is written whose output is then given as input to Weka Explorer. Each of the three parameters defined before is generated randomly in the code and, depending on its value, is given an index. Thus, in one set of data associated with the network, we have generated the corresponding numerical values of the parameters. In effect, these values correspond to the state of a network at an instant. Based on the values generated, the congestion level at that instant is calculated. For calculating the congestion level, a simple weighted average of the three parameter index values is taken; higher weightage is given to the change in input rate than to the available buffer size or the time to fill the buffer. These sets of values are then given as input to Weka Explorer, which associates a congestion level index with each set of values.
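The weighted averaging step might look like the following sketch. The specific weights (0.5/0.25/0.25) and the rounding to a 1-5 level are illustrative assumptions; the text above states only that the change in input rate receives the higher weightage:

```cpp
#include <algorithm>
#include <cmath>

// Combine the three parameter indices (each assumed pre-scaled to 1..5)
// into a congestion level 1..5. The change in input rate carries the
// higher weightage; the exact weights here are assumptions.
int congestionLevel(int inputRateIdx, int bufferIdx, int timeToFillIdx) {
    const double wInput = 0.5, wBuffer = 0.25, wTime = 0.25;
    double avg = wInput * inputRateIdx + wBuffer * bufferIdx + wTime * timeToFillIdx;
    int level = static_cast<int>(std::lround(avg));
    return std::clamp(level, 1, 5); // keep within the defined levels
}
```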

Weka is an open source data mining tool implementing machine learning algorithms which, on being fed a set of values, estimates the congestion indices (one of the inputs to Weka) better through techniques such as interpolation, interdependencies between the input values, normalization, etc.

In this way, the current situation of network can be determined and hence the congestion level of a node is found at any instant.

Now, if it is found that the network is at a high level of congestion, then in order to decrease congestion in the network, it is desired to decrease the congestion level of a node.

This is done through a method described below.

Divide the incoming packets into four categories as defined before, i.e., the packet with header 00 has the lowest priority, whereas the packet with header 11 has the highest priority.

If a node has a higher congestion level at any instant, i.e., the node is at congestion level 3 or higher, then only incoming packets of higher priority are transferred, and those having lower priority are either discarded or stored in a buffer. When lower priority packets are stored in the buffer, these packets will be transferred at a later time, when the node comes down to a lower level of congestion. Similarly, if a node already has a lower level of congestion, packets of all priorities are transferred according to their priorities.

The strategy used at different levels of congestion is as follows:

Congestion level   Packet transfer strategy at the node
1                  Allow packets of all priorities to pass
2                  Allow packets of all priorities to pass
3                  Discard/store in buffer the packets with header 00
4                  Discard/store in buffer the packets with headers 00 and 01
5                  Discard/store in buffer the packets with headers 00, 01 and 10
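The table above maps directly onto a simple forwarding decision. A minimal C++ sketch (the names are illustrative, and the choice between storing and discarding is left to the caller):

```cpp
enum class Action { Forward, StoreOrDiscard };

// Follows the strategy table: at congestion level L >= 3, packets with
// priority below (L - 2) are stored in the buffer or discarded.
Action transferStrategy(int congestionLevel, unsigned priority /* 0b00..0b11 */) {
    if (congestionLevel <= 2)
        return Action::Forward; // levels 1-2: all priorities pass
    // level 3 blocks 00; level 4 blocks 00 and 01; level 5 blocks 00, 01 and 10
    unsigned lowestAllowed = static_cast<unsigned>(congestionLevel - 2);
    return priority >= lowestAllowed ? Action::Forward : Action::StoreOrDiscard;
}
```

Note that a header-11 packet is forwarded at every level, which is what yields its bounded delay in the results below.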

Storing packets with lower priorities at higher congestion levels actually increases the delays of these packets. The delays of lower priority packets increase for the benefit of higher priority packets, which are allowed to pass through. In fact, when we store the lower priority packets in the buffer, we are decreasing one parameter at the cost of another: the available buffer size will now be smaller for new packets, but the input rate will also decrease. A decrease in available buffer is an indicator of higher congestion, whereas a decrease in input rate is an indicator of lower congestion. However, the input rate contributes more to the congestion level than the available buffer size (due to the higher weightage given to it). Hence, ultimately the congestion level decreases. This is particularly important in delay tolerant networks, where buffer size is almost infinite due to intermittent connectivity. Owing to cost factors, packet discard is not allowed in delay tolerant networks because of the huge cost of retransmission. Hence, DTNs can implement this strategy as far as transfer of priority packets is concerned. We must note that delays for lower priority packets will increase under this approach.

Experimental Results:

The following results were obtained after implementing the above technique:

FIG 1: Frequency of data occurrence across congestion levels 1-5 for 2000 samples (without congestion control).

FIG 2: Frequency of data occurrence across congestion levels 1-5 for 2000 samples (after applying the congestion control strategy).

Discussion:

The above output was generated with the help of Weka Explorer on a set of 2000 random samples initially generated with the C++ code. Fig 1 shows the congestion levels without applying any congestion control strategy, whereas Fig 2 shows the congestion levels after applying the congestion control strategy. It is seen in Fig 2 that more samples have a congestion level of 1 or 2, whereas the number of samples at higher levels of congestion has decreased. This is because when we apply congestion control at levels 3, 4 and 5, the higher congestion level falls to some lower level of congestion, at the cost of larger delays for the lower priority packets. The detailed Weka analysis of the above figures is shown on the results and analysis page.

Similarly, we have the statistics for the larger sample sizes:

[Figures: frequency of data occurrence across congestion levels 1-5 for 4000 and 8000 samples, before and after applying the modified approach.]

Performance analysis:

A computer network is judged on the basis of throughput and delay. We calculated the throughput and found the following results.

Throughputs for packets with headers 00, 01, 10 and 11 for different numbers of samples:

NO OF SAMPLES   PACKET-00   PACKET-01   PACKET-10   PACKET-11
2000            0.131024    0.650067    0.938991    1.00286
4000            0.184223    0.662533    0.968426    1.002752
6000            0.201383    0.671833    0.950679    1.002533
8000            0.212053    0.699634    0.95839     1.002423
10000           0.159936    0.64948     0.948905    1.002851

[Figure: throughput (scale 0 to 1.2) plotted for packets with headers 00, 01, 10 and 11, one series per sample size.]

The result obtained is consistent with our approach. As expected, the packet with priority 11, i.e., the highest priority, has the highest throughput of almost unity (on a scale of 0 to 1), and since higher priority data is preferred over lower priority data, data having lower priority has lower throughput. (The throughput for packets with header 11 showing a value slightly greater than 1 is just a programming limitation.) A packet with header 11 is allowed to pass through at any level of congestion and should strictly have a throughput of 1. The numbers 2000, 4000, etc. shown on the right hand side of the graph indicate the number of samples for which the throughput has been found.

Delay Calculations yielded the following results:

Delays for packets with headers 00,01,10,11 for different samples of data inputs

NO OF SAMPLES   PACKET-00   PACKET-01   PACKET-10   PACKET-11
2000            7.632174    1.538303    1.064973    0.997148
4000            5.428199    1.509359    1.032603    0.997255
6000            4.965664    1.488465    1.05188     0.997473
8000            4.715794    1.429319    1.043417    0.997583
10000           6.252518    1.539693    1.053846    0.997158

[Figure: delay (scale 0 to 8) plotted for packets with headers 00, 01, 10 and 11, one series per sample size.]

Here too, the results obtained are consistent with our approach and match expectations. They show that higher priority data has a lower delay (the ideal delay when there is no congestion is assumed to be 1). The highest priority has the lowest delay, of almost unity (on a scale of 1 and above), and since higher priority data is preferred over lower priority data, data having lower priority has a higher delay. (The delay showing a value slightly lower than 1 for packets with header 11 is just a programming limitation.) A packet with header 11 is allowed to pass through at any level of congestion and should strictly have a delay of 1.

This shows that the proposed technique of dividing the data into different priorities according to importance is fruitful and yields satisfactory results. In effect, the technique distributes delay over the entire traffic in such a manner that packets of the highest priority suffer a minimum delay, thereby ensuring a minimal bounded delay.

Concluding Remarks:

1) Although this work was done in view of Delay Tolerant Networks, it can also be applied to Real Time Networks.

2) The input is assumed to be random, where a Gaussian distribution would have been the better assumption.

3) The technique can be further analysed by implementing it on network simulators, e.g., OMNeT++.

