CONGESTION CONTROL IN ATM-BASED
BROADBAND ISDNS
Sam Hsu and Mohammad Ilyas
Department of Computer Science and Engineering
Florida Atlantic University
Boca Raton, Florida 33431
Tel: (561) 367-3855
Email: [sam,mohammad]@cse.fau.edu
ABSTRACT. In this paper, we present a new congestion control technique for
ATM networks. The technique includes admission control and traffic shaping.
The network traffic consists of real-time traffic and data traffic. Call acceptance is
based upon the effective bandwidth, and data traffic flow is controlled by the
effective buffer. The effective bandwidth of a switching node is defined as a vector of
bandwidth and an estimated maximum delay at the node; the effective buffer is
defined as a scalar buffer size. The proposed scheme is analyzed by simulation,
and the results are presented in comparison with other studies under similar traffic
conditions.
1. INTRODUCTION
Broadband Integrated Services Digital Networks (B-ISDNs) are rapidly developing high
speed networks that will provide an infrastructure to support transportation of information
from a wide variety of services including video, audio, and data. The switching technique to
be used in B-ISDNs is called Asynchronous Transfer Mode (ATM). All the information in
ATM-based networks is transported in the form of cells. A cell is 53 bytes long:
it carries 48 bytes of information and a 5-byte header that contains routing and
other management information.
An integrated mixture of packetized information from a variety of services is highly bursty in
nature and hence poses a variable demand for network resources. If such traffic is not
appropriately controlled, the network may become congested and that may affect the network
performance (throughput as well as delay) very adversely. In slow speed networks, such
situations are handled by taking appropriate steps to reduce the effects of congestion, after the
congestion sets in and is detected. Such remedial steps are referred to as reactive techniques.
However, in high speed networks, reactive techniques are not feasible simply because there is
not enough lead time to react to a congested state of the network. Therefore, in B-ISDNs,
we need to develop preventive techniques to avoid network congestion. Preventive
congestion techniques have received significant attention from researchers in recent years.
Several techniques have been proposed and most of the techniques are applied in the form of
traffic admission control [1,2,3].
In this paper, a preventive congestion avoidance technique is proposed. This technique
includes admission control, and traffic shaping [4]. The network traffic in this study is
assumed to consist of real-time traffic and data traffic. Call acceptance is based upon the
effective bandwidth and data traffic flow is controlled by effective buffer. Effective
bandwidth for a switching node is defined as a vector of bandwidth and an estimated
maximum delay at the node. Effective buffer is defined as a scalar of buffer size. The
proposed scheme is analyzed by simulation and the results are presented in comparison with
other studies under similar traffic conditions.
The next section of this paper describes the proposed congestion avoidance technique.
Section 3 presents the simulation model, and Section 4 gives simulation results that show
the feasibility of the proposed technique. Finally, some conclusions are presented in Section 5.
2. PROPOSED TECHNIQUE
The proposed congestion control technique features parameterized call acceptance.
Available bandwidth and maximum node delay are two crucial parameters used for setting up
connections. Bandwidth is pre-allocated for real-time traffic based on prescribed mean bit
rates. Available buffers are the control parameter for admitting non-real-time cell transfers
on a link-by-link basis. Details are shown below [4].
1. Two types of traffic are defined in the model: a) Real-time Traffic (RT): Cells of this type
are delay-sensitive. They must be delivered to the destination within a predefined time
frame. b) Data Traffic (DT): Cells of this type are delay-insensitive, but they are
loss-sensitive. All cells must be delivered.
2. EB (Effective Bandwidth) is the criterion used for call acceptance. There exists a separate
EB for each type of traffic and for each node. EB is a two-element vector with the format of
EB = (x, y).
The EB of a node is defined as follows:
EBi = (CAVAILi , Mi)    (1)
where
EBi = the EB of node i
CAVAILi = the available (unallocated) channel capacity at node i
Mi = the maximum node delay at node i
Note: For simplicity, all definitions in this model are time-implicit. The time factor is
syntactically omitted but intuitively understood. For example, EBi is short for EBi(t),
denoting the EB of node i at time t (the time node i is inquired).
The EB of an RT traffic is defined as:
EBRTi,j = (BRTi , DRTi,j)    (2)
where
EBRTi,j = the EB of the ith RT traffic at node j
BRTi = the prespecified mean bit rate of the ith RT traffic
DRTi,j = the allowable maximum node delay of the ith RT traffic at node j, with
DRTi,j = DRTi,pred(i,j) - Mpred(i,j)
where
pred(i,j) = the predecessor of the jth node of the ith traffic
The EB of a DT traffic is defined as:
EBDTi,j = (0DTi , D+DTi,j)    (3)
where
EBDTi,j = the EB of the ith DT traffic at node j
0DTi = the prescribed mean bit rate of the ith DT traffic, which is zero at
connection setup time
D+DTi,j = a quantity that is larger than the allowable maximum node delay for the
ith DT traffic at node j
The operation on EBs is defined as follows:
O(EB1, EB2) = 1 if x1 ≥ x2 and y1 ≤ y2; 0 otherwise    (4)
where
EB1 = (x1, y1) and EB2 = (x2, y2)
An RT connection request is granted only if its EB can be satisfied by all intermediate nodes
on the route; i.e., RTi is granted its connection request only if O(EBj, EBRTi,j) = 1 holds
for all j's on the route. A DT traffic is also connection-oriented; however, a DT connection
request is always granted. From the EB definition for DT traffic (Definition 3), we know that
acceptance is instantaneous. In this case, a route can be selected randomly by the entrance
node.
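The EB-based admission test of Definitions 1, 2 and Operation 4 can be sketched as follows. Node capacities, rates, and delay values below are illustrative, not from the paper:

```python
# Sketch of EB-based RT call admission (Definitions 1, 2 and Operation 4).
# All numeric values are illustrative examples.

def admit_rt(node_ebs, traffic_eb):
    """Grant an RT connection only if every node j on the route satisfies
    O(EBj, EBRTi,j) = 1, i.e. CAVAILj >= BRTi and Mj <= DRTi,j.
    The residual delay budget shrinks by Mj at each hop (Definition 2)."""
    rate, delay_budget = traffic_eb
    for c_avail, m_node in node_ebs:
        if not (c_avail >= rate and m_node <= delay_budget):
            return False
        delay_budget -= m_node  # DRTi,j = DRTi,pred(i,j) - Mpred(i,j)
    return True

# Example: a 2 Mbps mean-rate stream with a 10 ms total node-delay budget
route = [(5.0e6, 0.002), (3.0e6, 0.004)]  # (available capacity, max node delay)
print(admit_rt(route, (2.0e6, 0.010)))    # True: both hops fit the budget
print(admit_rt(route, (4.0e6, 0.010)))    # False: second hop lacks capacity
```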
EF (Effective Buffer) is the major criterion used to grant cell transfer requests for DT traffic
from node to node. There exists a separate EF for each DT cell transfer request and for each
node. EF is a scalar quantity.
The EF of a node i is defined as follows:
EFi = the available (unallocated) buffer at node i    (5)
The EF of a DT cell transfer request is defined as follows:
EFDTi = the buffer requirement of the current DT cell transfer request at node i    (6)
The operation on EFs is defined as follows:
O(EF1, EF2) = 1 if EF1 ≥ EF2; 0 otherwise    (7)
A transfer request to node j from node i is granted only if O(EFj, EFDTj ) = 1. We assume
that there exists at the entrance node a device that can mark all cells of this traffic before they
enter the input buffer. It is obvious that DT transfer requests are done on a node-by-node
basis, subject to the availability of EF at the next node. More precisely, this is a receiver
credit-based windowing mechanism – it is up to the receiver to decide the number of cells
allowable for transfer dynamically. In a node, DT cell transfer requests targeting the same
next node from different incoming nodes can be merged to form a single request.
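The receiver credit-based EF mechanism of Definitions 5–7 can be sketched as follows; buffer sizes are illustrative:

```python
# Sketch of the EF (effective buffer) check for hop-by-hop DT transfers
# (Definitions 5-7). Buffer sizes are illustrative examples.

def grant_transfer(ef_next_node, ef_request):
    """O(EF1, EF2) = 1 iff the next node's free buffer covers the request."""
    return ef_next_node >= ef_request

class Node:
    def __init__(self, buffer_cells):
        self.free = buffer_cells  # EFi: unallocated buffer at this node

    def request(self, cells):
        """Receiver-side credit: grant as many cells as free buffer allows."""
        granted = min(cells, self.free)
        self.free -= granted
        return granted

nxt = Node(buffer_cells=100)
print(grant_transfer(nxt.free, 40))  # True: 100 free cells cover 40
print(nxt.request(40))               # 40 cells granted
print(nxt.request(80))               # only 60 cells remain, so 60 granted
```

The receiver decides the grant size dynamically, which is what lets requests from several incoming nodes be merged into one.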
3. SIMULATION MODEL
The performance of the proposed congestion control technique is evaluated by using
simulations. We assume the following for all our simulations: a) Channel capacity
allocation is based on the prescribed mean arrival rate for each input source. b) RT traffic is
shaped by employing a leaky bucket method [5] based on the channel capacity allocated. In
our simulations, the leaking rate of a leaky bucket queue coincides with the service rate for
that queue. c) Each DT input source is allocated a large buffer (a fat bucket policy) to
accommodate sudden bursts of cells without risking any loss. d) The system is in
equilibrium and running at full speed (all channel capacity allocated) when it is analyzed.
Nodes can be either edge nodes or internal nodes. Both support RT and DT traffic. RT
traffic is represented by VBR video traffic. There are n (n ≥ 1) channels in the category;
each may have different levels of burstiness. We define burstiness as a ratio of peak rate to
mean rate – the higher the ratio, the burstier the input traffic. Cells carrying video picture
elements are organized into frames. A frame is generated every 1/30 seconds on each
channel. Cell arrivals are based on the first-order autoregressive Markov model [6]
λi(n) = ai λi(n - 1) + Gi(n)    (8)
where
λi(n) = the source bit rate during the nth frame of class i (i=1: low motion, i=2:
medium motion, i=3: high motion)
ai = an autoregression coefficient that takes one of three possible values, one per
motion class (examples of specific values for ai will be given shortly)
Gi(n) = a Gaussian random process with mean μi and variance σi²
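A minimal sketch of the autoregressive source of Formula 8; the coefficients ai are the values used later in Section 4.1 (from [6]), while the Gaussian mean and standard deviation are illustrative assumptions:

```python
# Sketch of the first-order AR video source, lambda_i(n) = a_i*lambda_i(n-1) + G_i(n).
# g_mean and g_std are illustrative; a=0.55 / 0.83 are the low/high-motion
# coefficients quoted in Section 4.1.
import random

def ar_bit_rates(a, g_mean, g_std, frames, rate0=0.0, seed=1):
    random.seed(seed)
    rate, rates = rate0, []
    for _ in range(frames):
        # Clamp at zero: a bit rate cannot go negative.
        rate = max(0.0, a * rate + random.gauss(g_mean, g_std))
        rates.append(rate)
    return rates

low = ar_bit_rates(a=0.55, g_mean=0.5, g_std=0.1, frames=300)
high = ar_bit_rates(a=0.83, g_mean=0.5, g_std=0.1, frames=300)
# A larger a_i gives a higher, more strongly correlated rate:
# the steady-state mean is roughly g_mean / (1 - a_i).
print(sum(low) / len(low), sum(high) / len(high))
```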
Note that video conferencing is an example of low motion video, in which the background
stays the same for most of the time, and the foreground scene may change slightly from time
to time. Sports video is an example of high motion video, in which both the background and
foreground scenes change rapidly from one moment to the next. Within each frame, cells
are uniformly distributed. However, frames are not synchronized among channels. Each
video channel is provided with an input buffer, allocated according to the following formula:
Bi = (Dmaxi × Calloci) / Ttrans    (9)
where
Bi = buffer allocated to the ith channel (cells)
Dmaxi = maximum node delay allowed for the ith channel (seconds)
Ttrans = cell transmission time (seconds)
Calloci = channel capacity allocated to the ith channel (fraction of link capacity)
This formula ensures that all cells buffered in an input queue can be transmitted within the
allowable node delay for that channel. Our implementation of the leaky bucket concept is
based on channel capacity allocation: cells queued in a buffer will be served (leaked)
according to the channel capacity allocated to the buffer. An arriving cell that finds
the buffer full is dropped. Cells within each buffer are served in an FCFS manner.
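Formula 9 is easy to sketch numerically. The link rate, cell size, and delay bound below are the RT settings used in Section 4.1; the 33% per-channel allocation is an illustrative assumption, though it happens to be consistent with the 389-cell RT buffer quoted in the simulation note:

```python
# Sketch of input-buffer sizing per Formula 9: Bi = (Dmaxi * Calloci) / Ttrans.
# 100 Mbps link and 53-byte cells are from the paper; the 33% allocation is
# an illustrative example.

CELL_BITS = 53 * 8
LINK_BPS = 100e6
T_TRANS = CELL_BITS / LINK_BPS  # cell transmission time, ~4.24 microseconds

def buffer_cells(dmax_s, calloc_fraction):
    """Cells servable within the delay bound at the channel's allocated rate."""
    return int(dmax_s * calloc_fraction / T_TRANS)

# A channel holding 33% of the link with a 5 ms node-delay bound:
print(buffer_cells(0.005, 0.33))  # 389
```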
There are m (m  1) input channels for DT traffic. No data cell is discarded at a node.
Arriving data cells are queued until their turn for service. Each channel is allocated a
relatively large buffer to avoid cell overflow. Cell arrivals of DT traffic are modeled
following the Poisson distribution: interarrival times are independently identically
exponentially distributed. Each arrival is batched, with geometrically distributed batch size.
Different channels may have different levels of burstiness. An FCFS service discipline is
imposed on each data input traffic buffer. DT traffic is allocated a large buffer without input
shaping. No data cell will be lost but the trade-off is a potentially longer node delay.
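The DT arrival process described above, Poisson batch arrivals with geometrically distributed batch sizes, can be sketched as follows; the rate and mean batch size are illustrative:

```python
# Sketch of the DT arrival model: Poisson (exponential interarrival) batch
# arrivals with geometric batch sizes. Parameter values are illustrative.
import random

def dt_arrivals(mean_interarrival_s, mean_batch, duration_s, seed=2):
    """Return (arrival_time, batch_size) pairs over [0, duration_s)."""
    random.seed(seed)
    p = 1.0 / mean_batch  # geometric parameter: mean batch size = 1/p
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(1.0 / mean_interarrival_s)
        if t >= duration_s:
            return arrivals
        batch = 1
        while random.random() > p:  # geometric on {1, 2, ...}
            batch += 1
        arrivals.append((t, batch))

burst = dt_arrivals(mean_interarrival_s=1e-4, mean_batch=8, duration_s=0.1)
cells = sum(b for _, b in burst)
print(len(burst), cells)  # roughly 1000 batches and 8000 cells on average
```

A larger mean batch size at a fixed cell rate makes the traffic burstier, which is the knob varied in the one-node study.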
All cells that have been successfully processed through the server are sent directly to the
output channel. No output buffer is provided at a node. Time is slotted at the node, with
each slot accommodating exactly one cell. All slots have the same length. Cells may arrive
at the node at any time; however, service of a cell begins only at the start of a time slot. To
take advantage of statistical multiplexing, channel capacity is allocated to each input traffic
based on its prescribed mean arrival rate regardless of traffic type. The queueing perspective
of the proposed congestion control technique is depicted in Figure 1.
Figure 1: A queueing perspective of the proposed congestion control technique.
4. SIMULATION RESULTS
Two sets of simulation studies are carried out: a one-node study and a two-node study. The purpose of
the one-node study is to provide some insight into the interrelationships among different
traffic when multiplexed together in a node. The purpose of the two-node study is to analyze
the behavior of a node that accepts a mix of external and internal traffic.
4.1 One-node study
This simulation study is to show the performance at a node under different levels of data
traffic burstiness. To simplify, we assume the node under study is an edge node with no
internal traffic. The channel capacity at this node is 100 Mbps. Incoming bit streams are
grouped into cells of 53 bytes each, as specified by the CCITT standards for ATM. The
node supports both RT and DT traffic. Its RT traffic components consist of three different
motion levels of video channels. Using the data obtained in [6], we select a1 = 0.55, a2 =
0.79, a3 = 0.83 (coefficients used in Formula 8) to represent low, medium and high motion
levels, respectively. They will be referred to as Channels R1, R2, and R3, respectively. The
node channel capacity allocated to real-time traffic is evenly divided among these three
channels. The remaining capacity is allocated to data traffic. The data channel will
hereafter be designated as D1. All channel capacity allocations are based on specified mean
input rates. Also, we may view D1 as an aggregate of several data input channels since the
superposition of independent Poisson processes is itself a Poisson process.
Simulation results for this one-node study are depicted in Figures 2-5. The overall
observations are summarized in the following two tabular forms which outline the
relationships between performance measures and their corresponding input factors for both
DT and RT traffic.
Table 1: Data traffic
                    mean delay    max buffer
Data burstiness     Direct        Direct

Table 2: Real-time traffic
                    cell loss     mean delay
Data burstiness     Inverse       Inverse
Motion level        Direct        Direct
Note: The following four figures all have the following common traffic characteristics:
Poisson arrivals for data traffic
First order autoregressive Markov process for real-time traffic
Channel capacity: 100 Mbps (fully allocated)
Constant service time: 4.24sec/cell
Real-time traffic input burstiness: 2:1
Max node delay for real-time traffic: 5 milliseconds
Max buff size for real-time traffic: 389 cells
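The constant service time in the note follows directly from the cell size and link rate; as a quick check:

```python
# One ATM cell is 53 bytes = 424 bits; at 100 Mbps the per-cell
# transmission (service) time is 424 / 100e6 s = 4.24 microseconds.
service_time_us = 53 * 8 / 100e6 * 1e6
print(round(service_time_us, 2))  # 4.24
```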
Figure 2: Mean node delays for DT traffic
Figure 3: Max buffer requirements for DT traffic
Figure 4: Mean node delays for RT traffic
Figure 5: Cell loss probability for RT traffic
4.2 Two-node study
The configuration for this two-node ATM network study is depicted in Figure 6. Both are
edge nodes; however, Node 1 accepts only external traffic, whereas Node 2 accepts both
external and internal traffic. There are four input sources for Node 1: two data input
channels, designated D1 and D2, respectively; two real-time input channels, designated R1 and
R2, respectively. All output traffic from Node 1 goes through an internal link to Node 2. For
Node 2, there are two external input sources; one real-time channel, designated R3, and one
data channel, designated D3. The internal link for Node 2 contains four channels which are
multiplexed traffic of R1, R2, D1, and D2 from Node 1. R2 and D2 terminate at Node 2. For
simplicity, all external real-time channels (R1 through R3) have the same input parameters.
All external data channels (D1 through D3) have the same input parameters. Node channel
capacity is 100% allocated at both nodes. The two nodes differ in that Node 1 is saturated
with external traffic, whereas Node 2 is saturated with a combination of external and internal
traffic.
Figure 6: A two-node configuration. Node 1 receives external real-time channels R1, R2
and data channels D1, D2; its multiplexed output (R1, R2, D1, D2) travels over an internal
link to Node 2. Node 2 additionally receives external channels R3 (real-time) and D3 (data);
R2 and D2 terminate at Node 2, while R1, R3, D1, and D3 continue as outgoing traffic to the
next node.
Simulation results show that the performance of the external traffic remains largely the same
at both nodes; however, the internal traffic at Node 2 shows substantially better performance
than at Node 1. For real-time traffic: a) the cell loss probability at Node 2 is zero
(about 5% at Node 1), and b) mean cell delays at Node 2 are roughly 18% of those at Node 1.
For data traffic: a) the buffer requirement at Node 2 is about one-third of that required at
Node 1, and b) the mean cell delays at Node 2 are roughly 10% of those of the corresponding
input channels at Node 1. These results are understandable since the internal input traffic
to Node 2 has already been regulated at the previous node, resulting in steadier and less
bursty traffic. The simulation results from this two-node study show that when nodes in a
network are saturated with input traffic (channel capacity 100% allocated), channel
utilization remains roughly the same at all nodes; however, a node with internal traffic in
general performs much better with respect to its internal input channels than a node with
only external traffic.
4.3 Comparison with other models
The results of this study are compared with that reported in [7] under similar conditions.
The results in [7] are for four different versions of leaky bucket scheme. In version I, a cell
is immediately dropped if there is no token available on its arrival. In version II, a cell is
queued if it arrives at a time when there is no token available, and is dropped if the queue is
full. No buffer is provided in version III. An arriving cell is tagged and then sent to the
output channel directly if there is no token available upon its arrival. Version IV is a
combination of versions II and III. The method used in our proposed technique is similar to
version II. However, instead of a token-based scheme, our technique uses a channel capacity
allocation scheme to determine the availability of service to a cell. A performance
comparison in terms of cell loss probabilities between the proposed technique and the above
four methods for similar traffic settings is shown in Figure 7. The comparison shows that
the proposed technique outperforms versions I-III, and is comparable to version IV.
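Leaky bucket version II, the variant our technique most resembles, can be sketched as follows. The token rate, bucket size, and queue size are illustrative values, not parameters from [7]:

```python
# Sketch of leaky bucket "version II" from [7]: an arriving cell waits in a
# finite queue when no token is available, and is dropped only when that
# queue is full. All parameter values are illustrative.
from collections import deque

class LeakyBucketII:
    def __init__(self, token_rate, bucket_size, queue_size):
        self.tokens = self.bucket_size = bucket_size
        self.token_rate = token_rate  # tokens replenished per tick
        self.queue = deque()
        self.queue_size = queue_size
        self.sent = self.dropped = 0

    def arrive(self, cells):
        for _ in range(cells):
            if len(self.queue) < self.queue_size:
                self.queue.append(1)
            else:
                self.dropped += 1  # queue full: cell is lost

    def tick(self):
        self.tokens = min(self.bucket_size, self.tokens + self.token_rate)
        while self.queue and self.tokens >= 1:  # spend tokens on queued cells
            self.queue.popleft()
            self.tokens -= 1
            self.sent += 1

lb = LeakyBucketII(token_rate=1, bucket_size=5, queue_size=10)
lb.arrive(20)            # a 20-cell burst: 10 queued, 10 dropped
for _ in range(10):
    lb.tick()
print(lb.sent, lb.dropped)  # 10 10
```

Our technique replaces the token pool with per-buffer channel capacity allocation, but the queue-then-drop behavior on overload is the same.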
5. CONCLUSION
In this paper, we have presented a simulation analysis of a new congestion control technique
for ATM networks. The technique includes admission control and traffic shaping, and deals
with real-time traffic and data traffic. Call acceptance in this technique is based upon the
effective bandwidth and the effective buffer. The effective bandwidth of a switching node is
defined as a vector of bandwidth and an estimated maximum delay at the node, whereas the
effective buffer is defined as a scalar buffer size. The performance of the proposed
technique is compared with existing (leaky-bucket based) techniques under similar
conditions. The comparison shows that the performance of the proposed technique is better
than that of leaky-bucket based schemes in most cases.
[Plot: log(cell loss probability), from 0 down to -6, versus offered load, 0.2 to 1.0, for
leaky bucket versions I-IV and the proposed technique.]
Figure 7: Performance comparison between proposed technique and other models.
REFERENCES
[1]
Bae, J., and Suda, T., “Survey of Traffic Control Schemes and Protocols in ATM Networks”,
Proceedings of the IEEE, Vol. 79, No. 2, February 1991, pp. 170-189.
[2]
Ilyas, M. and Mouftah, H.T., “A Comparative Simulation of Congestion Avoidance Techniques in
Broadband ISDNs”, Conference Record of IEEE Global Telecommunications Conference GLOBECOM
‘90, December 1990.
[3]
Perros H., and Elsayed K., “Call Admission Control Schemes: A Review”, IEEE Communications
Magazine, vol. 34, no. 11, 1996, pp. 82-91.
[4]
Hsu, S., “YACAD: Yet Another Congestion Avoidance Design for ATM-based Networks”, Ph.D.
Dissertation, Florida Atlantic University, Aug. 1993.
[5]
Turner, J., “New directions in Communications (or Which Way to the Information Age?)”, IEEE
Communications Magazine, vol. 24, no. 10, Oct 1986, pp. 8-15.
[6]
Yegenoglu, F., Jabbari B., and Zhang Y., “Modeling of Motion Classified VBR Video Codecs”, IEEE
INFOCOM ’92, 1992, pp. 105-109.
[7]
Chao, H., “Design of Leaky Bucket Access Control Schemes in ATM Networks”, Conference Record
of the IEEE International Conference on Communications, June 1991, pp. 180-187.