A Stop-and-Go Queueing Framework for Congestion Management


S. Jamaloddin Golestani

Bell Communications Research

445 South Street, Morristown, NJ 07960-1910

ABSTRACT

A framework for congestion management in integrated services packet networks based on a particular service discipline, called stop-and-go queueing, is proposed. In this framework, loss-free and bounded-delay transmission is provided to the class of traffic with stringent delay and loss requirements, e.g., real-time traffic, while the bursty traffic without such requirements is treated on a different basis to achieve high transmission efficiency. Loss-free and bounded-delay transmission is accomplished by means of an admission policy which ensures smoothness of the traffic at the network edge, and the stop-and-go queueing which maintains the traffic smoothness throughout the network. Both the admission policy and the stop-and-go queueing are based on a time framing concept, addressed in a previous paper. This concept is further developed here to incorporate several frame sizes into the strategy, thereby providing the necessary flexibility in accommodating throughput and end-to-end delay requirements of different connections on an as-needed basis.

1 Introduction

The problem of congestion control, or more generally traffic management, in packet switching networks has been the subject of extensive research

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing

Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.

© 1990 ACM 089791-405-8/90/0009/0008...$1.50

over the past two decades. A variety of congestion control strategies have been proposed over the past years and some, more or less successfully, applied to conventional data communication networks [1], [2], [3]. Nevertheless, the behavior of packet networks in the presence of congestion and the proper ways of handling traffic in order to make the worst-case loss and delay performance more reliable or predictable are not yet sufficiently understood [4].

This difficulty partly stems from the uncertainties involved in modeling the statistical behavior of many types of traffic sources. But, more importantly, it is due to the complicated way in which different traffic streams interact with each other within a packet network.

Congestion control and traffic management in a broadband integrated services environment is further complicated by the high speed of transmission and by the diverse mix of traffic types and service requirements encountered in such networks. High speed of transmission calls for simplicity of traffic management algorithms in terms of the processing power required for their execution. Over the recent years, the increase in data processing speeds has not kept up with the fast growth of data transmission speeds. Therefore, packet processing time in the network nodes has more and more become the scarce resource, and the processing required for any control function should be kept to a minimum.

Most of the strategies proposed for congestion control in conventional data networks are acknowledgment-based. This means that short-term feedback information (e.g., acknowledgments) from the destination node or some intermediate nodes is used to regulate incoming traffic and to decide about admittance of new packets into the network or about forwarding packets from one node to the next. At broadband transmission rates, the packet duration, or the time required to serve a packet by the link, is very short. Therefore, propagation delays, when measured in terms of the packet duration, are much higher than in narrowband networks. Consequently, acknowledgment-based control strategies will tend to work more slowly and may be unable to keep up with the pace of events occurring in the network.

Services that may be encountered in an integrated services network range from voice and video communications to different forms of data services, such as interactive data communications and file transfers. These represent a wide variety of both traffic characteristics (e.g., average rate and burstiness) and service requirements (e.g., end-to-end delay, delay jitter, packet loss probability, call blocking probability, and error rate). In particular, real-time traffic, e.g., voice and video, is delay-sensitive and must be delivered to the destination within a short period of time. This makes excess packet delays unacceptable and packet loss recovery through retransmission ineffective.

Obviously the tasks of resource management and congestion control are more involved in this integrated environment than in a conventional data network. Here, control algorithms, besides having to deal with a wide range of traffic characteristics, need to be more effective in yielding predictable network behavior and must be more flexible in accommodating different service requirements.

In a previous paper [5], we developed a framing strategy for congestion control, which has several desirable features: it maintains loss-free communication, it provides bounded end-to-end delay, and it is simple to implement. These features make the strategy an attractive solution for the transmission of real-time traffic and other forms of time-critical information in broadband packet networks. The strategy is based on a particular service discipline at the nodes of the network, called stop-and-go queueing.

Our goal in this memorandum is to build upon the previous stop-and-go queueing and framing strategy to obtain a more general congestion management framework. In section 2, after a brief review of the framing strategy, its delay performance is discussed.

Then in section 3, the determining factors in the choice of the frame size are considered and the case for using multiple frame sizes is presented. In section 4, we relax the assumption of a single frame size and obtain a generalized framing strategy based on multiple frame sizes. The realization and performance of the multiple framing strategy are discussed in sections 5 and 6. Finally, the problem of integrating the framing strategy with other congestion control schemes is considered in section 7. We determine a queueing structure by which our framing strategy and alternative congestion management schemes for less demanding traffic types can be incorporated into the same network.

2 Review of the Framing Strategy

The strategy is composed of two parts, a packet admission policy imposed per connection at the source node, and a particular service discipline at the switching nodes, named stop-and-go queueing.

Central to both parts of the strategy is the notion of time frames, hence the name framing strategy. We start from a reference point in time, common to all nodes, and divide the time axis into periods of some constant length T, each called a frame (Fig. 1). In general, it is possible to have different reference points for different links or nodes [5]. However, in order to simplify the present discussion, we assume a common reference point (or time-origin) network-wide.

2.1 Packet Admission Policy

We define a stream of packets to be (r, T)-smooth if during each frame of length T the arrived packets collectively have no more than r·T bits. Equivalently, if only packets of a fixed size Γ are encountered, a stream of packets is (r, T)-smooth when the number of received packets during each frame is bounded by r·T/Γ.

The packet admission part of the strategy is based on this definition of smoothness. After a transmission rate r_k is allocated to a connection k, its packet arrival to the network is required to be (r_k, T)-smooth. In other words, during each frame, the aggregated length of the received packets from connection k should not exceed r_k·T bits, and any new packet which violates this limit is not admitted until the next frame starts. Alternatively, we may require that the allocated rate r_k be large enough so that the stream of packets arriving to the network is (r_k, T)-smooth.
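As an illustration, the admission policy can be sketched in a few lines of Python. This is a sketch under our own naming (the class and method names are not part of the original strategy): each connection tracks the bits admitted in the current frame, and a packet enters only if the frame's running total stays within r·T.

```python
def frame_index(t, T):
    """Index of the frame of length T containing time t (common time-origin)."""
    return int(t // T)

class SmoothnessEnforcer:
    """Admit packets of one connection so the stream stays (r, T)-smooth:
    at most r*T bits may enter the network during any frame of length T."""

    def __init__(self, rate_bits_per_sec, frame_sec):
        self.budget = rate_bits_per_sec * frame_sec  # r*T bits per frame
        self.T = frame_sec
        self.frame = None        # index of the frame currently being filled
        self.used = 0.0          # bits admitted during that frame

    def admit(self, arrival_time, packet_bits):
        """Return True if the packet may enter now; False means it must
        be held until the next frame starts."""
        f = frame_index(arrival_time, self.T)
        if f != self.frame:      # a new frame has started: reset the budget
            self.frame, self.used = f, 0.0
        if self.used + packet_bits <= self.budget:
            self.used += packet_bits
            return True
        return False
```

For example, with r = 1000 bits/sec and T = 0.1 sec, at most 100 bits may enter per frame; a second 60-bit packet arriving in the same frame is held over.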

2.2 Stop-and-Go Queueing

The above packet admission policy guarantees that the traffic stream of each connection k, with an allocated rate r_k, is (r_k, T)-smooth upon admission to the network.

Fig. 1: Dividing the time axis into frames of length T.

Fig. 2: Departing and arriving frames of link ℓ.

If this property continues to hold as the packet stream of each connection arrives at the intermediate switching nodes, then the problem of congestion is indeed resolved. Unfortunately, this is often not the case. We have shown [5] that in a network with conventional first-in first-out (FIFO) queueing, as packets of a connection proceed to the destination node, they tend to cluster together and form longer and longer bursts, which violates the original smoothness property. Formation of long packet bursts can cause serious degradation in the delay and loss performance of the network, and therefore must be avoided.

The stop-and-go queueing is an alternative to the conventional FIFO queueing that solves this problem and guarantees that once the (r_k, T)-smoothness is enforced on all connections k at their source nodes, the property will continue to hold at any subsequent switching node. To facilitate the description of this queueing scheme, let us first introduce the notion of departing and arriving frames on a link. Over each link, we view the time frames as traveling with the packets from one end of the link to the other end, and on to the processing device at the receiving end. Therefore, if we denote by τ_ℓ the sum of the propagation delay plus the processing delay at the receiving end of link ℓ, the frames at the receiving end (arriving frames) will be τ_ℓ seconds behind the corresponding frames at the transmitting end (departing frames), as shown in Fig. 2.

Let us consider a special case in which τ_ℓ, for each link ℓ, is both a constant and a multiple of T, i.e., τ_ℓ = m·T for some integer m. In this case, at any node, all the arriving frames over different incoming links will be synchronous with the departing frames over outgoing links (Fig. 3). In a practical case, we may introduce some additional delay into each link in order to make τ_ℓ a multiple of T and to synchronize the departing and arriving frames.

The stop-and-go queueing discipline in the above synchronous case is based on the following simple rule: transmission of a packet which has arrived at any link ℓ during a frame f should always be postponed until the beginning of the next frame.

Fig. 3: When packets arriving in each frame become eligible for transmission.

This rule is graphically illustrated in Fig. 3, which shows when packets that arrive during each frame become eligible to receive service. It can be proved [5] that if this rule is implemented in a network with the foregoing admission policy, then:

i) Any packet that has arrived at some link ℓ during a frame f will receive service before the frame following f expires.

ii) The packet stream of each connection will maintain the original (r_k, T)-smoothness property throughout the network.

iii) A buffer space of at most 2C_ℓ·T per link ℓ is sufficient to eliminate any chance of buffer overflow, where C_ℓ is the capacity of the link.

2.3 Realization of the Stop-and-Go Queueing

The stop-and-go queueing does not prescribe a specific order of service among eligible packets, and in particular does not require that packets be served on a FIFO basis.

Nevertheless, complying with the FIFO rule is advantageous both because it simplifies the realization and because packets will always be delivered in sequence.

Incorporation of the FIFO rule into the stop-and-go queueing results in what we may call a delayed-FIFO queueing. Fig. 4 illustrates two possible realizations of this strategy for the case of synchronous departing and arriving frames. In Fig. 4.a we have a double-queue structure in which during each frame one FIFO queue is loaded in, while the other one is served from. At the end of each frame the order is reversed.

Fig. 4: Realization of the stop-and-go queueing strategy. a) A double-queue structure. b) A single-queue structure.

This arrangement ensures that packets received in a particular frame are not served during that same frame. Fig. 4.b shows how the same thing can be accomplished using a single FIFO queue. Here, a service controller, at the beginning of each frame, marks the present load in the queue as eligible and then connects the transmission facility to the queue just for the time needed to serve the eligible load. After that, service is interrupted until the beginning of the next frame, so that any packet which is received during the current frame, and therefore is not yet eligible, does not get transmitted before the frame expires.
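The single-queue realization of Fig. 4.b can be sketched as follows. This is an illustrative sketch with hypothetical names; per-packet transmission times and the frame timer are abstracted away, and only the eligibility bookkeeping is shown.

```python
from collections import deque

class DelayedFIFO:
    """Single-queue realization of stop-and-go queueing (cf. Fig. 4.b):
    at each frame boundary the controller marks the current queue content
    as eligible; only eligible packets may be transmitted during the frame."""

    def __init__(self):
        self.queue = deque()    # FIFO of packets
        self.eligible = 0       # packets marked eligible at the last frame start

    def arrive(self, packet):
        self.queue.append(packet)        # not eligible until the next frame

    def frame_start(self):
        self.eligible = len(self.queue)  # freeze the load present right now

    def serve_frame(self):
        """Transmit the eligible load, then stay idle until the next frame."""
        served = [self.queue.popleft() for _ in range(self.eligible)]
        self.eligible = 0
        return served
```

A packet arriving after the frame boundary ("c" below) is not served until the following frame, exactly as the marking rule requires.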

In practical cases, where the sum of the propagation and processing delays of a link is neither negligible nor a multiple of T, the simplest way to envisage the stop-and-go queueing is by introducing an additional delay into each link, thereby making the total delay of the link a multiple of T and synchronizing the arriving and departing frames.

Denoting this added delay for link ℓ by θ_ℓ, and defining τ̂_ℓ as the first multiple of T larger than or equal to τ_ℓ, we notice that

θ_ℓ = τ̂_ℓ − τ_ℓ = (−τ_ℓ) mod T    (1)

and

0 ≤ θ_ℓ < T.    (2)

This additional delay can be incorporated into the structure of Fig. 4.b by modifying the service controller function as follows: the service controller of link ℓ, at the end of each departing frame f, should register the number of packets received during f. These packets become eligible for transmission θ_ℓ seconds later. So transmission should be resumed θ_ℓ seconds after the end of f, just for the period of time required to serve the eligible load. Considering the extra buffer space needed due to this additional delay, the required buffer size can now be as big as (2T + θ_ℓ)·C_ℓ. This is always less than 3C_ℓ·T.

In conclusion, the stop-and-go queueing may be implemented with a FIFO queueing structure and some simple additions to the service controller at each link. We emphasize that the time instances of interrupting or resuming service on each link are determined locally, and no exchange of information between nodes is required for this purpose.
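Eq. (1) translates directly into code. The following one-line sketch (the function name is ours, with times in seconds) computes the synchronizing delay for a link:

```python
def sync_delay(tau, T):
    """Added delay theta = (-tau) mod T that makes the total link delay
    tau + theta the first multiple of T at or above tau (Eq. (1));
    by construction 0 <= theta < T (Eq. (2))."""
    return (-tau) % T
```

For instance, a link with tau = 2.5 and T = 1.0 needs theta = 0.5 so that the total delay becomes 3T; a link whose delay is already a multiple of T needs no added delay.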

3 Impact of Frame Size on Queueing Delay and Bandwidth Allocation

The foregoing packet admission and queueing policy, besides eliminating buffer overflows, leads to an attractive property in terms of packet delays. It can be shown [5] that all packets of any given connection undergo the same amount of delay in the network, except for a small jitter. In other words, the end-to-end delay of a packet of a given connection may be represented as:

D′ = D + d ,    (3)

where D, defined as the connection delay, is constant for all packets of the connection, and d, the delay jitter, varies from packet to packet. The delay jitter is limited to the size of a frame:

−T < d < T .    (4)

The connection delay D itself can be broken into two parts:

D = τ + Q ,    (5)

where τ is the sum of the end-to-end propagation and processing delays and Q represents the total queueing delay of the connection. Furthermore, Q is bounded as follows:

H·T ≤ Q < 2H·T ,    (6)

where H is the number of links traversed by the connection.

These results are summarized in Fig. 5, which illustrates a typical distribution of packet delays and the parameters involved. The end-to-end packet delay is constant except for a small delay jitter. The constant delay term D is greater than the total propagation and processing delay τ by Q, which is no larger than 2H·T and no smaller than H·T.

Also illustrated in this figure is a typical distribution of the packet delays in a network with conventional FIFO queueing.

Fig. 5: Comparison of typical delay distributions in a packet network with the framing strategy and one with conventional FIFO queueing. (The shaded area roughly corresponds to the packet loss probability.)

The delay in such a network can be smaller on the average than that of the framing strategy. However, it is distributed over a wide range, which makes the behavior of the network rather unpredictable and subject to buffer overflow and packet loss. We conclude that the delay performance resulting from the framing strategy is attractive in cases where the term Q, the total queueing delay of the connection, is acceptable.

As mentioned before, the total queueing delay of the connection, Q, is a fixed value between H·T and 2H·T, where H is the number of links traversed by the connection. On the other hand, the required buffer space per link ℓ for providing congestion-free communication is at most (2T + θ_ℓ)·C_ℓ, which is almost proportional to T. Therefore, by choosing a sufficiently small frame size T, one could arbitrarily reduce the queueing delays as well as the buffer requirements of the network. However, a small frame size T comes at the cost of reduced flexibility in bandwidth allocation over the links of the network, which can only be done in incremental steps of one packet per frame.

In order to study the trade-off between queueing delays and flexibility in bandwidth allocation, we consider the case where all packets have a fixed length Γ. In this case the incremental step of bandwidth allocation, Δr, is:

Δr = Γ/T bits/sec .    (7)

For a given connection which traverses H hops, the queueing delay can be expressed as

Q = α·H·T ,    (8)

where α is some constant between 1 and 2, depending on the source-destination path of the given connection. It follows that:

Δr·Q = α·H·Γ ,    1 ≤ α < 2 .    (9)

This equation clearly states that for a fixed source-destination route and a fixed packet size, Δr and Q cannot be simultaneously decreased, and a reduction in one leads to a proportional increase in the other.

As a numerical example, for a packet length of Γ = 400 bits and a connection with H = 5 and α = 1.6:

Δr·Q = 3200 bits .    (10)

Therefore, if T is chosen to yield a queueing delay of 8 msec for this connection (T = 1 msec), Δr becomes 400 kbits/sec. On the other hand, if a bigger T is chosen to reduce Δr to 8 kbits/sec (T = 50 msec), the queueing delay becomes 400 msec.
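The arithmetic of this example can be checked with a short script; the symbols follow Eqs. (7) and (8), and the constants are those of the example above.

```python
GAMMA, H, ALPHA = 400, 5, 1.6         # packet length (bits), hops, constant of Eq. (8)

def delta_r(T):
    """Incremental step of bandwidth allocation, Eq. (7): one packet per frame."""
    return GAMMA / T                  # bits/sec

def queueing_delay(T):
    """Connection queueing delay, Eq. (8)."""
    return ALPHA * H * T              # seconds
```

Whatever T is chosen, the product delta_r(T) * queueing_delay(T) equals ALPHA*H*GAMMA = 3200 bits, so reducing one quantity necessarily inflates the other.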

4 Stop-and-Go Queueing with Multiple Frame Sizes

The foregoing discussion shows that in order to have small queueing delays (on the order of a few milliseconds) for certain types of connections, and still be able to allocate bandwidth to other connections in small increments (of a few kilobits per second), more than one frame size is needed. In this section a generalization of the framing strategy is presented, which allows the use of more than one frame size.

We consider G frame sizes T_1, T_2, ..., T_G, and assume that each T_g is a multiple of T_{g+1}, i.e., T_g = K_g·T_{g+1}, g = 1, ..., G−1, for some integer K_g.

Again, we start from a reference point in time, common to all nodes, and divide the time axis into periods of length Tg, each called a type g frame.

We repeat this for every value of g, starting from the same reference point. This is illustrated in Fig. 6 for the case of G = 2 and T_1 = 3T_2.

Frames of any type are viewed as traveling with the packets over the links. Again, we initially consider a case where τ_ℓ, for each link ℓ, is a multiple of T_1 (and therefore a multiple of any T_g), so that the departing and arriving frames of each type become synchronous.

Every connection in the network is set up as a type g connection for some g = 1, 2, ..., G, in which case it is associated with the frame size T_g. The admission policy requires that the packet arrival of any connection k of type g, g = 1, ..., G, with the allocated transmission rate r_k be (r_k, T_g)-smooth.

Fig. 6: Synchronous frames with sizes T_2 and T_1 = 3T_2.

We refer to the packets of a type g connection, g = 1, ..., G, as type g packets, and require that g be indicated in the header of each packet.

The stop-and-go queueing in this case is based on the following rules:

A) A type g packet which has arrived at some link ℓ during a type g frame f does not become eligible for transmission by ℓ until f expires.

B) Any eligible type g packet, g = 2, 3, ..., G, has non-preemptive priority over packets of type g′ < g.

Theorem 1: Let the packet stream of each connection k of type g with the allocated transmission rate r_k be (r_k, T_g)-smooth upon entering the network. Assume that the network initially contains no packets, and that the stop-and-go queueing, as stated by the above rules A and B, is practiced at every network node. Furthermore, assume that the aggregate transmission rate allocated over each link ℓ to all type g connections, denoted by C_g^ℓ, satisfies the following:

Σ_{g=g0}^{G} C_g^ℓ ≤ C_ℓ − Γmax/T_{g0} ,    g0 = 2, ..., G ,    (11)

where Γmax is the maximum length of packets in the network. It then follows that:

i) Any type g packet that has arrived at some link ℓ during a type g frame f will receive service before the type g frame following f expires.

ii) The packet stream of each connection will maintain the original smoothness property throughout the network, i.e., for a connection of type g with the allocated transmission rate r_k, the packet stream maintains the (r_k, T_g)-smoothness throughout its path to the destination.

iii) A buffer space of Σ_{g=1}^{G} 2C_g^ℓ·T_g per link ℓ is sufficient to eliminate any chance of buffer overflow. This required buffer size is always less than or equal to 2C_ℓ·T_1.

The formal proof of the theorem is presented in the appendix. Before proceeding to the next discussion, however, we would like to see what the inequality constraint (11) implies with regard to the aggregate allocated rates C_g^ℓ, and why it is necessary. The term Γmax/T_{g0} in this inequality is a rate equivalent to the transmission of one packet of the maximum length per frame T_{g0}. Therefore this constraint requires that, for each link ℓ and for any g0 ≥ 2, a part of the link capacity equal to at least one maximum-size packet per frame T_{g0} should remain unallocated to the connections of type g0 or higher.

This is necessary because a type g0 frame could start while a packet of type g < g0 is being served. Since the priority assumption is on a non-preemptive basis, service of such a packet cannot be interrupted. This can reduce the effective bandwidth available for the transmission of types g ≥ g0 by at most Γmax/T_{g0}. Constraint (11) does not apply to g0 = 1, meaning that any unallocated part of the capacity can always be assigned to connections of type 1. Since one packet per frame normally accounts for a small percentage of the total link capacity, we conclude that the above requirement neither implies a significant waste of capacity nor imposes any important restriction on capacity allocations.

5 Realization of the Multi-Frame Stop-and-Go Queueing

Fig. 7 shows an architecture for the realization of the multi-frame stop-and-go queueing based on the FIFO concept. In this structure, at every link, there are G FIFO queues, g = 1, ..., G, one associated with each packet type. Packets of different types are separated by the demultiplexer (which may be part of the switch) and loaded into the corresponding FIFO queues.

In order to comply with rules A and B, the service controller in Fig. 7 functions as follows. At the beginning of each type g frame, it marks the present load in queue g as eligible. At the beginning of each type G frame (the smallest frame), it starts serving the eligible packets of queue G until done. At the beginning of each type G−1 frame (which always coincides with the beginning of one type G frame), after the eligible load of queue G is served, transmission of the eligible load of queue G−1 is started.

Fig. 7: Realization of the stop-and-go queueing for multi-size frames.

Fig. 8: A typical pattern of service shifts between the queues for the case of G = 2 and T_1 = 3T_2.

Notice that during each type G−1 frame, service of queue G−1 may be interrupted several times, on a non-preemptive basis, before its eligible load is transmitted. This is because each time a new type G frame starts, new packets may become eligible at queue G. Similarly, when the eligible load of queue G−1 has been served, service of queue G−2 starts. This procedure continues all the way down to queue 1. Once there is no eligible load at any queue, the server stays idle. A typical pattern of service shifts between the queues for the case of G = 2 is illustrated in Fig. 8.
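The eligibility bookkeeping and the priority order of rules A and B can be sketched as follows. This is an illustrative sketch under our own naming; per-packet service times and the frame timers that trigger frame_start are abstracted away.

```python
from collections import deque

class MultiFrameController:
    """Sketch of the Fig. 7 service controller for G frame sizes:
    queue g holds type-g packets; packets become eligible only at the
    start of their own type-g frame (rule A); a higher type g (smaller
    frame) is served before lower types (rule B)."""

    def __init__(self, G):
        self.queues = [deque() for _ in range(G + 1)]   # indices 1..G used
        self.eligible = [0] * (G + 1)

    def arrive(self, g, packet):
        self.queues[g].append(packet)       # not eligible until next type-g frame

    def frame_start(self, g):
        # beginning of a type-g frame: mark the present load of queue g eligible
        self.eligible[g] = len(self.queues[g])

    def next_packet(self):
        """Serve eligible packets from the highest type first; None if idle."""
        for g in range(len(self.queues) - 1, 0, -1):
            if self.eligible[g]:
                self.eligible[g] -= 1
                return self.queues[g].popleft()
        return None
```

With G = 2, an eligible type 2 packet is served before an eligible type 1 packet, and a type 2 packet arriving after its frame boundary waits for the next type 2 frame, reproducing the service pattern of Fig. 8.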

Similar to the single-frame case, the function of the service controller can be modified to account for asynchronous arriving and departing frames. Let τ̂_ℓ^g be the first multiple of T_g larger than or equal to τ_ℓ. Define

θ_ℓ^g = τ̂_ℓ^g − τ_ℓ = (−τ_ℓ) mod T_g ,    (12)

where θ_ℓ^g is the additional delay term that should be incorporated into link ℓ for type g packets, in order to synchronize the arriving and departing frames of type g over the link. This can be done by the following modification in the function of the service controller of Fig. 7: at the beginning of each type g frame, mark the load present in queue g. Then, θ_ℓ^g seconds later, designate this marked load as eligible for transmission.

Considering the extra buffer space needed due to this additional delay, the required buffer size can now be as big as Σ_{g=1}^{G} (2T_g + θ_ℓ^g)·C_g^ℓ. This is always less than 3T_1·C_ℓ.

6 Queueing Delay and Bandwidth Allocation in the Multi-Frame Case

In order to illustrate the trade-off between bandwidth allocation and queueing delay in the multi-frame case, it is again helpful to consider the case where all packets have the same length Γ. The bandwidth can be allocated to a connection of type g in incremental steps of one packet per T_g seconds, or

Δr_g = Γ/T_g bits/sec .    (13)

The delays associated with the packets of a type g connection are the same as if the stop-and-go queueing were practiced on a single-frame basis with the frame size equal to T_g. This can be seen by considering properties i and ii of theorem 1, and noticing that for packets of type g, these properties are exactly the same as those previously stated for the single-frame case with T = T_g.

Consequently, the end-to-end delay of a packet of a type g connection can be expressed as

D′_g = D_g + d_g = τ + Q_g + d_g ,    (14)

where D_g is the total connection delay, d_g is the packet delay jitter, and Q_g is the connection queueing delay. Furthermore,

−T_g < d_g < T_g    (15)

and

H·T_g ≤ Q_g < 2H·T_g ,    (16)

where H is the number of hops traversed by the connection. We summarize the above results by concluding that

Δr_g·Q_g = α·H·Γ ,    1 ≤ α < 2 .    (17)

This result is similar to the previous result in section 3 (Eq. (9)), except that now the coupling between the queueing delay and the incremental steps of bandwidth allocation, as expressed by (17), applies separately to each category of connections, rather than globally to all connections. In other words, it is now possible to provide small queueing delays for certain types of connections, and allocate bandwidth in fine segments to other connections.

As a numerical example, consider a network with link capacities of 150 Mbits/sec and a packet size of Γ = 400 bits, and assume that only two frame sizes, T_1 = 16 msec and T_2 = 1 msec, are employed. The incremental steps of bandwidth allocation for type 1 and type 2 connections will be Δr_1 = 25 kbits/sec and Δr_2 = 400 kbits/sec. The maximum queueing delay for a type 1 or type 2 connection with H = 5 will be Q_1 = 160 msec and Q_2 = 10 msec, respectively. This suggests that continuous bit stream oriented traffic with high vulnerability to end-to-end delay should be set up in this network as type 2 connections, while low speed data communication services can be designated as type 1 connections. Furthermore, in regard to constraint (11), one packet per type 2 frame, or less than 0.3% of the capacity of each link, should remain unallocated to type 2 connections.
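The figures in this example follow directly from Eqs. (13) and (16) and constraint (11); a short script to verify them:

```python
C, GAMMA, H = 150e6, 400, 5           # link capacity (bit/s), packet bits, hops
T1, T2 = 16e-3, 1e-3                  # the two frame sizes (seconds)

dr1, dr2 = GAMMA / T1, GAMMA / T2     # allocation granularity per type, Eq. (13)
q1, q2 = 2 * H * T1, 2 * H * T2       # maximum queueing delays, Eq. (16)
reserve2 = (GAMMA / T2) / C           # fraction left unallocated to type 2, Eq. (11)
```

This reproduces Δr_1 = 25 kbits/sec, Δr_2 = 400 kbits/sec, Q_1 = 160 msec, Q_2 = 10 msec, and a type 2 reservation of under 0.3% of the link capacity.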

Finally we look at the buffer requirements in the above example. The maximum buffer requirement per link ℓ is:

B_ℓ(max) = (2T_1 + θ_ℓ^1)·C_ℓ < 3T_1·C_ℓ = 7.2 Mbits .    (18)

However, if it is anticipated that no more than 10% of the capacity of link ℓ, i.e., 15 Mbits/sec, ever needs to be allocated to type 1 connections, the maximum required buffer space would be:

B′_ℓ(max) = 0.1C_ℓ·(2T_1 + θ_ℓ^1) + 0.9C_ℓ·(2T_2 + θ_ℓ^2) < 3(0.1C_ℓ·T_1 + 0.9C_ℓ·T_2) = 1.125 Mbits .    (19)

Clearly, if the smaller buffer is provided at link ℓ, care must be taken to ensure that at most 10% of the link capacity is allocated to type 1 traffic.

7 Integration of Framing Strategy with Other Congestion Control Policies

The benefits of congestion-free and bounded-delay transmission provided by the framing strategy are basically accomplished at the cost of a strict admission policy to enforce the smoothness property on packet arrivals, i.e., admitting only r_k·T bits per connection k in each frame of size T.

Since an averaging period of only a fraction of a second is often insufficient to smooth out the statistical fluctuations of traffic sources, this admission policy practically requires that capacity be allocated based on the peak rate of the connections. If the framing strategy is uniformly applied to all of the services in a broadband network, low utilization of the transmission capacity is inevitable. In this section we alleviate this problem by combining the framing strategy with other traffic management policies.

Fig. 9: Integration of the stop-and-go queueing with other queueing disciplines.

Consider the queueing structure of Fig. 9. Here, a new category of packets, which we may refer to as type 0 traffic, is loaded into a new queue, queue 0.

The function of the service controller is similar to that in Fig. 7, except that now queue 0 has replaced the idle position of the server: during any period of time that the other queues are not served, the server is connected to queue 0. This is exactly the time that the server would have stayed idle in the configuration of Fig. 7, and it comprises the following two parts: a) the fraction of the frames corresponding to any part of the transmission bandwidth not allocated to the traffic of types g ≥ 1; b) the fraction of the bandwidth allocated to the traffic of types g ≥ 1 which is being underutilized by the corresponding sources.
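The modification amounts to replacing the idle branch of the service controller with queue 0; a sketch under the same abstractions and hypothetical naming as before:

```python
from collections import deque

class ControllerWithType0:
    """Sketch of Fig. 9: whenever no queue g >= 1 holds an eligible
    packet, the server is connected to queue 0 instead of idling."""

    def __init__(self, G):
        self.queues = [deque() for _ in range(G + 1)]   # queue 0 .. queue G
        self.eligible = [0] * (G + 1)                   # eligible[0] unused

    def arrive(self, g, packet):
        self.queues[g].append(packet)

    def frame_start(self, g):
        self.eligible[g] = len(self.queues[g])          # rule A, as before

    def next_packet(self):
        for g in range(len(self.queues) - 1, 0, -1):    # rules A and B, as before
            if self.eligible[g]:
                self.eligible[g] -= 1
                return self.queues[g].popleft()
        if self.queues[0]:                              # otherwise serve type 0
            return self.queues[0].popleft()
        return None
```

Note that a type g packet waiting for its frame boundary does not block type 0 service: until the packet becomes eligible, the server keeps draining queue 0, which is precisely how the unused and underutilized bandwidth is recovered.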

Theorem 2: With the inclusion of type 0 traffic in the network, theorem 1 remains valid given that constraint (11) also applies to g0 = 1, namely:

Σ_{g=g0}^{G} C_g^ℓ ≤ C_ℓ − Γmax/T_{g0} ,    g0 = 1, ..., G .    (20)

The proof is presented in the appendix.

In the absence of the type 0 category, application of the framing strategy to bursty connections would lead to inevitable underutilization of the network, since bandwidth must be allocated according to the peak rates. The provision of type 0 services helps to alleviate this problem in two distinct ways. First, any bursty traffic without stringent loss and delay requirements can now be included in the type 0 category and need not be serviced according to the framing strategy. Secondly, in regard to the remaining bursty connections of type g ≥ 1, the unused portion of the allocated bandwidth can now be utilized by the type 0 traffic.

In this integrated environment, the framing strategy is best suited for traffic with negligible burstiness, real-time traffic, and other services with stringent packet loss and delay requirements, while services with less sensitivity to loss and delay should be included in the type 0 category.

It is important to notice that the new category of traffic may be regulated according to any desired policy or transmitted in any order, and can itself comprise different classes of connections, as well as datagrams, each being subject to a different congestion control policy. The presence of packets in this category does not cause any disruption in the proper functioning of the framing strategy as long as the rules of stop-and-go queueing are followed.

8 Summary

A framework for congestion management in broadband integrated services packet networks was presented. This framework allows for guaranteed delivery of delay-sensitive real-time traffic, as well as efficient transmission of highly bursty traffic.

Time framing and stop-and-go queueing were used as the essential mechanisms to protect real-time traffic against packet loss and excess delay. In order to retain bandwidth allocation flexibility, stop-and-go queueing on a multiple-frame basis was developed.

Appendix

Proof of Theorem 1: Let us first define $t_m^g \triangleq m \cdot T_g$, where $g$ and $m$ are integers, $g = 1, \ldots, G$ and $m = -\infty, \ldots, \infty$. Without loss of generality we assume that each interval $(t_{m-1}^g, t_m^g]$ coincides with one type $g$ frame. We prove Theorem 1 by first establishing the following lemma:

Lemma: The following conditions hold true for any $g = 1, \ldots, G$ and $m = -\infty, \ldots, \infty$: a) Any type $g$ packet that has arrived at any link $\ell$ on or prior to $t_{m-1}^g$ will be completely transmitted by $t_m^g$. b) For any link $\ell$ and any connection $k$ of type $g$ using it, the aggregated length of the packets belonging to $k$ that are transmitted by $\ell$ during frame $(t_{m-1}^g, t_m^g]$ is bounded by $r_k \cdot T_g$.

Proof of the Lemma: We prove this lemma by applying induction with respect to two parameters, $m$ and $g$. First, in step 1, we focus on the highest priority traffic ($g = G$) and use induction on $m$ to show that the lemma is true for $g = G$ and any $m$. Next, in step 2, we apply induction on $g$ to establish the lemma for any $g$ and $m$.

Step 1 — $g = G$: We show that the lemma is correct for $g = G$ by applying induction with respect to the variable $m$. Let the lemma hold true for $g = G$ and $m \le M$, where $M$ is some arbitrary integer.

Consider any link $\ell$. First we argue that the aggregated length of the packets of any connection $k$ of type $G$ which arrive at $\ell$ during $(t_{M-1}^G, t_M^G]$ is bounded by $r_k \cdot T_G$. For those connections that have $\ell$ as the first network link along the path to their destination, this is true due to the smoothness property assumed for packet streams upon their admission to the network. For the other type $G$ connections this claim is true since we have assumed that 1) condition b holds true for $g = G$ and any $m \le M$, and 2) the sum of the propagation and processing delays of any link is a multiple of $T_G$.

Next we know from condition a that any type $G$ packet which has arrived at $\ell$ on or before $t_{M-1}^G$ is completely served by $t_M^G$. We also know from Rule A that packets which arrive at $\ell$ after $t_M^G$ are not eligible for transmission during $(t_M^G, t_{M+1}^G]$. Therefore, the only type $G$ packets eligible for service by link $\ell$ during $(t_M^G, t_{M+1}^G]$ are those that have arrived during $(t_{M-1}^G, t_M^G]$. The aggregated length of these packets, according to 1) the previous arguments and 2) constraint (11), is bounded by:

$$\sum_{\substack{\text{all type } G \text{ connections } k \\ \text{traversing link } \ell}} r_k \cdot T_G \;\le\; \left(C_\ell - \frac{\tau_{\max}}{T_G}\right) T_G \;=\; C_\ell \left(T_G - \frac{\tau_{\max}}{C_\ell}\right). \qquad (21)$$

Therefore, transmission of these packets by link $\ell$ takes no more than $T_G - \tau_{\max}/C_\ell$ seconds. On the other hand, during any type $G$ frame, service of eligible type $G$ packets starts no later than $\tau_{\max}/C_\ell$ seconds after the beginning of the frame, since these packets have non-preemptive priority over the rest of the traffic. We conclude that at time $t_{M+1}^G$ all of the type $G$ packets that have arrived at $\ell$ on or before $t_M^G$ are completely served. Furthermore, the aggregated length of the packets of any connection $k$ of type $G$ that are transmitted by $\ell$ during $(t_M^G, t_{M+1}^G]$ is bounded by $r_k \cdot T_G$. Since the choice of link $\ell$ was arbitrary, we conclude that conditions a and b also hold true for $g = G$ and $m = M+1$.
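As a quick numerical sanity check of this timing argument: the latest service start of $\tau_{\max}/C_\ell$ seconds plus the service bound of $T_G - \tau_{\max}/C_\ell$ seconds from (21) together never exceed one type $G$ frame. The link parameters below are hypothetical, not values from the paper.

```python
# Sanity check of the step 1 timing argument, with illustrative values.
C_l = 150e6        # link capacity, bits/s (assumed)
tau_max = 4000.0   # maximum packet length, bits (assumed)
T_G = 0.25e-3      # type G frame size, seconds (assumed)

latest_start = tau_max / C_l           # non-preemptive priority delay
service_bound = T_G - tau_max / C_l    # service-time bound from Eq. (21)
# Together they fit within a single frame (small tolerance for float rounding).
fits_in_frame = latest_start + service_bound <= T_G + 1e-12
```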

Finally, since the network initially contains no packets, these conditions are indeed true for $g = G$ and any $m$ less than or equal to some integer $M_0$.

The lemma, for the case of g=G, follows by induction.

Step 2 — $1 \le g < G$: We apply induction with respect to the variable $g$ to prove the lemma for $1 \le g < G$. Take any integer $g_0$, $1 \le g_0 < G$. Assume that the lemma holds true for all $g$, $g_0 < g \le G$, and all $m$. We prove that it then must hold true for $g = g_0$ and all $m$.

To show that the lemma is true for $g = g_0$ and all $m$, we now apply induction on $m$, similar to what was done in step 1. Let the lemma hold true for $g = g_0$ and any $m$ less than or equal to some integer $M$. Consider an arbitrary link $\ell$. Following an argument similar to step 1, it can be concluded that 1) the aggregated length of the packets of any connection $k$ of type $g_0$ that arrive at $\ell$ during $(t_{M-1}^{g_0}, t_M^{g_0}]$ is bounded by $r_k \cdot T_{g_0}$, and 2) the only type $g_0$ packets eligible for transmission by $\ell$ during $(t_M^{g_0}, t_{M+1}^{g_0}]$ are those that arrived at $\ell$ during $(t_{M-1}^{g_0}, t_M^{g_0}]$.

From this result, it first follows that the aggregated length of the packets of any connection $k$ of type $g_0$ that are transmitted by $\ell$ during $(t_M^{g_0}, t_{M+1}^{g_0}]$ is bounded by $r_k \cdot T_{g_0}$, i.e., condition b of the lemma is valid for $g = g_0$ and $m = M+1$. Secondly, the aggregated length of type $g_0$ packets that are eligible for transmission during $(t_M^{g_0}, t_{M+1}^{g_0}]$ is bounded by

$$\sum_{\substack{\text{all type } g_0 \text{ connections } k \\ \text{traversing link } \ell}} r_k \cdot T_{g_0} \;=\; C_\ell^{g_0} \cdot T_{g_0}. \qquad (22)$$

Since for any $g > g_0$, 1) condition b by assumption holds true for all $m$, and 2) an integral number of type $g$ frames cover the type $g_0$ frame $(t_M^{g_0}, t_{M+1}^{g_0}]$, we conclude that the aggregated length of all of the type $g$ packets, $g \ge g_0$, that are, or become, eligible for service by $\ell$ during $(t_M^{g_0}, t_{M+1}^{g_0}]$ is bounded by

$$T_{g_0} \sum_{g=g_0}^{G} C_\ell^g. \qquad (23)$$

For the case of $g_0 > 1$, according to constraint (11) this bound is less than or equal to

$$T_{g_0} \left(C_\ell - \frac{\tau_{\max}}{T_{g_0}}\right) \;=\; C_\ell \left(T_{g_0} - \frac{\tau_{\max}}{C_\ell}\right). \qquad (24)$$

This means that the service of eligible type $g \ge g_0$ packets during $(t_M^{g_0}, t_{M+1}^{g_0}]$ takes no longer than $T_{g_0} - \tau_{\max}/C_\ell$ seconds. Since these packets have non-preemptive priority over packets of lower type, their transmission starts no later than $\tau_{\max}/C_\ell$ seconds after $t_M^{g_0}$, and then, as long as any eligible type $g_0$ packet is left, the server does not start transmitting a packet of lower type. We conclude that all of the type $g_0$ packets will be transmitted before the frame $(t_M^{g_0}, t_{M+1}^{g_0}]$ expires. This is the same as condition a for $g = g_0$ and $m = M+1$.

For the case of $g_0 = 1$, there is a slight difference in the argument. Since constraint (11) does not apply to $g_0 = 1$, we can only say that the aggregated length of the packets of type $g \ge g_0 = 1$ (or indeed of any type) that are, or become, eligible for transmission during $(t_M^1, t_{M+1}^1]$ is bounded by

$$T_1 \sum_{g=1}^{G} C_\ell^g \;\le\; C_\ell \cdot T_1. \qquad (25)$$

Transmission of these packets takes at most $T_1$ seconds and, given that there is some eligible type 1 packet, starts right at $t_M^1$. Therefore, all of the eligible type 1 packets will receive service by $t_{M+1}^1$.

In summary, we have argued that if the lemma is true for 1) $g > g_0$ and all $m$, and 2) $g = g_0$ and $m \le M$, it then has to be true for $g = g_0$ and $m = M+1$. Since the network is initially empty of packets and the lemma holds true for $m \le M_0$, where $M_0$ is some integer, we apply induction on $m$ to conclude that the lemma is indeed true for $g = g_0$ and all $m$. Next, since we showed in step 1 that the lemma is true for $g = G$, by applying induction on $g$ it follows that the lemma is valid for any $1 \le g \le G$ and any $m$. This completes the proof of the Lemma.

Coming back to the proof of Theorem 1, we notice that parts i and ii of the theorem are actually equivalent to parts a and b of the lemma. To establish part iii, consider an arbitrary link $\ell$, an arbitrary traffic type $g$, and some time instant $t_0$. According to part i, at $t_0$, the only type $g$ packets that are being served or waiting for service at $\ell$ are those received during the current or previous type $g$ frame. The aggregated length of these packets is limited to $2\, C_\ell^g \cdot T_g$ since, according to part ii, packet streams maintain their smoothness property upon arrival at $\ell$. It follows that the total buffer space required at $\ell$ is:

$$\sum_{g=1}^{G} 2\, T_g\, C_\ell^g \;\le\; 2\, T_1 \sum_{g=1}^{G} C_\ell^g \;<\; 2\, C_\ell \cdot T_1. \qquad (26)$$

Proof of Theorem 2: The proof of Theorem 2 is similar to the proof of Theorem 1, except for a small change in regard to the case of $g_0 = 1$ in step 2 of the Lemma. Here, at time $t_M^1$, transmission of eligible packets of type $g \ge 1$ may not start right away; it may have to be postponed by up to $\tau_{\max}/C_\ell$ seconds, since a type 0 packet could be receiving service at $t_M^1$. However, in view of the new constraint (20), Eq. (25) should also be changed as follows:

$$T_1 \sum_{g=1}^{G} C_\ell^g \;\le\; C_\ell \left(T_1 - \frac{\tau_{\max}}{C_\ell}\right). \qquad (27)$$

This guarantees that all of the eligible type $g$ packets, $g \ge 1$, will be transmitted before the frame $(t_M^1, t_{M+1}^1]$ expires.

Acknowledgment

The author would like to thank Ernst W. Biersack, Mario P. Vecchi, and Mark Garrett for helpful comments.

REFERENCES

1. R. Kahn and W. Crowther, "Flow Control in a Resource-Sharing Computer Network", IEEE Trans. Commun., June 1972, pp. 539-546.

2. M. Gerla and L. Kleinrock, "Flow Control: A Comparative Survey", IEEE Trans. Commun., 1980, pp. 553-574.

3. S. J. Golestani, A Unified Theory of Flow Control and Routing in Data Communication Networks, PhD thesis, MIT, Dept. of Electrical Engineering and Computer Science, Cambridge, MA, 1980.

4. V. Jacobson, "Congestion Avoidance and Control", Proceedings of SIGCOMM Symposium, 1988, pp. 314-329.

5. S. J. Golestani, "Congestion-Free Transmission of Real-Time Traffic in Packet Networks", Proceedings of INFOCOM, San Francisco, California, June 1990.
