
S. Jamaloddin Golestani
Bell Communications Research
445 South Street, Morristown, NJ 07960-1910

ABSTRACT

A framework for congestion management in integrated services packet networks based on a particular service discipline, called stop-and-go queueing, is proposed. In this framework, loss-free and bounded-delay transmission is provided to the class of traffic with stringent delay and loss requirements, e.g., real-time traffic, while the bursty traffic without such requirements is treated on a different basis to achieve high transmission efficiency. Loss-free and bounded-delay transmission is accomplished by means of an admission policy which ensures smoothness of the traffic at the network edge, and the stop-and-go queueing which maintains the traffic smoothness throughout the network. Both the admission policy and the stop-and-go queueing strategy are based on a time framing concept, addressed in a previous paper. This concept is further developed here to incorporate several frame sizes, thereby providing the necessary flexibility in accommodating the delay and throughput requirements of different end-to-end connections on an as-needed basis.

1 Introduction

The problem of congestion control, or more generally traffic management, in packet switching networks has been the subject of extensive research over the past two decades. A variety of congestion control strategies have been proposed over the past years and some, more or less successfully, applied to conventional data communication networks [1] [2] [3].

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.
© 1990 ACM 089791-405-8/90/0009/0008...$1.50
Nevertheless, the behavior of packet networks in the presence of congestion, and the proper ways of handling traffic in order to make the worst-case loss and delay performance more reliable or predictable, are not yet sufficiently understood [4]. This difficulty partly stems from the uncertainties involved in modeling the statistical behavior of many types of traffic sources. But, more importantly, it is due to the complicated way in which different traffic streams interact with each other within a packet network.

Congestion control and traffic management in a broadband integrated services environment is further complicated by the high speed of transmission and by the diverse mix of traffic types and service requirements encountered in such networks. High speed of transmission calls for simplicity of traffic management algorithms in terms of the processing power required for their execution. Over the recent years, the increase in data processing speeds has not kept up with the fast growth of data transmission speeds. Therefore, packet processing time in the network nodes has more and more become the scarce resource, and the processing required for any control function should be kept to a minimum. Most of the strategies proposed for congestion control in conventional data networks are

acknowledgment-based. This means that short-term feedback information (e.g., acknowledgments) from the destination node or some intermediate nodes is used to regulate incoming traffic and to decide about admittance of new packets into the network, or about forwarding packets from one node to the next. At broadband transmission rates, packet duration, or the time required to serve a packet by the link, is very short. Therefore, propagation delays, when measured in terms of the packet duration, are much higher than in narrowband networks. Consequently, acknowledgment-based control strategies will tend to work more slowly and may be unable to keep up with the pace of events occurring in the network.

Services that may be encountered in an integrated services network range from voice and video communications to different forms of data services, such as interactive data communications and file transfers. These represent a wide variety of both traffic characteristics (e.g., average packet rate and burstiness) and service requirements (e.g., end-to-end delay, delay jitter, packet loss probability, call blocking probability, and error rate). In particular, real-time traffic, e.g., voice and video, is delay-sensitive and must be delivered to the destination within a short period of time. This makes excess delays unacceptable and packet loss recovery through retransmission ineffective. Obviously the tasks of resource management and congestion control are more involved in this integrated environment than in a conventional data network. Here, control algorithms, besides having to deal with a wide range of traffic characteristics, need to be more effective in yielding predictable network behavior and must be more flexible in accommodating different service requirements.
In a previous paper [5], we developed a framing strategy for congestion control, which has several desirable features: it maintains loss-free communication, it provides bounded end-to-end delay, and it is simple to implement. These features make the strategy an attractive solution for the transmission of real-time traffic and other forms of time-critical information in broadband packet networks. The strategy is based on a particular service discipline at the nodes of the network, called stop-and-go queueing. Our goal in this memorandum is to build upon the previous stop-and-go queueing and framing strategy to obtain a more general congestion management framework.

In section 2, after a brief review of the framing strategy, its delay performance is discussed. Then in section 3, the determining factors in the choice of the frame size are considered and the case for using multiple frame sizes is presented. In section 4, we relax the assumption of a single frame size and obtain a generalized framing strategy based on multiple frame sizes. The realization and performance of the multiple framing strategy is discussed in sections 5 and 6. Finally, the problem of integrating the framing strategy with other congestion control schemes is considered in section 7. We determine a queueing structure by which our framing strategy and alternative congestion management schemes for less demanding traffic types can be incorporated into the same network.

2 Review of the Framing Strategy [5]

The strategy is composed of two parts: a packet admission policy imposed per connection at the source node, and a particular service discipline at the switching nodes, named stop-and-go queueing. Central to both parts of the strategy is the notion of time frames, hence the name framing strategy. We start from a reference point in time, common to all nodes, and divide the time axis into periods of some constant length T, each called a frame (Fig. 1).
In general, it is possible to have different reference points (or time-origins) for different links or nodes [5]. However, in order to simplify the present discussion, we assume a common reference point network-wide.

2.1 Packet Admission Policy

We define a stream of packets to be (r,T)-smooth if during each frame of length T the arrived packets collectively have no more than r·T bits. Equivalently, if only packets of a fixed size Γ are encountered, a stream of packets is (r,T)-smooth when the number of received packets during each frame is bounded by r·T/Γ.

The packet admission part of the strategy is based on this definition of smoothness. After a transmission rate r_k is allocated to a connection k, its packet arrival to the network is required to be (r_k, T)-smooth. In other words, during each frame, the aggregated length of the received packets from connection k should not exceed r_k·T bits, and any new packet which violates this limit is not admitted until the next frame starts. Alternatively, we may require that the allocated rate r_k be large enough so that the stream of packets arriving to the network always maintains the (r_k, T)-smoothness property.

2.2 Stop-and-Go Queueing

The above packet admission policy guarantees that the traffic stream of each connection k, with an allocated rate r_k, is (r_k, T)-smooth upon admission to the network. If this property continues to hold as


Fig. 1: Dividing the time axis into frames of length T.

Fig. 2: Departing and arriving frames of link ℓ.

the packet stream of each connection arrives to the intermediate switching nodes, then the problem of congestion is indeed resolved. Unfortunately, this is often not the case. We have shown [5] that in a network with conventional first-in first-out (FIFO) queueing, as packets of a connection proceed to the destination node, they tend to cluster together and form longer and longer bursts, which violates the original smoothness property. Formation of long packet bursts can cause serious degradation in the delay and loss performance of the network, and therefore must be avoided.

The stop-and-go queueing is an alternative to the conventional FIFO queueing that solves this problem and guarantees that once the (r_k,T)-smoothness is enforced on all connections k at their source nodes, the property will continue to hold at any subsequent switching nodes. To facilitate the description of this queueing scheme, let us first introduce the notion of departing and arriving frames on a link. Over each link, we view the time frames as traveling with the packets from one end of the link to the other end, and on to the processing device at the receiving end. Therefore, if we denote by τ_ℓ the sum of the propagation delay plus the processing delay at the receiving end of link ℓ, the frames at the receiving end (arriving frames) will be τ_ℓ seconds behind the corresponding frames at the transmitting end (departing frames), as shown in Fig. 2. Let us consider a special case in which τ_ℓ, for each link ℓ, is both a constant and a multiple of T, i.e., τ_ℓ = m·T for some integer m. In this case, at any node, all the arriving frames over different incoming links will be synchronous with the departing frames over outgoing links (Fig. 3).
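The admission check of section 2.1 can be sketched in a few lines of Python. This is only an illustration; the class name and interface are ours, not part of the paper:

```python
# Sketch of the (r, T)-smoothness admission check of section 2.1.
# The class name and interface are illustrative, not from the paper.

class SmoothnessRegulator:
    """Admit at most r*T bits of a connection's traffic per frame of length T."""

    def __init__(self, r, T):
        self.r = r              # allocated rate, bits/sec
        self.T = T              # frame length, sec
        self.frame = None       # index of the current frame
        self.bits_in_frame = 0

    def admit(self, arrival_time, packet_bits):
        """True if the packet may enter the network in the current frame."""
        frame = int(arrival_time // self.T)
        if frame != self.frame:             # a new frame has started
            self.frame, self.bits_in_frame = frame, 0
        if self.bits_in_frame + packet_bits <= self.r * self.T:
            self.bits_in_frame += packet_bits
            return True
        return False                        # held back until the next frame

reg = SmoothnessRegulator(r=400_000, T=0.001)   # 400 kbit/s, 1 ms frames
print(reg.admit(0.0000, 400))    # True: fills the 400-bit frame budget
print(reg.admit(0.0005, 400))    # False: budget for this frame exhausted
print(reg.admit(0.0012, 400))    # True: a new frame has started
```

A packet rejected here is simply held at the source and re-offered when the next frame begins.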
In a practical case, we may introduce some additional delay into each link in order to make τ_ℓ a multiple of T and to synchronize the departing and arriving frames. The stop-and-go queueing discipline in the above synchronous case is based on the following simple rule: transmission of a packet which has arrived at any link ℓ during a frame f should always be postponed until the beginning of the next frame on the outgoing link.

Fig. 3: When packets arriving on each frame become eligible for transmission.

This rule is graphically illustrated in Fig. 3, which shows when packets that arrive during each frame become eligible to receive service. It can be proved [5] that if this rule is implemented in a network with the foregoing admission policy, then:

i) Any packet that has arrived at some link ℓ during a frame f will receive service before the frame following f expires.

ii) The packet stream of each connection will maintain the original (r,T)-smoothness property throughout the network.

iii) A buffer space of at most 2·C_ℓ·T per link ℓ is sufficient to eliminate any chance of buffer overflow, where C_ℓ is the capacity of the link.

2.3 Realization of the Stop-and-Go Queueing

The stop-and-go queueing does not correspond to a specific service discipline, and in particular does not require that packets be served on a FIFO basis. Nevertheless, complying with the FIFO rule is advantageous both because it simplifies the realization and because packets will always be delivered in sequence. Incorporation of the FIFO rule into the stop-and-go queueing results in what we may call a delayed-FIFO queueing. Fig. 4 illustrates two possible realizations of this strategy for the case of synchronous departing and arriving frames. In Fig. 4.a we have a double-queue structure in which during each frame one FIFO queue is loaded in, while the other one is served from. At the end of

Fig. 4: Realization of the stop-and-go queueing strategy. a) A double-queue structure. b) A single-queue structure.

each frame the order is reversed. This arrangement ensures that packets received in a particular frame are not served during that same frame. Fig. 4.b shows how the same thing can be accomplished using a single FIFO queue. Here, a service controller, at the beginning of each frame, marks the present load in the queue as eligible and then connects the transmission facility to the queue just for the time needed to serve the eligible load. After that, the service is interrupted until the beginning of the next frame, so that any packet which is received during the current frame, and therefore is not yet eligible, does not get transmitted before the frame is expired.

In practical cases where the sum of propagation and processing delays of a link is neither negligible nor a multiple of T, the simplest way to envisage the stop-and-go queueing is by introducing an additional delay into each link, thereby making the total delay of the link a multiple of T and synchronizing the arriving and departing frames. Denoting this added delay for link ℓ by θ_ℓ and defining τ̂_ℓ as the first multiple of T larger than or equal to τ_ℓ, we notice that

θ_ℓ = τ̂_ℓ − τ_ℓ = (−τ_ℓ) mod T ,   (1)

and

0 ≤ θ_ℓ < T .   (2)
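The double-queue realization of Fig. 4.a, together with the added-delay computation of Eq. (1), can be sketched as follows. The function names and the frame-indexed interface are ours; this is an illustration, not the paper's specification:

```python
# Sketch of the double-queue realization of Fig. 4.a, plus the added
# link delay of Eq. (1). Function names and interfaces are ours.
from collections import deque

def added_delay(tau, T):
    """theta = (-tau) mod T: padding that makes the link delay a multiple of T."""
    return (-tau) % T

def stop_and_go(arrivals, num_frames):
    """arrivals: dict frame index -> packets reaching the link in that frame.
    Returns dict frame index -> packets transmitted during that frame."""
    loading, serving = deque(), deque()
    served = {}
    for f in range(num_frames):
        loading, serving = serving, loading   # swap the two queues' roles
        loading.extend(arrivals.get(f, []))
        served[f] = list(serving)             # only last frame's load is sent
        serving.clear()
    return served

# Packets arriving during frame f go out during frame f+1, so an
# (r,T)-smooth stream leaves the link (r,T)-smooth as well:
print(stop_and_go({0: ["a", "b"], 1: ["c"]}, 3))
# {0: [], 1: ['a', 'b'], 2: ['c']}
print(round(added_delay(0.0037, 0.001), 6))   # 0.0003 s of padding
```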

These packets become eligible for transmission θ_ℓ seconds later. So transmission should be resumed θ_ℓ seconds after the end of f, just for the period of time required to serve the eligible load. Considering the extra buffer space needed due to this additional delay, the required buffer size can now be as big as (2T + θ_ℓ)·C_ℓ. This is always less than 3·C_ℓ·T. In conclusion, the stop-and-go queueing may be implemented with a FIFO queueing structure and some simple additions to the service controller at each link. We emphasize that the time instants of interrupting or resuming service of each link are determined locally, and no exchange of information between nodes is required for this purpose.

3 Impact of Frame Size on Queueing Delay and Bandwidth Allocation

The foregoing packet admission and queueing policy, besides eliminating buffer overflows, leads to an attractive property in terms of packet delays. It guarantees that all packets of a given connection undergo the same amount of delay in the network, except for a small jitter. In other words, the end-to-end delay of a packet of a given connection may be represented as:

D' = D + d ,   (3)

where D, defined as the connection delay, is constant for all packets of the connection, and d, the delay jitter, varies from packet to packet. The delay jitter is limited to the size of a frame:

−T < d < T .   (4)

The connection delay consists of two parts:

D = τ + Q ,

(5)

where τ is the sum of the end-to-end propagation and processing delays and Q represents the total queueing delay of the connection. Furthermore, Q is bounded as follows:

H·T ≤ Q < 2H·T ,   (6)

where H is the number of links traversed by the connection. These results are summarized in Fig. 5, which illustrates a typical distribution of packet delays and the parameters involved. The end-to-end packet delay is constant except for a small delay jitter. The constant delay term D is greater than the total propagation and processing delay τ by Q, which is no larger than 2H·T and no smaller than H·T. Also illustrated in this figure is a typical distribution of the packet delays in a network with conventional

Fig. 5: Comparison of typical delay distributions in a packet network with the framing strategy and one with conventional FIFO queueing. The shaded area roughly corresponds to the packet loss probability.

FIFO queueing. The delay in such a network can be smaller on the average than that of the framing strategy. However, it is distributed over a wide range, which makes the behavior of the network rather unpredictable and subject to buffer overflow and packet loss. We conclude that the delay performance resulting from the framing strategy is attractive in cases where the term Q, the total queueing delay of the connection, is acceptable.

As mentioned before, the total queueing delay of the connection, Q, is a fixed value between H·T and 2H·T, where H is the number of links traversed by the connection. On the other hand, the required buffer space per link ℓ for providing congestion-free communication is at most (2T + θ_ℓ)·C_ℓ, which is almost proportional to T. Therefore, by choosing a sufficiently small frame size T, one could arbitrarily reduce queueing delays as well as buffer requirements of the network. However, a small frame size T comes at the cost of reduced flexibility in bandwidth allocation over the links of the network, which can be done only in incremental steps of one packet per frame. In order to study the trade-off between queueing delays and flexibility in bandwidth allocation, we consider the case where all packets have a fixed length Γ. In this case the incremental step of bandwidth allocation, Δr, is:

Δr = Γ/T bits/sec .   (7)

For a given connection which traverses H hops, the queueing delay can be expressed as

Q = α·H·T ,   (8)

where α is some constant between 1 and 2, depending on the source-destination path of the given connection. It follows that:

Δr·Q = α·H·Γ ,  1 ≤ α < 2 .
(9)

This equation clearly states that for a fixed source-destination route and a fixed packet size, Δr and Q cannot be simultaneously decreased; a reduction in one leads to a proportional increase of the other. As a numerical example, for a packet length of Γ=400 bits and a connection with H=5 and α=1.6:

Δr·Q = 3200 bits .   (10)

Therefore, if T is chosen to yield a queueing delay of 8 msec for this connection (T=1 msec), Δr becomes 400 kbits/sec. On the other hand, if a bigger T is chosen to reduce Δr to 8 kbits/sec (T=50 msec), the queueing delay becomes 400 msec.

4 Stop-and-Go Queueing with Multiple Frame Sizes

The foregoing discussion shows that in order to have small queueing delays (on the order of a few milliseconds) for certain types of connections, and still be able to allocate bandwidth to other connections in small increments (of a few kilobits per second), more than one frame size is needed. In this section a generalization of the framing strategy is presented, which allows the use of more than one frame size.

We consider G frame sizes T_1, T_2, ..., T_G, and assume that each T_g is a multiple of T_{g+1}, i.e., T_g = K_g·T_{g+1}, g=1,...,G−1, for some integer K_g. Again, we start from a reference point in time, common to all nodes, and divide the time axis into periods of length T_g, each called a type g frame. We repeat this for every value of g, starting from the same reference point. This is illustrated in Fig. 6 for the case of G=2 and T_1 = 3T_2. Frames of any type are viewed as traveling with the packets over the links. Again, we initially consider a case where τ_ℓ, for each link ℓ, is a multiple of T_1 (and therefore a multiple of any T_g), so that the departing and arriving frames of each type become synchronous. Every connection in the network is set up as a type g connection for some g=1,2,...,G, in which case it is associated with the frame size T_g. The admission policy requires that the packet arrival of any connection k of type g, g=1,...,G, with the allocated transmission rate r_k be (r_k, T_g)-smooth.
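The single-frame trade-off of Eqs. (7)-(10) can be checked numerically; the helper names below are ours, and the figures reproduce the example of section 3:

```python
# Numeric check of Eqs. (7)-(10): Delta_r = Gamma/T and Q = alpha*H*T,
# so the product Delta_r * Q = alpha*H*Gamma does not depend on the
# frame size T. Helper names are ours, not the paper's.

def delta_r(gamma, T):
    """Bandwidth-allocation step (bits/sec) for packet length gamma bits."""
    return gamma / T

def q_delay(alpha, H, T):
    """Connection queueing delay (sec) over H hops, 1 <= alpha < 2."""
    return alpha * H * T

gamma, H, alpha = 400, 5, 1.6
for T in (0.001, 0.050):                       # 1 ms vs 50 ms frames
    dr, Q = delta_r(gamma, T), q_delay(alpha, H, T)
    print(f"T={T}: step={dr:.0f} bit/s, Q={Q * 1e3:.0f} ms, product={dr * Q:.0f} bits")
# T=0.001: step=400000 bit/s, Q=8 ms, product=3200 bits
# T=0.05: step=8000 bit/s, Q=400 ms, product=3200 bits
```

Either the allocation granularity or the queueing delay can be made small, but their product is pinned at α·H·Γ = 3200 bits.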

Fig. 6: Synchronous frames of sizes T_2 and T_1 = 3T_2.

We refer to the packets of a type g connection, g=1,...,G, as type g packets, and require that g be indicated in the header of each packet. The stop-and-go queueing in this case is based on the following rules:

A) A type g packet which has arrived at some link ℓ during a type g frame f does not become eligible for transmission by ℓ until f expires.

B) Any eligible type g packet, g=2,3,...,G, has non-preemptive priority over packets of type g' < g.
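Rule A above amounts to a small eligibility computation. In the sketch below, times are in milliseconds, the function name is ours, and the frame sizes follow the G=2, T_1 = 3T_2 case of Fig. 6; with the common time origin, every type 1 frame boundary is also a type 2 frame boundary:

```python
# Sketch of rule A: a type g packet arriving at time t during a type g
# frame becomes eligible only when that frame expires. Times are in
# milliseconds; the function name is ours. With the common origin,
# nested frame sizes (T1 = 3*T2, as in Fig. 6) keep boundaries aligned.

def eligible_at(t, T_g):
    """End of the type-g frame that contains arrival time t."""
    return (t // T_g + 1) * T_g

T1, T2 = 3, 1                        # ms; G = 2 with T1 = 3*T2
print(eligible_at(4.2, T2))          # 5.0: next T2 boundary
print(eligible_at(4.2, T1))          # 6.0: next T1 boundary
print(all((k * T1) % T2 == 0 for k in range(10)))   # True: boundaries nest
```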

Fig. 7: Realization of the stop-and-go queueing for multi-size frames.

Fig. 8: A typical pattern of service shifts between the queues for the case of G = 2 and T_1 = 3T_2.

Service of queue G−1 may be interrupted several times, on a non-preemptive basis, before its eligible load is transmitted. This is because each time a new type G frame starts, new packets may become eligible at queue G. Similarly, when the eligible load of queue G−1 has been served, service of queue G−2 starts. This procedure continues all the way down to queue 1. Once there is no eligible load at any queue, the server stays idle. A typical pattern of service shifts between the queues for the case of G=2 is illustrated in Fig. 8.

Similar to the single-frame case, the function of the service controller can be modified to account for asynchronous arriving and departing frames. Let τ̂_ℓ^g be the first multiple of T_g larger than or equal to τ_ℓ. Define

θ_ℓ^g = τ̂_ℓ^g − τ_ℓ = (−τ_ℓ) mod T_g ,   (12)

where θ_ℓ^g is the additional delay term that should be incorporated into link ℓ for type g packets, in order to synchronize the arriving and departing frames of type g over the link. This can be done by the following modification in the function of the service controller of Fig. 7: at the beginning of each type g frame, mark the load present in queue g. Then, θ_ℓ^g seconds later, designate this marked load as eligible for transmission. Considering the extra buffer space needed due to this additional delay, the required buffer size can now be as big as Σ_{g=1}^{G} (2T_g + θ_ℓ^g)·C_ℓ^g, where C_ℓ^g denotes the portion of the capacity of link ℓ allocated to type g connections. This is always less than 3T_1·C_ℓ.

6 Queueing Delay and Bandwidth Allocation in the Multi-Frame Case

In order to illustrate bandwidth allocation and queueing delay in the multi-frame case, it is again helpful to consider the case where all packets have the same length Γ. The bandwidth can be allocated to a connection of type g in incremental steps of one packet per T_g seconds, or

Δr_g = Γ/T_g bits/sec .
(13)

The delays associated with the packets of a type g connection are the same as if the stop-and-go queueing were practiced on a single-frame basis with the frame size equal to T_g. This can be seen by considering properties i and ii of theorem 1, and noticing that for packets of type g these properties are exactly the same as those previously stated for a single-frame case with T = T_g. Consequently, the end-to-end delay of a packet of a type g connection can be expressed as

D'_g = D_g + d_g = τ + Q_g + d_g ,   (14)

where D_g is the total connection delay, d_g is the packet delay jitter, and Q_g is the connection queueing delay. Furthermore,

−T_g < d_g < T_g ,   (15)

H·T_g ≤ Q_g < 2H·T_g ,   (16)

and

Δr_g · Q_g = α·H·Γ ,  1 ≤ α < 2 .   (17)

This is the same trade-off as in section 3 (Eq. (9)), except that now the coupling between the queueing delay and the incremental steps of bandwidth allocation, as expressed by (17), applies separately to each category of connections, rather than globally to all connections. In other words, it is now possible to provide small queueing delays for certain types of connections, and allocate bandwidth in fine increments to other connections.
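The pattern of service shifts among the per-type queues (Figs. 7 and 8) amounts to serving, non-preemptively, the eligible packet of the highest type. A minimal sketch, with our own names and data layout:

```python
# Sketch of the service shifts of Figs. 7 and 8: the server always picks
# the eligible packet of the highest type and serves it to completion
# (non-preemptive priority). Names and data layout are ours.
from collections import deque

def serve_one(queues):
    """queues: dict mapping type g -> deque of eligible type-g packets.
    Returns the next packet to transmit, or None if every queue is empty."""
    for g in sorted(queues, reverse=True):       # type G down to type 1
        if queues[g]:
            return queues[g].popleft()
    return None                                  # server stays idle

q = {1: deque(["low1", "low2"]), 2: deque(["high1"])}
print([serve_one(q) for _ in range(3)])   # ['high1', 'low1', 'low2']
```

A real controller would also re-mark eligibility at each type g frame boundary, per rule A; only the priority order among already-eligible packets is shown here.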

As a numerical example, consider a network with link capacities of 150 Mbits/sec and packet size of Γ=400 bits, and assume that only two frame sizes T_1 = 16 msec and T_2 = 1 msec are employed. The incremental steps of bandwidth allocation for type 1 and type 2 connections will be Δr_1 = 25 kbits/sec and Δr_2 = 400 kbits/sec. The maximum queueing delay for a type 1 or type 2 connection with H=5 will be Q_1 = 160 msec and Q_2 = 10 msec. This suggests that continuous bit stream oriented traffic with high vulnerability to end-to-end delay should be set up in this network as type 2 connections, while low speed data communication services can be designated as type 1 connections. Furthermore, in regard to constraint (11), one packet per type 2 frame, or less than 0.3% of the capacity of each link, should remain unallocated to type 2 connections.

Finally, we look at the buffer requirements in the above example. The maximum buffer requirement per link ℓ is:

B_ℓ(max) = (2T_1 + θ_ℓ^1)·C_ℓ < 3T_1·C_ℓ = 7.2 Mbits .   (18)

However, if it is anticipated that no more than 10% of the capacity of link ℓ, i.e., 15 Mbits/sec, ever needs to be allocated to type 1 connections, the maximum required buffer space would be:

B_ℓ(max) = 0.1C_ℓ·(2T_1 + θ_ℓ^1) + 0.9C_ℓ·(2T_2 + θ_ℓ^2) < 3(0.1C_ℓ·T_1 + 0.9C_ℓ·T_2) = 1.125 Mbits .   (19)

Clearly, if the smaller buffer is provided at link ℓ, care must be taken to ensure that at most 10% of the link capacity is allocated to type 1 traffic.

7 Integration of Framing Strategy with Other Congestion Control Policies

The benefits of congestion-free and bounded-delay transmission provided by the framing strategy are basically accomplished at the cost of a strict admission policy to enforce the smoothness property on packet arrivals, i.e., admitting only r_k·T bits per connection k in each frame of size T. Since an averaging period of only a fraction of a second is often insufficient to smooth out the statistical fluctuations of traffic sources, this admission policy practically requires that capacity be allocated based on the peak rate of the connections. If the framing strategy is uniformly applied to all of the services in a broadband network, low utilization of the transmission capacity is inevitable. In this section we alleviate this problem by combining the framing strategy with other traffic management policies.

Fig. 9: Integration of the stop-and-go queueing with other queueing disciplines.

Consider the queueing structure of Fig. 9. Here, a new category of packets, which we may refer to as type 0 traffic, is loaded into a new queue, queue 0. The function of the service controller is similar to Fig. 7 except that now queue 0 has replaced the idle position of the server: during any period of time that the other queues are not served, the server is connected to queue 0. This is actually the time that the server would have stayed idle in the configuration of Fig. 7, and is comprised of the following two parts:

a) The fraction of the frames corresponding to any part of the transmission bandwidth not allocated to the traffic of type g ≥ 1.

b) The fraction of the allocated bandwidth of the traffic of type g ≥ 1 which is being underutilized by the corresponding sources.

Theorem 2: With the inclusion of type 0 traffic in the network, theorem 1 remains valid given that constraint (11) also applies to g_0=1, namely:

Σ_{g=1}^{G} C_ℓ^g ≤ C_ℓ − Γ_max/T_1 .   (20)
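The queueing structure of Fig. 9 can be sketched by letting the server fall back to queue 0 whenever no eligible packet of type g ≥ 1 is waiting; as before, the names and data layout are ours:

```python
# Sketch of the integrated structure of Fig. 9: whenever no eligible
# packet of any type g >= 1 is waiting, the server is connected to
# queue 0 instead of idling. Names and data layout are ours.
from collections import deque

def serve_with_background(framed_queues, queue0):
    """framed_queues: dict g -> deque of eligible type-g packets.
    queue0: deque of type-0 packets, served only in leftover time."""
    for g in sorted(framed_queues, reverse=True):
        if framed_queues[g]:
            return framed_queues[g].popleft()
    return queue0.popleft() if queue0 else None  # former idle time

q0 = deque(["bulk1", "bulk2"])
fq = {1: deque(["rt1"]), 2: deque()}
print([serve_with_background(fq, q0) for _ in range(3)])
# ['rt1', 'bulk1', 'bulk2']
```

Type 0 traffic thus soaks up both the unallocated bandwidth and the allocated-but-unused bandwidth, without delaying any eligible framed packet.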

peak rates. The provision of type 0 services helps to alleviate this problem in two distinct ways. First, any bursty traffic without stringent loss and delay requirements can now be included in the type 0 category and need not be serviced according to the framing strategy. Secondly, in regard to the remaining bursty connections of type g ≥ 1, the unused portion of the allocated bandwidth can now be utilized by the type 0 traffic.

In this integrated environment, the framing strategy is best suited for the traffic with negligible burstiness, real time traffic, and other services with stringent packet loss and delay requirements, while the services with less sensitivity to loss and delay should be included in the type 0 category. It is important to notice that the new category of traffic may be regulated according to any desired policy or transmitted in any order, and can itself comprise different classes of connections, as well as datagrams, each being subject to a different congestion control policy. The presence of packets in this category does not cause any disruption in the proper functioning of the framing strategy as long as the rules of stop-and-go queueing are followed.

8 Summary

A framework for congestion management in broadband integrated services packet networks was presented. This framework allows for guaranteed delivery of delay-sensitive real time traffic, as well as efficient transmission of highly bursty traffic. Time framing and stop-and-go queueing were used as the essential mechanisms to protect real-time traffic against packet loss and excess delay. In order to retain bandwidth allocation flexibility, the stop-and-go queueing on a multiple frame basis was developed.

Appendix

Proof of Theorem 1: Let us first define t_m^g ≜ m·T_g, where g and m are integers and g=1,...,G, m=−∞,...,∞. Without loss of generality we assume that each interval (t_{m−1}^g, t_m^g] coincides with one type g frame.
We prove theorem 1 by first establishing the following lemma:

Lemma: The following conditions hold true for any g=1,...,G, and m=−∞,...,∞:

a) Any type g packet that has arrived at any link ℓ on or prior to t_{m−1}^g will be completely transmitted by t_m^g.

b) For any link ℓ and any connection k of type g using it, the aggregated length of the packets belonging to k that are transmitted by ℓ during the frame (t_{m−1}^g, t_m^g] is bounded by r_k·T_g.

Proof of the Lemma: We prove this lemma by applying induction with respect to the two parameters m and g. First, in step 1, we focus on the highest priority traffic (g=G) and apply induction with respect to m to show that the lemma is true for g=G and any m. Next, in step 2, we apply induction to g to establish the lemma for any g and m.

Step 1 − g=G: We show that the lemma is correct for g=G by applying induction with respect to the variable m. Let the lemma hold true for g=G and m ≤ M.

t_M^G, are completely served. Furthermore, the aggregated length of the packets of any connection k of type G that are transmitted by ℓ during (t_M^G, t_{M+1}^G] is bounded by r_k·T_G. Since the choice of link ℓ was arbitrary, we conclude that conditions a and b also hold true for g=G and m=M+1. Finally, since the network initially contains no packets, these conditions are indeed true for g=G and any m less than or equal to some integer M_0. The lemma, for the case of g=G, follows by induction.

Step 2 − g_0 < G:

= C_ℓ^{g_0}·T_{g_0} .   (22)

Since for any g > g_0, 1) condition b by assumption holds true for all m, and 2) an integral number of type g frames cover the type g_0 frame (t_M^{g_0}, t_{M+1}^{g_0}], we conclude that the aggregated length of all of the type g packets, g ≥ g_0, that are, or become, eligible for service by ℓ during (t_M^{g_0}, t_{M+1}^{g_0}] is bounded by

T_{g_0} · Σ_{g=g_0}^{G} C_ℓ^g .   (23)

For the case of g_0 > 1, according to constraint (11) this bound is less than or equal to

T_{g_0}·(C_ℓ − Γ_max/T_{g_0}) = C_ℓ·(T_{g_0} − Γ_max/C_ℓ) .   (24)

This means that the service time of eligible type g ≥ g_0 packets during (t_M^{g_0}, t_{M+1}^{g_0}] takes no longer than T_{g_0} − Γ_max/C_ℓ seconds. Since these packets have non-preemptive priority over packets of lower type, their transmission starts no later than Γ_max/C_ℓ seconds after t_M^{g_0}, and then, as long as any eligible type g_0 packet is left, the server does not start transmitting a packet of lower type. We conclude that all of the type g_0 packets will be transmitted before the frame (t_M^{g_0}, t_{M+1}^{g_0}] expires. This is the same as condition a for g=g_0 and m=M+1.

For the case of g_0=1, there is a slight difference in the argument. Since constraint (11) does not apply to g_0=1, we can only say that the aggregated length of the packets of type g ≥ g_0 = 1 (or indeed of any type) that are, or become, eligible for transmission during (t_M^1, t_{M+1}^1] is bounded by

T_1 · Σ_{g=1}^{G} C_ℓ^g ≤ C_ℓ·T_1 .   (25)

Σ_{g=1}^{G} 2T_g·C_ℓ^g ≤ 2T_1·Σ_{g=1}^{G} C_ℓ^g ≤ 2C_ℓ·T_1 .   (26)

Proof of Theorem 2: The proof of Theorem 2 is similar to the proof of Theorem 1 except for a small change in regard to the case of g_0=1 in step 2 of the lemma. Here, at time t_M^1, transmission of eligible packets of type g ≥ 1 may not start right away; it may have to be postponed for up to Γ_max/C_ℓ seconds, since a type 0 packet could be receiving service at t_M^1. However, in view of the new constraint (20), Eq. (25) should also be changed as follows:

T_1 · Σ_{g=1}^{G} C_ℓ^g ≤ C_ℓ·(T_1 − Γ_max/C_ℓ) .   (27)

This guarantees that all of the eligible type g packets, g ≥ 1, will be transmitted before the frame (t_M^1, t_{M+1}^1] expires.

Acknowledgment

The author would like to thank Ernst W. Biersack, Mario P. Vecchi, and Mark Garrett for helpful comments.

REFERENCES

1. R. Kahn and W. Crowther, "Flow Control in a Resource-Sharing Computer Network", IEEE Trans. Commun., June 1972, pp. 539-546.

2. M. Gerla and L. Kleinrock, "Flow Control: A Comparative Survey", IEEE Trans. Commun., 1980, pp. 553-574.

3. S. J. Golestani, A Unified Theory of Flow Control and Routing in Data Communication Networks, PhD thesis, MIT, Dept. of Electrical Engineering and Computer Science, Cambridge, MA, 1980.

4. V. Jacobson, "Congestion Avoidance and Control", Proceedings of the SIGCOMM Symposium, 1988, pp. 314-329.

5. S. J. Golestani, "Congestion-Free Transmission of Real-Time Traffic in Packet Networks", Proceedings of INFOCOM, San Francisco, California, June 1990.