ADMISSION CONTROL SCHEMES FOR REAL-TIME STREAMS OVER THE INTERNET*

VICTOR T.S. SHI AND WILLIAM PERRIZO
C.S. Dept, NDSU, Fargo, ND 58105
Email: {Victor.Shi, William.Perrizo}@ndsu.nodak.edu

* This work is partially supported by GSA Grant ACT# 96130308.

Modern Multi-Services networks operate in a heterogeneous traffic environment in which multiple services are provided. Different services require different grades of service and different service control schemes. This paper discusses two control schemes – restricted access control and partial preemption control – which may be used to enforce such network provisioning. General steady-state balance equations are derived for analyzing and comparing the performance of these two resource access control schemes. In addition to call blocking probability, a new metric, the call preemption ratio, is proposed for evaluating the performance of the partial preemption control scheme in place of the traditional preempting probability. The restricted access approach can provide very good protection against overloads while meeting the needs of multiple grades of service. Partial preemption control is much more efficient than restricted access control in terms of system throughput, but runs the risk of introducing some preemption. In addition, partial preemption control provides one-way overload protection. We conclude that partial preemption control is especially attractive when the system is lightly overloaded. Average call holding times should be considered when grouping calls into different classes; in contrast, the average call holding times of the different service classes do not affect the performance of the restricted access mechanism. In an example with 3 service classes, we show that, compared to Complete Sharing, the restricted access control scheme may improve system throughput by 6%, while the proposed partial preemption control scheme may improve throughput by as much as 40%.

1 Introduction

Multi-Services networks use call admission control to provide quality of service to individual real-time traffic flows. In addition to traditional circuit-switched networks, which provide excellent quality of service, the packet-switched Internet now carries far more real-time traffic than ever before. In recent years researchers have proposed various call admission control schemes to improve quality of service. The existing proposals can be classified into three categories: resource reservation by signaling, edge-point admission control and host admission control. Resource reservation by signaling uses a mechanism such as RSVP [10] to reserve resources on the routers along the path. Edge-point admission control [9] and host admission control [8] have been proposed to solve the scalability problem of RSVP. With host admission control, hosts send probing packets to detect the network congestion status and then decide whether a newly arriving call can be admitted. In edge-point admission control, admission decisions are made at edge routers, based on aggregate measurements obtained at a flow's egress router. Unlike RSVP, neither edge-point nor host admission control requires routers to keep per-flow state to ensure quality of service, so both are scalable. The problems with edge-point and host admission control are long connection setup times and the bandwidth wasted by probing packets. Also, quality of service is not guaranteed because of traffic randomness.
While current edge-point controls are measurement based, in this paper we assume that each ingress router has a virtual bandwidth capacity and that this capacity can be predicted by analyzing the measurement data. With the assumed virtual capacity, the ingress router can perform admission control using mathematical models. The advantages of ingress router control are much shorter connection setup and no bandwidth waste (the virtual bandwidth capacity can be deduced from data collected by passive traffic monitoring). Further, we assume that calls can be classified into multiple classes, each with its own QoS requirements. Under such an environment we analyze the performance characteristics of priority admission control schemes – restricted access and preemption control.

In the literature, several call admission control mechanisms have been proposed and analyzed. The common goal has been to maintain acceptable quality of service in an effective and efficient manner. Choudhury, Leung and Whitt [1] proposed the UL/GM (Upper-Limit/Guaranteed-Minimum) approach for protection against overloads in different classes of service. They concluded that the UL/GM approach outperforms CS (Complete Sharing), CP (Complete Partitioning) and TR (Trunk Reservation) when asymmetric services are considered and fair protection against overloads is needed (i.e., every class of service needs to be protected against overloads in the other classes of service). Biswas and Sengupta [2] studied the CS, UL, GM and TR approaches in a wireless ATM environment. They concluded that the TR approach outperforms the others when different grades of service are needed. In their experiments, the revenue gain from using the TR approach could be as high as 35%; such a revenue gain would be a windfall to service providers. Rege and Sengupta [3] considered multi-priority systems with three different service disciplines: last-come first-served preemptive resume, multiple servers, and processor sharing. They further discussed three potential applications in the area of computer and communication systems. Wang and Saadawi [4] discussed the applicability of trunk reservation techniques to nonhierarchical circuit-switched networks in which two classes of traffic with unequal arrival rates and holding times are present. They introduced two control schemes, restricted access and preemption, to improve the overall performance. In their paper, steady-state equations for two preemptive priority classes of calls were given and solved using a method called successive over-relaxation (SOR) [5].

This paper studies restricted access and preemption control schemes in a single-link loss system with multi-class traffic (the link has the virtual bandwidth capacity). In the restricted access scheme, a number of bandwidth units are reserved with respect to each class of service. A new service request is rejected if, after accepting the request, the number of existing calls of that class would exceed the threshold corresponding to that class of service. This is the UL mechanism mentioned in [1][2]. Restricted access control may still not be efficient, though it outperforms the others in most situations as claimed in [1], because low priority calls may be frequently rejected while bandwidth resources sit idle, in the hope of accepting high priority calls in the future. The preemption control scheme is more attractive in this setting.
Preemption control allows low priority calls to use the bandwidth reserved for high priority calls, and revokes that bandwidth when a high priority call arrives. We studied a full preemption control scheme in [6], where high priority calls preempt bandwidth whenever low priority calls are in service. This paper studies a partial preemption scheme, where preemption is triggered only when the number of calls of the low priority classes simultaneously present in the system reaches a predefined threshold. The CS scheme [1][2] and the full preemption scheme [6] may be considered special cases of our scheme, in which the thresholds are set to zero or to the full capacity of the link, respectively.

Section 2 of this paper proposes a model for a link to which multiple classes of calls arrive. Both the restricted access control scheme and the partial preemption control scheme are considered. Section 3 further discusses the restricted access control scheme and derives general forms of the steady-state balance equations for calculating the call blocking probability. A partial preemption control scheme is proposed and equations for a link with K classes of traffic are given in Section 4. Call blocking probability is used as the performance measure for the partial preemption scheme. In addition, a new metric, the call preemption ratio, is proposed for measuring the grade of service when preemption control is applied. Section 5 examines the performance of the two control schemes through numerical experiments. Section 6 concludes the paper.

2 Models

In this section we propose a mathematical model for a single-link loss system, i.e., a newly arriving call is lost and cleared when it is not allowed to proceed due to lack of resources. The link consists of n bandwidth units that correspond to n servers in a random service system. All bandwidth units are fully available to all calls. Calls (corresponding to customers in a random service system) are classified into K classes according to their priorities and denoted by T_1, T_2, ..., T_K. Their priority levels are ordered decreasingly by convention, i.e., calls of class i have higher priority than calls of class j if i < j. The call arrival processes of the streams T_1, T_2, ..., T_K are Poisson with average rates λ_1, λ_2, ..., λ_K, respectively. The call holding times of the streams have negative exponential distributions with means 1/μ_1, 1/μ_2, ..., 1/μ_K, respectively. All call arrival processes are independent of each other. We denote the traffic load corresponding to T_k by a_k, where a_k = λ_k / μ_k for k = 1, 2, ..., K. We assume the switches at the ends of the link are non-blocking and the call set-up times are negligible compared to the average call holding times.

The switches enforce the access control schemes by associating numbers n_1, n_2, ..., n_K with classes T_1, T_2, ..., T_K, respectively. In the restricted access control scheme, a newly arriving call of class i is rejected if the number of bandwidth units servicing calls of that class is equal to n - n_i. In the partial preemption control scheme, a newly arriving call of class i is rejected if no idle bandwidth unit is available on the link and the number of calls of class j in service is not greater than n - n_j for all j > i. Otherwise, it preempts a unit of bandwidth used by one of the calls of class j, where class j is the lowest-priority class among those that have calls in service at the moment the new call arrives. For simplicity, we assume the preempted lower priority call is lost and cleared.
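To make the two admission rules concrete, the following sketch expresses them as functions of the current occupancy vector. This is an illustrative Python fragment, not code from the paper; the function and variable names are ours, and the victim-selection step follows the verbal description above.

```python
# Illustrative sketch of the two admission rules described above (assumed helper
# names, not from the paper).  state[k] holds the number of class-(k+1) calls in
# service, n is the link capacity in bandwidth units, and n_assoc[k] is the number
# n_{k+1} associated with that class, so its occupancy limit / preemption
# threshold is n - n_assoc[k].

def admit_restricted(state, cls, n, n_assoc):
    """Restricted access (UL): reject when the link is full or the class is at its limit."""
    if sum(state) >= n:
        return False
    return state[cls] < n - n_assoc[cls]

def admit_partial_preemption(state, cls, n, n_assoc):
    """Partial preemption: accept on a free unit; otherwise preempt only when some
    lower-priority class exceeds its threshold.  Returns (accepted, victim_class)."""
    if sum(state) < n:
        return True, None
    # Preemption is triggered only if some lower-priority class j > cls holds
    # more than n - n_j calls.
    if not any(state[j] > n - n_assoc[j] for j in range(cls + 1, len(state))):
        return False, None
    # The preempted call belongs to the lowest-priority class currently in service.
    victim = max(j for j in range(len(state)) if state[j] > 0)
    return True, victim

# Example with n = 30, n_1 = 0, n_2 = 18, n_3 = 20 as in Section 5:
print(admit_restricted([5, 12, 4], cls=1, n=30, n_assoc=[0, 18, 20]))          # False: class 2 is at its limit of 12
print(admit_partial_preemption([10, 9, 11], cls=0, n=30, n_assoc=[0, 18, 20])) # (True, 2): class 3 exceeds its threshold of 10
```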
Let (j_1, j_2, ..., j_K) denote the state of the link in which j_1 calls of priority 1, j_2 calls of priority 2, ..., j_K calls of priority K are being serviced by the link, and let P(j_1, j_2, ..., j_K) be the corresponding steady-state probability. Then the normalization equation

    \sum_{j_1=0}^{n-n_1} \sum_{j_2=0}^{n-n_2} \cdots \sum_{j_K=0}^{n-n_K} P(j_1, j_2, \ldots, j_K) = 1

holds for the restricted access scheme, and the equation

    \sum_{j_1=0}^{n} \sum_{j_2=0}^{n} \cdots \sum_{j_K=0}^{n} P(j_1, j_2, \ldots, j_K) = 1

holds for the partial preemption control scheme. We set n_1 to zero for efficiency and comparison reasons.

3 Restricted access control

With the model set up in Section 2 for restricted access, the state transitions form a Markov chain. The state space is {(j_1, ..., j_K) : 0 ≤ j_k ≤ n - n_k for k = 1, 2, ..., K and j_1 + ... + j_K ≤ n}. Applying the flow conservation law, we deduce the steady-state balance equations for a general number K of traffic classes as follows:

    \sum_{k=1}^{K} (j_k + a_k)\,\mu_k\, P(j_1, j_2, \ldots, j_K)
      = \sum_{k=1}^{K} a_k \mu_k\, P(j_1, \ldots, j_k - 1, \ldots, j_K)
      + \sum_{k=1}^{K} (j_k + 1)\,\mu_k\, P(j_1, \ldots, j_k + 1, \ldots, j_K)        (3.1)

for \sum_{k=1}^{K} j_k < n and j_k < n - n_k for k = 2, 3, \ldots, K, with boundary conditions

    \sum_{k=1}^{K} j_k \mu_k\, P(j_1, j_2, \ldots, j_K)
      = \sum_{k=1}^{K} a_k \mu_k\, P(j_1, \ldots, j_k - 1, \ldots, j_K)        (3.2)

for \sum_{k=1}^{K} j_k = n and j_k < n - n_k for k = 2, 3, \ldots, K, and, for m = 1, 2, \ldots, K-1,

    \Big( \sum_{k \ne k_1, \ldots, k_m} a_k \mu_k + \sum_{k=1}^{K} j_k \mu_k \Big) P(j_1, j_2, \ldots, j_K)
      = \sum_{k=1}^{K} a_k \mu_k\, P(j_1, \ldots, j_k - 1, \ldots, j_K)
      + \sum_{k \ne k_1, \ldots, k_m} (j_k + 1)\,\mu_k\, P(j_1, \ldots, j_k + 1, \ldots, j_K)        (3.3)

for \sum_{k=1}^{K} j_k < n; j_{k_i} = n - n_{k_i} with k_i \in \{2, 3, \ldots, K\} for i = 1, 2, \ldots, m; and j_k < n - n_k for k = 2, 3, \ldots, K, k \ne k_i.

Let Pb(k) be the call blocking probability for calls of class k. Then

    Pb(1) = \sum_{j_1 + j_2 + \cdots + j_K = n} P(j_1, j_2, \ldots, j_K)        (3.4)

    Pb(k) = \sum_{j_1 + j_2 + \cdots + j_K = n} P(j_1, j_2, \ldots, j_K)
            + \sum_{j_k = n - n_k,\; j_1 + \cdots + j_K < n} P(j_1, j_2, \ldots, j_K), \quad k = 2, 3, \ldots, K.        (3.5)

4 Partial preemption control

Similarly, with the model described in Section 2, the general steady-state balance equations for the partial preemption control scheme are as follows:

    \sum_{k=1}^{K} (j_k + a_k)\,\mu_k\, P(j_1, j_2, \ldots, j_K)
      = \sum_{k=1}^{K} a_k \mu_k\, P(j_1, \ldots, j_k - 1, \ldots, j_K)
      + \sum_{k=1}^{K} (j_k + 1)\,\mu_k\, P(j_1, \ldots, j_k + 1, \ldots, j_K)        (4.1)

for \sum_{k=1}^{K} j_k < n, with boundary conditions

    \Big( \sum_{k=1}^{t-1} a_k \mu_k + \sum_{k=1}^{K} j_k \mu_k \Big) P(j_1, j_2, \ldots, j_K)
      = \sum_{k=1}^{K} a_k \mu_k\, P(j_1, \ldots, j_k - 1, \ldots, j_K)        (4.2)

for \sum_{k=1}^{K} j_k = n, j_t > n - n_t for some t \in \{1, 2, \ldots, K\}, and j_i \le n - n_i when t < i \le K, and

    \Big( \sum_{k=1}^{t-1} a_k \mu_k + \sum_{k=1}^{K} j_k \mu_k \Big) P(j_1, j_2, \ldots, j_K)
      = \sum_{k=1}^{K} a_k \mu_k\, P(j_1, \ldots, j_k - 1, \ldots, j_K)
      + \sum_{i \in E} a_i \mu_i\, P(j_1, \ldots, j_l + 1, \ldots, j_i - 1, \ldots, j_K)        (4.3)

for \sum_{k=1}^{K} j_k = n; j_t > n - n_t with t \in \{1, 2, \ldots, K\}; j_i \le n - n_i for i \in Q \subset \{t+1, t+2, \ldots, K\}; and j_i > n - n_i for i \in E, where E = \{t+1, t+2, \ldots, K\} \setminus Q and class l is the lowest-priority class with calls in service in the originating state.

The call blocking probabilities are:

    Pb(1) = \sum_{j_1 + \cdots + j_K = n,\; j_k \le n - n_k \;(k = 2, 3, \ldots, K)} P(j_1, j_2, \ldots, j_K)        (4.4)

    Pb(k) = \sum_{j_1 + \cdots + j_K = n,\; j_t \le n - n_t \;(t = k+1, \ldots, K)} P(j_1, j_2, \ldots, j_K), \quad k = 2, 3, \ldots, K-1        (4.5)

    Pb(K) = \sum_{j_1 + \cdots + j_K = n} P(j_1, j_2, \ldots, j_K).        (4.6)
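Rather than enumerating the boundary cases of (4.1)-(4.3) by hand, the same stationary distribution can be obtained numerically by building the generator of the Markov chain directly from the admission rule and solving the global balance equations. The sketch below is an illustrative Python fragment under that approach; the names and the toy parameters are ours (not the paper's 30-unit example), and a practical implementation of either scheme would solve the balance equations with an iterative method such as SOR [5] when the state space is large.

```python
# Illustrative sketch (assumed, not from the paper): build the CTMC generator of
# the partial preemption scheme on a small link and solve the global balance
# equations pi * Q = 0 numerically.  lam[k] and mu[k] are the class-(k+1) arrival
# and service rates, n is the link capacity, and n_assoc[k] is n_{k+1}, so the
# class threshold is n - n_assoc[k].
import itertools
import numpy as np

def preemption_blocking(n, n_assoc, lam, mu):
    K = len(lam)
    states = [s for s in itertools.product(range(n + 1), repeat=K) if sum(s) <= n]
    index = {s: i for i, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))

    for s in states:
        for k in range(K):
            # Departure of a class-(k+1) call.
            if s[k] > 0:
                d = list(s); d[k] -= 1
                Q[index[s], index[tuple(d)]] += s[k] * mu[k]
            # Arrival: use a free unit, or preempt the lowest-priority class in
            # service when some lower-priority class exceeds its threshold.
            if sum(s) < n:
                a = list(s); a[k] += 1
                Q[index[s], index[tuple(a)]] += lam[k]
            elif any(s[j] > n - n_assoc[j] for j in range(k + 1, K)):
                victim = max(j for j in range(K) if s[j] > 0)
                a = list(s); a[k] += 1; a[victim] -= 1
                Q[index[s], index[tuple(a)]] += lam[k]
    np.fill_diagonal(Q, -Q.sum(axis=1))

    # Solve pi * Q = 0 together with the normalization sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(len(states))])
    b = np.zeros(len(states) + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Blocking probability of class k+1, mirroring (4.4)-(4.6): link full and no
    # lower-priority class above its threshold.
    Pb = []
    for k in range(K):
        blocked = [i for i, s in enumerate(states)
                   if sum(s) == n and not any(s[j] > n - n_assoc[j] for j in range(k + 1, K))]
        Pb.append(pi[blocked].sum())
    return Pb

# Toy example only (a 6-unit link, not the n = 30 configuration of Section 5):
print(preemption_blocking(n=6, n_assoc=[0, 3, 4], lam=[0.6, 2.4, 4.8], mu=[0.1, 0.4, 0.8]))
```

The restricted access scheme can be evaluated with the same generator construction by replacing the preemption branch with a plain rejection whenever the arriving class is at its limit n - n_k.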
Let Pre(k) be the preempting probability for class k. Then

    Pre(2) = \sum_{j_1 + \cdots + j_K = n,\; j_2 > n - n_2,\; j_t \le n - n_t \;(t = 3, 4, \ldots, K)} P(j_1, j_2, \ldots, j_K)        (4.7)

    Pre(k) = \sum_{j_1 + \cdots + j_K = n,\; j_k > n - n_k,\; j_t \le n - n_t \;(t = k+1, \ldots, K)} P(j_1, j_2, \ldots, j_K), \quad k = 3, 4, \ldots, K-1        (4.8)

    Pre(K) = \sum_{j_1 + \cdots + j_K = n,\; j_K > n - n_K} P(j_1, j_2, \ldots, j_K).        (4.9)

Customers of low priority may find it unacceptable if the probability of their calls being preempted exceeds a certain threshold. The preempting probability Pre(k) gives the probability that a bandwidth unit servicing a call of class k is preempted when a call of higher priority arrives; no bandwidth is preempted if no calls of higher priority arrive. Thus the number of class-k calls being preempted depends on both Pre(k) and the arrival rates of the higher priority calls. In [6] we found that the preempting probability may be misleading in the performance evaluation of a preemption control scheme. Hence in this paper we use the call preemption ratio as the performance indicator. Let Pret(k) denote the call preemption ratio. We define the call preemption ratio as the ratio of the number of calls being preempted to the number of calls being accepted. That is,

    Pret(2) = \frac{\lambda_1\, Pre(2)}{\lambda_2\, [1 - Pb(2)]} = \frac{a_1 \mu_1\, Pre(2)}{a_2 \mu_2\, [1 - Pb(2)]}        (4.10)

    Pret(k) = \frac{\sum_{i=1}^{k-1} \lambda_i\, Pre(k)}{\lambda_k\, [1 - Pb(k)]} = \frac{\sum_{i=1}^{k-1} a_i \mu_i\, Pre(k)}{a_k \mu_k\, [1 - Pb(k)]}, \quad k = 3, 4, \ldots, K.        (4.11)

5 Experimental results

In this section we examine the numerical results of experiments with a variety of traffic patterns and system configurations. Assume the link has 30 bandwidth units (n = 30) and the calls are divided into 3 classes; n_2 and n_3 are set to 18 and 20, respectively. Table 1 lists two groups of experimental results with the restricted access control scheme. It demonstrates that the call holding times have no impact on performance at all. Keeping this in mind, we may group the calls into different classes without considering their holding time characteristics. Table 2 presents two groups of experimental results involving the partial preemption control scheme. In contrast, we see that the average call holding times greatly affect the call blocking probability of each class; assigning higher priority to calls with longer average holding times reduces the blocking probability. Table 3 demonstrates the impact of average holding times on call preemption ratios. As defined in Section 4, the call preemption ratio measures the frequency of preemption among the calls that are accepted. Table 3 shows that assigning higher priority to calls with longer average holding times may greatly reduce the number of unfinished calls (calls which are accepted but later preempted by calls of higher priority).

From the results in Tables 1, 2 and 3, we conclude that the partial preemption scheme provides much better service in terms of blocking probability. With a load of 6 Erlangs in each class, the preemption control scheme serves all customers with a blocking probability of less than 0.003, whereas restricted access serves calls of class 3 with a blocking probability near 0.04, about ten times greater than 0.003. The worst preemption ratio is near 0.00061 (with μ_1 = 0.1, μ_2 = 0.4, μ_3 = 0.8), a value that might be considered acceptable. If we assume that the design parameters are Pb(1) = 0.0012, Pb(2) = 0.0121, Pb(3) = 0.044 and μ_1 = 0.1, μ_2 = 0.4, μ_3 = 0.8, then the restricted access approach provides a throughput of 18 Erlangs and the partial preemption control approach 21.6 Erlangs, at the expense of Pret(2) = 0.000068 and Pret(3) = 0.005375, with n_2 = 18 and n_3 = 20. Compared to the CS approach (throughput of 17 Erlangs), the throughput gains are (18 - 17)/17 = 5.9% and (21.6 - 17)/17 = 27%, respectively. If we consider Pret(2) = 0.0022 and Pret(3) = 0.0101 acceptable, then the gain of the partial preemption control approach can be as high as (23.9 - 17)/17 = 40.6%, with n_2 = 23 and n_3 = 18.
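As a quick check, the quoted gain percentages follow directly from the carried loads cited above. The fragment below is illustrative only, using the throughput figures from the text.

```python
# Relative throughput gain over Complete Sharing, from the carried loads quoted
# above (17 Erlangs for CS; 18, 21.6 and 23.9 Erlangs for the other schemes).
def gain_over_cs(throughput_erl, cs_throughput_erl=17.0):
    return (throughput_erl - cs_throughput_erl) / cs_throughput_erl

for name, t in [("restricted access", 18.0),
                ("partial preemption, n_2 = 18, n_3 = 20", 21.6),
                ("partial preemption, n_2 = 23, n_3 = 18", 23.9)]:
    print(f"{name}: {gain_over_cs(t):.1%}")   # 5.9%, 27.1%, 40.6%
```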
Longer average holding times result in higher blocking probabilities under the preemption approach; this effect does not occur under the restricted access approach. We attribute this phenomenon to the increased average arrival rates of calls and to the fact that partial preemption allows more low priority calls to be present simultaneously. The negative effect of allowing more low priority calls on the link at the same time can also be seen in the Pb(1) columns of Tables 1 and 2, where Pb(1) is slightly larger in Table 2 than in Table 1: more calls of classes 2 and 3 existing simultaneously in the system makes the blocking probability slightly higher for calls of class 1.

Table 1: Blocking probability with the restricted access control scheme (n = 30, n_2 = 18, n_3 = 20)

                   μ_1 = 0.8, μ_2 = 0.4, μ_3 = 0.1        μ_1 = 0.1, μ_2 = 0.4, μ_3 = 0.8
a_1 = a_2 = a_3    Pb(1)      Pb(2)      Pb(3)            Pb(1)      Pb(2)      Pb(3)
 5                 0.000098   0.003506   0.018432         0.000098   0.003506   0.018432
 6                 0.001208   0.012099   0.043658         0.001208   0.012099   0.043658
 7                 0.006677   0.030768   0.081235         0.006677   0.030768   0.081235
 8                 0.021487   0.062069   0.128657         0.021487   0.062069   0.128657
 9                 0.048043   0.104369   0.181639         0.048043   0.104369   0.181639
10                 0.084404   0.153054   0.235715         0.084404   0.153054   0.235715

Table 2: Blocking probability with the partial preemption control scheme (n = 30, n_2 = 18, n_3 = 20)

                   μ_1 = 0.8, μ_2 = 0.4, μ_3 = 0.1        μ_1 = 0.1, μ_2 = 0.4, μ_3 = 0.8
a_1 = a_2 = a_3    Pb(1)      Pb(2)      Pb(3)            Pb(1)      Pb(2)      Pb(3)
 5                 0.000121   0.000149   0.000210         0.000106   0.000139   0.000224
 6                 0.001496   0.001812   0.002375         0.001310   0.001713   0.002686
 7                 0.008040   0.009578   0.011707         0.007194   0.009344   0.014233
 8                 0.024711   0.028962   0.033494         0.022778   0.029335   0.043119
 9                 0.052910   0.061033   0.067911         0.049966   0.063664   0.089957
10                 0.089950   0.102204   0.110829         0.086275   0.108586   0.147453

Table 3: Preemption ratio with the partial preemption control scheme (n = 30, n_2 = 18, n_3 = 20)

                   μ_1 = 0.8, μ_2 = 0.4, μ_3 = 0.1        μ_1 = 0.1, μ_2 = 0.4, μ_3 = 0.8
a_1 = a_2 = a_3    Pret(2)    Pret(3)                     Pret(2)    Pret(3)
 5                 0.000057   0.000726                    0.000008   0.000053
 6                 0.000635   0.006761                    0.000101   0.000610
 7                 0.003107   0.025845                    0.000543   0.003100
 8                 0.008757   0.056266                    0.001689   0.009003
 9                 0.017304   0.088541                    0.003657   0.018058
10                 0.027299   0.116389                    0.006257   0.028494

Overload protection is an important issue brought up in [1]. It is desirable that overloads caused by some classes of customers do not grossly affect the service the system offers to other classes of customers who stick to their contracts. Our performance study shows that the partial preemption approach favors high priority customers: the blocking probabilities increase much more sharply for calls of lower priority than for calls of higher priority. An interesting and positive phenomenon is that the call preemption ratios increase very slowly. This means that an overload in a particular class does not affect the preemption behavior significantly, even if the overloading calls are of lower priority. We omit the detailed performance study of overload protection because of space limits; see [14] for a detailed analysis.

6 Conclusions

Modern Multi-Services networks operate in a heterogeneous traffic environment. It is reasonable that different services and customers may require different grades of service.
In addition, calls requesting larger bandwidth experience poorer service in terms of call blocking probability unless appropriate resource access control schemes are adopted [7]. We used a simple model to provide some insight into the performance of the restricted access control and partial preemption control schemes. Call blocking probability is used to evaluate the performance of the two access control schemes. Moreover, we defined a new metric, the call preemption ratio, to evaluate preemption control in place of the preempting probability, the traditional metric used in [4][6]. The proposed call preemption ratio explicitly indicates the portion of calls that are accepted but not completed because of preemption. In an example of a link with 3 classes of traffic, we found that the average call holding times had no effect on system performance when adopting restricted access control. In contrast, average call holding times must be taken into consideration when grouping classes of traffic under partial preemption control.

With proper thresholds, restricted access control may provide different grades of service and a certain degree of overload protection, at the expense of some loss of throughput. The preemption control discussed in [6] allows full access to the resources for all classes of traffic, providing a much more efficient way to use the link resources at the expense of introducing possible preemption. Obviously, this may not be tolerable when the portion of calls being preempted exceeds a certain threshold. While such preemption control mechanisms do not provide a way to keep this portion within a threshold, the partial preemption control proposed in this paper limits this portion in an efficient manner (see [14]). It also provides one-way protection against overloads: the traffic volume of the lower priority classes has little effect on calls of higher priority.

An attractive application of partial preemption control is the situation where light overload occurs. Partial preemption control greatly reduces the blocking probabilities of low priority calls while the preemption ratio remains extremely low (see Tables 1, 2 and 3). Thus it significantly outperforms the restricted access control scheme in terms of system throughput. With the partial preemption approach, preemption occurs only when the number of low priority calls in the system exceeds a certain threshold. This should be considered acceptable if the maximum number of simultaneously supported calls of that class is limited, as is the case with the restricted access control scheme.

A major concern with any preemption policy is the damage caused by preemption. First, such damage can be limited by setting a low threshold on the preemption ratio, which represents a very low risk of a call being interrupted. Second, the implementation of preemption policies in practice need not result in interrupted calls. We may temporarily "steal" bandwidth by selectively dropping cells (packets) from existing calls instead of interrupting a call. The temporary degradation of QoS may be negligible since preemption is a rare event. Such a "bandwidth stealing" implementation is especially promising in the Internet. For example, a customer (e.g., a university or a company) may purchase a certain amount of bandwidth from an Internet Service Provider (ISP) for Internet access. The customer can use a "bandwidth broker" [19] to enforce the QoS of real-time streams. The temporary "out of bandwidth" packets caused by preemption can then be marked by the egress router of the customer network by resetting the "A" bit (for assured service) or the "P" bit (for premium service) [12]. Such packets are treated as "best effort" packets and may be dropped by the ISP network.
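The following toy sketch illustrates the "bandwidth stealing" idea described above. It is an assumed, simplified Python fragment with names of our own choosing, not an implementation from the paper or from any router platform: instead of tearing a preempted call down, the egress marker simply clears the assured/premium marking on that flow's packets so the ISP treats them as best effort.

```python
# Toy sketch of bandwidth stealing at the customer egress (assumed helper names).
from dataclasses import dataclass

@dataclass
class Packet:
    flow_id: int
    assured: bool = False    # "A" bit for assured service
    premium: bool = False    # "P" bit for premium service

class EgressMarker:
    def __init__(self):
        self.preempted_flows = set()

    def preempt(self, flow_id):
        # Called by the admission controller instead of interrupting the call.
        self.preempted_flows.add(flow_id)

    def restore(self, flow_id):
        # Called when bandwidth becomes available again.
        self.preempted_flows.discard(flow_id)

    def mark(self, pkt: Packet) -> Packet:
        # Demote packets of preempted flows to best effort while bandwidth is "stolen".
        if pkt.flow_id in self.preempted_flows:
            pkt.assured = False
            pkt.premium = False
        return pkt
```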
Further discussion of admission control for Internet QoS can be found in [8, 9, 13]. The concept of soft services (e.g., Controlled-Load [11]) has been introduced in recent years for packet-switched Internet applications. Soft services do not require guaranteed QoS for individual application flows. Instead, QoS is assured at a reasonable level by applying admission control to bound the traffic load in the network. The partial preemption scheme carries the risk of preemption for low priority traffic, but it provides excellent protection against overload caused by low priority traffic. Thus partial preemption may serve the needs of soft services in the Internet while providing excellent overload protection.

References

1. G. L. Choudhury, K. K. Leung and W. Whitt, "Efficiently providing multiple grades of service with protection against overloads in shared resources", AT&T Technical Journal, pp. 50-63, July-August 1995.
2. S. K. Biswas and B. Sengupta, "Call admissibility for multirate traffic in wireless ATM networks", Proc. IEEE Infocom '97, 1997.
3. Rege and Sengupta, "A priority-based admission scheme for a multiclass queueing system", AT&T Technical Journal, Vol. 64, No. 7, pp. 1731-1753, 1985.
4. W. Wang and T. N. Saadawi, "Trunk congestion control in heterogeneous circuit switched networks", IEEE Transactions on Communications, Vol. COM-40, No. 7, pp. 1156-1161, 1992.
5. R. Cooper, Introduction to Queueing Theory, 2nd ed., Macmillan, New York, 1981.
6. V. T.-S. Shi, C. Wang and W. Perrizo, "The performance of a link with multipriority traffic", IEEE Transactions on Communications, Vol. 46, No. 6, pp. 743-746, 1998.
7. B. Kraimeche and M. Schwartz, "Circuit access control strategies in integrated digital networks", Proc. IEEE Infocom '84, pp. 230-235, 1984.
8. L. Breslau, E. Knightly, S. Shenker, I. Stoica and H. Zhang, "Endpoint Admission Control: Architectural Issues and Performance", Proc. ACM SIGCOMM 2000, pp. 57-69, 2000.
9. C. Cetinkaya and E. Knightly, "Egress Admission Control", Proc. IEEE INFOCOM 2000, 2000.
10. L. Zhang, S. Deering, D. Estrin, S. Shenker and D. Zappala, "RSVP: A New Resource ReSerVation Protocol", IEEE Network, Vol. 7, No. 5, pp. 8-18, Sept. 1993.
11. J. Wroclawski, "Specification of the Controlled-Load Network Element Service", Internet Draft, 1997.
12. X. Xiao and L. Ni, "Internet QoS: A Big Picture", IEEE Network, No. 2, pp. 8-18, 1999.
13. E. Knightly and N. Shroff, "Admission Control for Statistical QoS: Theory and Practice", IEEE Network, No. 2, pp. 8-18, 1999.
14. V. Shi and W. Perrizo, "Access control schemes and performance of a link with heterogeneous traffic", Technical Report NDSU-CS-TR-06, 1999.