End-to-End Performance Analysis with Traffic Aggregation
Tiziana Ferrari
Italian National Institute for Nuclear Physics (INFN-CNAF)
v.le Berti Pichat 6/2
I-40127 Bologna ITALY
Tiziana.Ferrari@cnaf.infn.it
Abstract- The provisioning of end-to-end services in the
presence of traffic aggregation is evaluated in the
framework of the Differentiated Services QoS
architecture. The effect of stream multiplexing on delay
and jitter-sensitive traffic is analyzed through an
experimental approach. Aggregation is evaluated in three
different scenarios: with different aggregate traffic loads,
with a variable number of flows multiplexed in the same
class, and with different packet sizes. Two different
scheduling algorithms are considered: Priority Queuing
and Weighted Fair Queuing.¹
Keywords: Quality of Service, End-to-end performance,
Differentiated Services
I. INTRODUCTION
The Differentiated Services (diffserv) QoS architecture
specified in RFC 2475 is based on the aggregation of streams
into classes and on the provisioning of QoS to the class
rather than to the single flow [1,2].
The per-class packet treatment is defined by the Per-Hop
Behavior (PHB), which describes the packet handling
mechanism a given class is subject to [3].
Packets are classified by access routers or hosts through
traffic filters: the class to which a packet is bound is
identified by a specific code in the diffserv field, called the DSCP
(DiffServ Code Point), as specified in RFC 2474.
Classification and marking are performed at the ingress of a given
domain, and only edge routers have access to the policy for
the binding between packets and classes, while core diffserv
nodes only manage classes, i.e. no per-flow information is
deployed in the core. This moves the network complexity
from the core to the edge.
In this paper we focus on one aspect of diffserv: the
provisioning of end-to-end services in the presence of traffic
aggregation. While the mixing of different streams into one
class is an inherent property of diffserv and is fundamental
for an improved scalability of the architecture, the
interference between packets of multiple flows, which are
treated in an undifferentiated way within a class, can have
an influence on the end-to-end performance of the single
stream.
While aggregation is relevant to diffserv, this issue does
not arise in QoS architectures like ATM and intserv [7,8,9],
which provide finely grained QoS by supporting quality of
service at the flow level: both the stream and the reservation
profile are advertised through a signaling protocol, so that
streams are treated independently.
The problem of per-flow performance under aggregation is a
topic that still needs further research for a better
understanding.¹ In this work we address the problem through
an experimental approach by running tests over a diffserv
test network.

¹ The results reported in this article are work in progress carried out by
the TERENA TF-TANT task force in the framework of the EC-funded
project QUANTUM [25].

Given a reference stream to which
measurement applies, the end-to-end performance of one
stream in the class is evaluated in different scenarios: for
different aggregation degrees, different loads and different
packet sizes.
In order to keep the end-to-end behavior of a flow in
accordance with the contract, policing, shaping or other
forms of traffic conditioning may be adopted. In this work
we call stream isolation the capability of a network to
preserve the original stream profile when the stream is
transmitted over one or multiple diffserv domains in an
aggregated fashion. The maximum traffic isolation degree is
comparable to the one achieved when each stream is carried
by a dedicated wire.
In this work we estimate the traffic isolation supplied in
different traffic and network scenarios in order to evaluate
the feasibility of the QoS architecture under test and to
identify the conditions for maximization of isolation.
II. TRAFFIC AGGREGATION
The behavior of a diffserv network is related to the type of
packet treatment in use, in particular to the scheduling
algorithm adopted. Given the large variety of schedulers, we
restrict our study to two well-known solutions: Weighted
Fair Queuing (WFQ) [16-21] and Priority Queuing (PQ)
[22,23,29]. We develop the study of PQ in detail and we
compare the performance of the two.
According to the WFQ algorithm, packets are scheduled in
increasing order of forwarding time, an attribute computed
for each packet upon arrival as a function of both the packet
size and the weight of the queue the datagram belongs to.
On the other hand, with PQ [8,9], every time the ongoing
transmission is over the priority queue is checked: it is
selected for transmission if it contains one or more packets,
and the transmission control is released only when the
priority queue is empty. While WFQ provides a fair
distribution of link capacity among queues, PQ is
starvation-prone.
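To make the difference concrete, the following sketch contrasts the two
service disciplines as described above. It is illustrative only: the function
and variable names are not taken from the paper's test-bed or from any router
implementation, and the WFQ part is deliberately simplified.

    # Illustrative sketch only: simplified PQ and WFQ selection logic.
    from collections import deque

    def pq_select(priority_queue, best_effort_queue):
        # Priority Queuing: the priority queue is served whenever it holds at
        # least one packet; other traffic is served only when it is empty.
        if priority_queue:
            return priority_queue.popleft()
        if best_effort_queue:
            return best_effort_queue.popleft()
        return None

    def wfq_finish_time(queue_state, packet_size, weight):
        # Simplified WFQ: each arriving packet is stamped with a finish
        # ("forwarding") time that grows with packet_size / weight; the
        # scheduler then transmits packets in increasing finish-time order.
        # (A full WFQ implementation also tracks a system virtual time.)
        queue_state["finish"] += packet_size / weight
        return queue_state["finish"]

    ef_queue, be_queue = deque(["EF-1", "EF-2"]), deque(["BE-1"])
    print(pq_select(ef_queue, be_queue))  # EF-1: EF wins while its queue is non-empty

The sketch makes the starvation property visible: pq_select never returns a
best-effort packet as long as the priority queue is backlogged, whereas the
WFQ finish time lets every queue obtain a share proportional to its weight.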
Given the difference between the two algorithms, their performance
is compared under different traffic scenarios to identify the
most suitable approach for the provisioning of delay, jitter
and packet loss guarantees. The goal is to identify
configuration guidelines for the support of services based on
the Expedited Forwarding (EF) PHB [6].
End-to-end performance measurement is applied to a single
stream in a class, which we call reference stream in what
follows. We assume that the class the reference stream
belongs to is assigned the EF PHB, since the goal of
configuration tuning is the limitation of one-way delay, jitter
and packet loss.
Nevertheless, results of this work can be applied to any
traffic class that is subject to PQ or WFQ.
Given a diffserv node, aggregation occurs when bundles of
streams coming from two or more different input interfaces
belong to the same PHB or PHB group and are sent to the
same output interface. It can happen that, in the long or short
term, depending on the packet rate of the streams, arrival
times of packets belonging to different bundles are close to
each other, so that a packet collision occurs. Collisions can
produce packet accumulation, i.e. instantaneously non-empty
queues, as illustrated in Fig. A. As stated in [6,10],
given a diffserv node, packet clustering in the queue can be
avoided if the minimum departure rate, i.e. the output rate of
a given traffic class, is strictly greater than the maximum
arrival rate, i.e. the maximum instantaneous input rate, when
computed over any time interval² greater than the packet time
at the configured service rate of the class. If this requirement
is met, then the distortion of inter-packet gaps is limited:
some waiting time is still introduced by the packet
transmission already in progress at the time the EF packet
arrives.
The maximum instantaneous arrival rate depends on the
number n of traffic bundles b_j. We assume that each bundle
injects a CBR stream of packets of constant size S at rate r_j
and that the configured maximum class rate is R, where
R >= sum of the r_j. If we also assume a constant line rate I for each
interface injecting a bundle, then the packet slot at rate I (S_I)
and at the configured rate R (S_R) are given by the following
equations:

    S_I = S / I    and    S_R = S / R        (1)

In the worst-case scenario n packets arrive at the same time,
one for each bundle. The corresponding maximum arrival rate
A_max during the packet slot S_R can be estimated in the
following way:

    R <= I (S_I < S_R):  A_max = (n S) / S_R = n R          case 1
    R > I  (S_R < S_I):  A_max = (n I S_R) / S_R = n I      case 2        (2)

In both cases the maximum arrival rate is a function of the
number of input bundles n: by making n arbitrarily large we
can exceed the configured departure rate, even if each
individual bundle injects well-shaped traffic.

Proper configuration of the departure rate can be difficult in
case 1, if a diffserv edge router collects EF traffic from a
large number of low-speed input streams. In this case, if the
output line rate is smaller than n*I, we can never guarantee
that the departure rate is greater than A_max. On the other
hand, case 2 applies in the presence of high-speed input and
output network trunks. In order to provide timely delivery,
the EF rate should be a minor fraction of the line rate of the
trunk that provides EF transit.

Fig. A: aggregation of streams and packet clustering in the output queue

During packet slot S_I the EF queue is instantaneously
non-empty. If EF traffic is serviced according to the priority
queuing policy, the queue is emptied in one go and EF traffic
leaves the diffserv node at line rate (by definition of the
priority queuing algorithm), i.e. at a rate which is much
higher than the aggregate input rate (the sum of the r_j). This
implies that the diffserv node introduces a distortion of the
EF aggregate despite the fact that the maximum arrival rate
is less than the departure rate and that each input bundle
injected a correctly shaped aggregate. The result is an output
burst at line rate which propagates to the next diffserv node
and which makes the probability of packet collision higher
when moving downstream, unless traffic conditioning
(shaping and/or policing) is applied. This implies that
conditioning should probably be applied at each aggregation
point in which the minimum departure rate can be exceeded
by the maximum arrival rate.

The occurrence of bursts contributes to an increase of
queuing delay and poses the problem of proper queue
dimensioning. On the other hand, shaping introduces an
overhead, in particular when run at high speed, and adds
distortion to the single stream when applied to an aggregate,
while policing could introduce packet loss in case of high
aggregation, even if the stream components are well shaped.

In the following sections we try to quantify the impact of EF
aggregation in the absence of shaping and policing. A set of
well-behaved CBR sources is used to load the network in the
presence of persistent congestion. The influence of
aggregation on end-to-end performance in terms of packet
loss, one-way delay and jitter is evaluated.

² We assume that during the whole time interval the queue under analysis is
non-empty.
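A small numeric sketch of equations (1) and (2) follows. The figures used
below are illustrative and are not taken from the test-bed configuration.

    # Sketch of equations (1)-(2): packet slots and worst-case arrival rate A_max.
    def packet_slot(packet_size_bits, rate_bps):
        return packet_size_bits / rate_bps          # S_I = S/I or S_R = S/R

    def max_arrival_rate(n, line_rate_I, class_rate_R):
        if class_rate_R <= line_rate_I:             # case 1: S_I < S_R
            return n * class_rate_R                 # A_max = n*S / S_R = n*R
        return n * line_rate_I                      # case 2: A_max = n*I

    print(packet_slot(64 * 8, 2e6))                 # S_R of a 64-byte packet at R = 2 Mbps (seconds)
    # Illustrative numbers: 40 input bundles on 10 Mbps interfaces and an EF
    # class rate of 1 Mbps give a worst-case arrival rate of 40 Mbps.
    print(max_arrival_rate(40, line_rate_I=10e6, class_rate_R=1e6) / 1e6, "Mbps")

The example makes the point of the paragraph above explicit: the worst-case
arrival rate grows linearly with n, so a sufficiently large number of bundles
exceeds any fixed departure rate.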
III. TEST SCENARIO
A. Connectivity
The diffserv test-bed interconnects 14 test sites as illustrated
in Fig. 2. The wide area network is partially meshed and is
based on CBR ATM VPs at 2 Mbps (ATM overhead
included). On each VP one PVC at full bandwidth is
configured and is used as a point-to-point connection
between two remote diffserv capable routers.
Fig. 2: experimental diffserv wide area network

Aggregation is produced in a multi-hop data path by injecting
several EF streams from different interfaces in each hop, as
illustrated in Fig. 3. UDP is the transport protocol adopted in
each test. The number of EF streams, the aggregate load and
the packet size of each individual stream are varied to see the
influence of each parameter on the end-to-end profile.

Specialized hardware, the SmartBits 200 by Netcom Systems,
is deployed: it supports packet time stamping in hardware for
an accurate delay and jitter estimation. The network is based
on CISCO 7200 routers (IOS experimental version
12.0(6.5)T7).

Fig. 3: EF aggregation in a wide area network set-up

B. Traffic generation

At each aggregation point (four in total) local area interfaces
at 10 or 100 Mbps inject a small amount of EF traffic. EF
data is then sent to the output interface, an ATM connection
at 2 Mbps. The EF load varies from test to test in the range
10-50% of the ATM line rate. Each node is permanently
congested through background UDP traffic.

IV. QOS METRICS

In order to begin characterizing EF behavior in router
implementations, we focused on two key QoS metrics: one-way
packet delay and instantaneous packet delay variation.
These are the key parameters for applications that have
stringent QoS demands. In order to have reproducible and
consistent measurements, we adopt the definitions for these
quantities that were developed in the IPPM working group of
the IETF [11].

Key to the IPPM definitions is the notion of wire time. This
notion assumes that the measurement device has an
observation post on the IP link: a packet arrives at a
particular wire time when its first bit appears at the
observation point, and it departs from the link at a particular
wire time when its last bit has passed the observation point.

One-way Delay is defined formally in RFC 2679 [12]. This
metric is measured from the wire time of the packet arriving
on the link observed by the sender to the wire time of the last
bit of the packet observed by the receiver. The difference of
these two values is the one-way delay.

Instantaneous Packet Delay Variation (IPDV) is formally
defined by the IPPM working group draft [13]. It is based on
one-way delay measurements and it is defined for
(consecutive) pairs of packets; a singleton IPDV measurement
requires two packets. If we let Di be the one-way delay of the
i-th packet, then the IPDV of the packet pair is defined as
Di - Di-1. According to common usage, IPDV-jitter is
computed according to the following formula:

    IPDV-jitter = | IPDV |        (3)

In our tests we assume that the drift of the sender clock and
receiver clock is negligible given the time scales of the tests
discussed in this article. In the following we will refer to
IPDV-jitter simply as ipdv. It is important to note that while
one-way delay requires clocks to be synchronized, or at least
the offset and drift to be known so that the times can be
corrected, the IPDV computation cancels the offset since it is
the difference of two time intervals. If the clocks do not drift
significantly in the time between the two time interval
measurements, no correction is needed.

Maximum Burstiness: we define maximum burstiness as the
maximum number of packets instantaneously stored in a
queue. Maximum burstiness can be expressed in packets or
bytes.

Packet-loss Percentage: the percentage of packets lost during
the whole duration of a stream.
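The definitions above translate directly into a short computation. The sketch
below is illustrative: the function names and the delay samples are made up and
are not part of the paper's measurement tooling.

    # Sketch of the metrics defined above: IPDV_i = D_i - D_(i-1), ipdv = |IPDV_i|.
    def ipdv_series(one_way_delays_ms):
        return [b - a for a, b in zip(one_way_delays_ms, one_way_delays_ms[1:])]

    def ipdv_jitter(one_way_delays_ms):
        return [abs(v) for v in ipdv_series(one_way_delays_ms)]

    # Any constant clock offset between sender and receiver cancels in the
    # difference, which is why IPDV does not require synchronized clocks.
    delays_ms = [108.3, 108.7, 108.5, 109.1]    # made-up one-way delay samples
    print(ipdv_series(delays_ms))               # approximately [0.4, -0.2, 0.6]
    print(ipdv_jitter(delays_ms))               # approximately [0.4, 0.2, 0.6]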
V. METHODOLOGY

In this section we describe the measurement methodology
deployed for the tests described in the following sections.

The SmartBits 200 (firmware v.6.50) was used as a single
measurement point supporting packet time stamping in
hardware and providing a precision of 100 nsec. It was
equipped with two ML-7710 10/100 Ethernet interfaces, so
that Expedited Forwarding traffic could be originated from a
given network card and received by the second one. In this
way precise one-way delay measures can be gathered, since
they are not affected by clock synchronization errors. This is
the EF measurement approach adopted for all the
experiments presented in this paper.

One-way delay computation is derived from cut-through
latency measures collected through the application
SmartWindow (v. 6.53) according to RFC 1242 [14]: one-way
delay can be computed from cut-through latency by adding
the transmission time of the packet, which is constant for a
given packet size, as explained in Fig. 4.

Fig. 4: relationship between cut-through latency and one-way delay

In order to analyze the time interaction between best-effort
and expedited forwarding traffic even in the presence of
nodal congestion, constant bit rate UDP unidirectional
streams were generated through the application called mgen
3.1 [24]. While EF traffic occupies a relatively small portion
of line capacity, best-effort streams were needed to add
background traffic.

The maximum burstiness is evaluated according to the
following methodology. For a given test session, in each
router on the data path the occurrence of tail drop in the EF
queue is monitored. The length Q of the priority queue is
progressively increased until no packet loss is observed. The
maximum burstiness is assumed to be equal to Q if, during a
time interval of the order of magnitude of a test session, the
following conditions apply: when the queue size is Q-1 tail
drop occurs³, while for a queue size of Q bytes no packet loss
is observed. In our tests the burst size in bytes can be easily
derived from the equivalent metric expressed in packets, as
for each test the EF packet size is constant and known.

³ Packet loss is relevant to burst measurement when due to tail drop.
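Both procedures described in this section can be summarized in a few lines.
The sketch below is illustrative only: the helper names are invented, and the
loss-probe callback stands in for an actual test run; it is not part of the
real test-bed tooling.

    # Sketch of the methodology above. One-way delay is obtained by adding the
    # (constant, size-dependent) transmission time to the measured cut-through
    # latency; maximum burstiness is the smallest EF queue length with no tail drop.
    def one_way_delay_ms(cut_through_latency_ms, packet_size_bytes, line_rate_bps):
        tx_time_ms = packet_size_bytes * 8 / line_rate_bps * 1000.0
        return cut_through_latency_ms + tx_time_ms

    def max_burstiness(tail_drop_observed, q_max=1000):
        # tail_drop_observed(q) -> True if a run with EF queue length q loses packets
        for q in range(1, q_max + 1):
            if not tail_drop_observed(q):
                return q
        return None

    print(one_way_delay_ms(108.0, 64, 2e6))   # e.g. 108.256 msec for a 64-byte packet at 2 Mbps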
VI. EF LOAD
We varied the EF load, i.e. the overall EF rate injected in
the network. The EF load varies in the range [10-50]%
of the line rate as indicated in TABLE I.
TABLE I: test parameters of the EF load test

            EF (UDP) - Priority Queuing                         BE (UDP)
  Packet size   Num of           Aggregate load      Load      Num of    Frame size
  (bytes)       streams          (% line rate)       (Kbps)    streams   (bytes)
  64            10 per site      [10, 50]            > 2000    20        Real [0, 1500]
                (40 in total)
According to the test results, burstiness is approximately a
linear function of the EF load, as indicated in Fig. 5, which
plots the EF burst size versus the aggregate load. In this
test the maximum burst size seen was up to 35 EF packets.
Fig. 5: EF burstiness as a function of the EF load
For each EF load we have set the EF queue length so that
the whole maximum burst is completely stored in the EF
queue, and we have analyzed the impact of this setting on
delay and IPDV. This means that for each point of the curve
a different EF queue size, which depends on the
corresponding burstiness, is configured.

Fig. 6(a) plots the one-way delay distribution of the
reference stream for different load values. Delay is
expressed in delay units, i.e. 108.14 msec. The increasing
EF queue length that is configured to avoid packet loss has
a small effect on one-way delay: one-way delay slightly
decreases, as Fig. 6(a) shows.

In the worst-case scenario (burst size equal to 35 packets)
the queuing delay introduced by the priority queue is up to
14.84 msec. However, if the presence of long EF bursts is
occasional, the effect is not very visible from the distribution
curves; nevertheless it could have a negative impact on the
end-to-end application performance.

Fig. 6(b) plots the IPDV frequency distribution; IPDV is
expressed in transmission units (TX units), i.e. as a multiple
of the time needed to transmit one EF packet, which is equal
to 0.424 msec in this scenario. The curves show that in
presence of increasing EF load values, i.e. of increasing EF
burstiness and consequently of longer EF queues, the IPDV
distribution gets slightly more spread around the average
and the maximum IPDV observed increases as well.
A possible interpretation of the fact that for larger load
values packets tend to cluster into longer bursts could be
the following. Best-effort packets are transmitted only
when the priority queue is empty. This implies that the
longer the burst, the greater the number of packets which
are transmitted at the same time by the priority queue
without being interleaved by BE packets. This also implies
that for a greater number of EF packets the delay
experienced in a PQ does not considerably differ from the
corresponding delay of the previous and subsequent packet
in the burst, i.e. IPDV is more constant.
Fig. 6: one-way delay distribution (a) and IPDV distribution (b) for different burst sizes
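The worst-case figure quoted above can be reproduced directly: with Priority
Queuing, a packet that finds a full burst ahead of it waits for the
transmission of every packet in the burst, i.e. roughly the burst size times
the 0.424 msec EF transmission unit of this scenario. A minimal sketch, with
illustrative names:

    # Sketch: worst-case priority-queue delay when a whole burst is queued ahead,
    # i.e. burst size times the EF packet transmission time (TX unit).
    def worst_case_pq_delay_ms(burst_packets, tx_unit_ms=0.424):
        return burst_packets * tx_unit_ms

    print(worst_case_pq_delay_ms(35))   # about 14.84 msec, the value reported in this section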
VII. NUMBER OF EF STREAMS
As indicated in TABLE II, the end-to-end performance was
tested for a variable number of EF streams and a constant
EF load equal to 32% of the line rate.
TABLE II: test parameters (burstiness and number of EF streams)

            EF (UDP) - Priority Queuing                         BE (UDP)
  Packet size   Num of        Aggregate load         Load      Num of    Frame size
  (bytes)       streams       (% line rate)          (Kbps)    streams   (bytes)
  64            1 to 100      32                     > 2000    20        Real [0, 1500]
Burstiness is only moderately influenced by the number of
EF streams being part of the class as indicated in Fig. 7:
While burstiness greatly increases when the number of
streams varies from 1 to 8, from 8 to 100 burstiness reaches a
stable point and it oscillates moderately.
While the effect of the number of streams on delay and IPDV
is small, the end-to-end performance of a single stream is
largely different from the corresponding performance in case
of two or more flows.
Fig. 7: EF burstiness as a function of the number of EF
streams
VIII. EF PACKET SIZE
In this aggregation scenario the overall rate is constant and the
number of EF streams is also constant and equal to 40, while
we varied the payload size of the packets in a typical range
used for IP telephony, namely [40, 80, 120, 240] bytes, as
summarized in TABLE III.
TABLE III: test parameters (burstiness and EF packet size)

            EF (UDP) - Priority Queuing                            BE (UDP)
  Payload size        Num of      Aggregate load        Load      Num of    Frame size
  (bytes)             streams     (% line rate)         (Kbps)    streams   (bytes)
  40, 80, 120, 240    40          32                    > 2000    20        Real [0, 1500]
Burstiness in bytes increases gradually from 1632 bytes, with
40-byte-payload packets, to 1876 bytes, with 240-byte-payload
packets. If we look at the same burstiness expressed in
packets, we see that it decreases: the reference stream rate
is constant and, as a consequence, if the packet size increases
the packet rate decreases.
Fig. 8(a) and 8(b) plot the one-way delay and IPDV
distribution for different EF packet sizes. In this experiment
one delay unit corresponds to 113.89 msec, while the
transmission unit TX unit is equal to 0.424 msec.
TABLE IV: average one-way delay and jitter for different EF packet sizes

  EF payload size (byte)    40        80        120       240
  Avg delay (msec)          132.629   134.834   141.325   144.715
  Avg IPDV (msec)           8188      6608      4879      5416
Average one-way delay increases with the EF packet size
while the average IPDV decreases (TABLE IV). One-way
delay is distributed in an interval which is inversely
proportional to the packet size: With small EF packets the
one-way delay range can reach lower values, while the
maximum does not greatly change from case to case.
We can conclude that in this aggregation scenario the
difference in performance is primarily a function of the
number of packets sent per second (and of the background
packet size) rather than of the EF packet size itself. In fact,
given an overall constant rate, the increase in packet size
produces a decrease in the number of packets sent per
second and the packet-per-second rate has an effect on the
TX queue composition.
Fig. 8: one-way delay distribution (a) and IPDV frequency distribution (b)
in case of burstiness produced for different EF packet-size streams
For example, for a BE rate of 214 pack/sec and an EF rate
of 720 pack/sec (40 byte packets), 1 BE packet is sent only
after 3.3 EF packets, while 1 BE packet is sent every 1.1
EF packets if the EF rate is 240 pack/sec (240 byte
packets). The resulting composition of the TX queue and
the corresponding queuing delay are very different in the
two cases (B representing a BE packet, E representing an
EF packet):

  - EF rate (240 bytes): 240 pack/sec, TX queue = BEBEB
    queuing time = 1.2 * 2 + 4.6 * 3 = 16.2 msec
  - EF rate (40 bytes): 720 pack/sec, TX queue = BEEEB
    queuing time = 0.424 * 3 + 4.6 * 2 = 11.747 msec
The resulting difference in queuing delay (approximately
the transmission time of one background packet) is
amplified on the data path in each congested TX queue.
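The arithmetic of this example can be captured by a small helper that sums the
transmission times of the packets ahead in the TX queue. The sketch below uses
the per-packet times quoted in the text for the 2 Mbps trunk (4.6 msec for a
background packet, 0.424 msec and about 1.2 msec for 40- and 240-byte EF
packets); the dictionary and function names are illustrative.

    # Sketch of the worked example: queuing delay = sum of the transmission times
    # of the packets ahead in the TX queue (values in msec, as quoted in the text).
    TX_MS = {"B": 4.6, "E40": 0.424, "E240": 1.2}

    def queuing_time_ms(tx_queue):
        return sum(TX_MS[p] for p in tx_queue)

    print(queuing_time_ms(["B", "E240", "B", "E240", "B"]))  # 240-byte case: 1.2*2 + 4.6*3 = 16.2
    print(queuing_time_ms(["B", "E40", "E40", "E40", "B"]))  # 40-byte case: 0.424*3 + 4.6*2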
While delay is more limited in case of small EF packets, as
an effect of the increasing rate of packets per second,
IPDV performance improves with large EF packet sizes, as
illustrated in Fig. 8(b). IPDV is distributed in a larger
range for smaller packet sizes. This could be explained by
the burst length: The presence of longer bursts with PQ
reduces IPDV since packets being part of the same burst
experience a similar queuing delay and consequently delay
variation in a burst is small.
IX. COMPARISON OF WFQ AND PQ
According to the previous tests, EF load is the primary
parameter on which burstiness depends: the aggregation
scenario with variable EF load was evaluated by replacing
PQ with WFQ and the results were compared.
As usual, the load varies in the range [10, 50]% of the line
rate. WFQ performance is measured for two different
packet sizes, 40 bytes and 512 bytes, since for small EF
packet sizes the WFQ behavior is closer to PQ [28].
As Fig. 9 shows, burstiness with WFQ in the presence of fairly
large EF packet sizes is almost negligible. As expected, with
WFQ the formation of large EF bursts is prevented since,
unlike with PQ, EF packets are interleaved by BE packets
and an EF burst is not transmitted as is downstream. While
one-way delay limitation is important, and PQ supplies a
more timely delivery than WFQ, aggregation under WFQ
produces a more limited burstiness.
With shorter packet sizes (e.g. 40 bytes like in this test),
WFQ converges to a PQ algorithm, since with WFQ short
packets get preferential treatment (the forwarding time
computed by WFQ is proportional to the packet size).

Fig. 9: EF burstiness with PQ and WFQ (40 and 512 byte EF packets)
X. CONCLUSIONS
The study of the effect of aggregation on end-to-end
performance is fundamental to evaluate the capability of a
diffserv network to preserve the original stream profile, in
particular for delay and jitter-sensitive traffic, and to evaluate
the need for traffic conditioning mechanisms like policing and
shaping to limit the disruptive interaction between streams
within a given class. Experimental results obtained in the
presence of EF aggregation (without traffic conditioning)
show that the limitation of the EF load and of the aggregation
degree, and the presence of small packet sizes, greatly
contribute to minimizing the occurrence of occasional bursts.
Burstiness is a linear function of the class load. While the
increase in the EF queue size, needed to store traffic
bursts, has a minor effect on one-way delay, IPDV is
clearly minimized in case of limited load; even in this
case the IPDV distribution is spread over a larger range of
values, and a few packet samples still experience high
IPDV.
Burstiness is proportional to the number of EF streams: it
increases rapidly when the number of streams varies in
the range [1, 8], but then it quickly converges to a stable
value.
WFQ is less burstiness-prone if the EF departure rate is
approximately equal to the arrival rate, while it converges
to PQ when the EF queue weight – and consequently the
corresponding service rate – increases. With WFQ a
tradeoff between one-way delay minimization and
burstiness avoidance has to be identified.
9
10
[11
[12]
[13]
[14]
[15]
[16]
[17]
[18]
[19]
[20]
[21]
[22]
[23]
[24]
[25]
[26]
ACKNOWLEDGEMENTS
The work presented in this paper is the result of the joint
efforts of the TF-TANT task force. We thank Cisco
Systems, IBM and Netcom Systems for the hardware loan
as well as for the valuable technical support provided.
Special acknowledgements go to Jordi Mongay, Hamad el
Allali and Andrea Chierici for their collaboration during
the test phase.
REFERENCES
[1] Differentiated Services, http://www.ietf.org/html.charters/diffserv-charter.html
[2] K. Nichols, V. Jacobson, L. Zhang, A Two-bit Differentiated Services Architecture for the Internet.
[3] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, W. Weiss, RFC 2475: An Architecture for Differentiated Services.
[4] Y. Bernet, A. Smith, S. Blake, A Conceptual Model for Diffserv Routers; diffserv draft, work in progress.
[5] K. Nichols, S. Blake, F. Baker, D. Black, RFC 2474: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers.
[6] V. Jacobson, K. Nichols, K. Poduri, RFC 2598: An Expedited Forwarding PHB.
[7] J. Wroclawski, RFC 2210: The Use of RSVP with IETF Integrated Services.
[8] J. Wroclawski, RFC 2211: Specification of the Controlled-Load Network Element Service.
[9] S. Shenker, C. Partridge, R. Guerin, RFC 2212: Specification of Guaranteed Quality of Service.
[10] V. Jacobson, K. Nichols, K. Poduri, The 'Virtual Wire' Behavior Aggregate, IETF draft, work in progress, March 2000.
[11] IP Performance Metrics; http://www.ietf.org/html.charters/ippm-charter.html
[12] G. Almes, S. Kalidindi, M. Zekauskas, RFC 2679: One-way Delay Metric for IPPM.
[13] C. Demichelis, P. Chimento, Instantaneous Packet Delay Variation Metric for IPPM; ippm draft, work in progress.
[14] S. Bradner, RFC 1242: Benchmarking Terminology for Network Interconnection Devices.
[15] W. Prue, J. Postel, RFC 1046: A Queuing Algorithm to Provide Type-of-Service for IP Links.
[16] H. Zhang, Service Disciplines For Guaranteed Performance Service in Packet-Switching Networks.
[17] J.C.R. Bennett, H. Zhang, Why WFQ Is Not Good Enough For Integrated Services Networks.
[18] S. Floyd, V. Jacobson, Link-sharing and Resource Management Models for Packet Networks; ACM Transactions on Networking, Vol. 3, No. 4, Aug. 1995.
[19] I. Wakeman, A. Ghosh, J. Crowcroft, Implementing Real Time Packet Forwarding Policies using Streams.
[20] A. Demers, S. Keshav, S. Shenker, Analysis and simulation of a fair queueing algorithm; Internetworking: Research and Experience, Vol. 1, 1990.
[21] A. Parekh, R. Gallager, A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case; IEEE/ACM Transactions on Networking, Vol. 1, No. 3, June 1993.
[22] K. L. E. Law, The bandwidth guaranteed prioritized queuing and its implementations; IEEE Global Telecommunications Conference (GLOBECOM '97), Vol. 3, 1997, pp. 1445-1449.
[23] R. Rönngren, R. Ayani, A comparative study of parallel and sequential priority queue algorithms; ACM Trans. Model. Comput. Simul. 7, 2 (Apr. 1997), pp. 157-209.
[24] The Multi-Generator (MGEN) Toolset, Naval Research Laboratory (NRL), http://manimac.itd.nrl.navy.mil/MGEN/
[25] The Joint DANTE/TERENA Task Force TF-TANT; http://www.dante.net/tf-tant/
[26] Quantum Project (the Quality Network Technology for User-Oriented Multi-Media); http://www.dante.net/quantum/
[27] T. Ferrari (editor), Differentiated Service Experiment: Report; TF-TANT interim report Jun '99 - Sep '99, http://www.cnaf.infn.it/~ferrari/tfng/ds/del-rep1.doc
[28] T. Ferrari, P. Chimento, A Measurement-based Analysis of Expedited Forwarding PHB Mechanisms, IWQoS 2000, in print.
[29] T. Ferrari, G. Pau, C. Raffaelli, Priority Queuing Applied to Expedited Forwarding: a Measurement-Based Analysis, submitted to QofIS 2000.