Stein Gjessing
University of Oslo, NORWAY
e-mail: steing@ifi.uio.no
http://www.ifi.uio.no/~steing
• A simple RPR model written in the programming language Java:
  – Class Node         // single-direction node
  – Class DualNode
  – Class Buffer       // several needed in each node
  – Class Link         // one (out) for each single Node
  – Class Packet       // new one for each packet sent
  – Class Application  // generating system load, etc.
  – Class Kernel
  – Class Unit         // simulation environment
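The classes listed above might look roughly like this in Java. Only the class names and the comments come from the slide; every field, constructor, and method below is an illustrative assumption:

```java
// Hypothetical skeleton of the simulator's class structure.
// Class names are from the slide; all members are guesses.

class Packet {                 // new one for each packet sent
    final int source, destination, sizeBytes, priority;
    Packet(int source, int destination, int sizeBytes, int priority) {
        this.source = source; this.destination = destination;
        this.sizeBytes = sizeBytes; this.priority = priority;
    }
}

class Buffer {                 // several needed in each node (insertion/transit)
    private final java.util.ArrayDeque<Packet> q = new java.util.ArrayDeque<>();
    void put(Packet p) { q.addLast(p); }
    Packet get() { return q.pollFirst(); }
    boolean isEmpty() { return q.isEmpty(); }
}

class Link {                   // one (out) for each single Node
    Node downstream;
}

class Node {                   // models one direction of the dual ring
    final int id;
    final Buffer fastInsertion = new Buffer();
    final Buffer normalInsertion = new Buffer();
    Link out;
    Node(int id) { this.id = id; }
}

class DualNode {               // one single-direction Node per ring direction
    final Node inner, outer;
    DualNode(int id) { inner = new Node(id); outer = new Node(id); }
}
```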
• RPR model status as of May 14, 2001:
  – Dual rings with shortest-path forwarding
  – Two priority levels with two sets of buffers
  – Absolute priority for the highest (provisioned) level
  – Choice of preemption (without packet loss (½ K))
  – Cut-through (store & forward is very easy to implement)
  – Parameters:
    » No. of nodes, wire length/wire latency, bandwidth
  – Programmable (in Java) load (Class Application), with destination and packet size set individually for each packet sent
  – Simple statistics (Class Reporter)
  – No flow control yet
[Figure: single-direction node architecture. A fast transit path and a normal transit path run from the stripper( ) to the outputSelector( ). Synchronous (provisioned) application logic feeds the fast insertion buffer via the priority sink; asynchronous application logic feeds the normal insertion buffer via the normal sink. Traffic is put into the correct single node depending on shortest path.]
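Given the absolute-priority rule on the status slide, the outputSelector( ) decision in the figure can be sketched as follows. The queue names and exact selection order are assumptions, not the simulator's actual code:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of the outputSelector( ) decision from the node figure:
// fast (provisioned) traffic gets absolute priority over normal traffic,
// and transit traffic is assumed to win over same-priority insertion traffic.
class OutputSelector {
    final Queue<String> fastTransit = new ArrayDeque<>();     // high-prio, passing through
    final Queue<String> fastInsertion = new ArrayDeque<>();   // high-prio, from this node
    final Queue<String> normalTransit = new ArrayDeque<>();   // low-prio, passing through
    final Queue<String> normalInsertion = new ArrayDeque<>(); // low-prio, from this node

    // Pick the next packet for the outgoing link; fast traffic always wins.
    String next() {
        if (!fastTransit.isEmpty())   return fastTransit.poll();
        if (!fastInsertion.isEmpty()) return fastInsertion.poll();
        if (!normalTransit.isEmpty()) return normalTransit.poll();
        return normalInsertion.poll(); // null if all four queues are empty
    }
}
```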
• 16 nodes (numbered 0–15), dual rings
• 250 microsec. cable between each node
  – includes one node bypass latency
  – (~50 km between each node)
• 1 Gbyte/sec bandwidth (= 10 Gbit/sec)
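These parameters imply the latency figures used throughout the talk. A quick sanity check (assuming, as the slides' round numbers suggest, that 16K means 16,000 bytes, ½ K means 512 bytes, and 1 Gbyte/sec = 1000 bytes per microsecond):

```java
// Back-of-envelope check of the latency numbers quoted in the talk.
// Assumes 1 Gbyte/sec = 1000 bytes/us and that a high-priority packet can be
// blocked by at most one maximum-size low-priority packet per hop.
class RprLatency {
    static final int HOPS = 8;                 // node 7 to node 15
    static final double LINK_LATENCY_US = 250; // per-hop cable latency (incl. node bypass)

    // Propagation-only latency: 8 * 250 us = 2000 us = the 2 ms minimum.
    static double minLatencyUs() { return HOPS * LINK_LATENCY_US; }

    // Worst-case added latency: one maximal blocking packet on every hop.
    static double maxAddedLatencyUs(int blockingPacketBytes) {
        double serializationUs = blockingPacketBytes / 1000.0; // 1000 bytes per us
        return HOPS * serializationUs;
    }
}
```

With these assumptions, maxAddedLatencyUs gives about 12.8 us for 1600-byte "IP packets", 128 us for 16K jumbo packets, and about 4.1 us for ½ K preemption segments, matching the "theoretically between 0 and 13 / 128 / 4.1 us" figures on the conclusion slides.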
• Scenario A – Random receiver
  – Overloaded system – 10 Gbit/sec/link
  – Three background packet sizes: 1600, 16K, and 520 bytes
• Scenario B – Hot receiver
  – Partly highly loaded system – 10 Gbit/sec/link
  – Three background packet sizes: 1600, 16K, and 520 bytes
Simulation Scenario A – Random receiver

[Figure: the 16-node dual ring. Random traffic runs between all nodes: 30% of the bandwidth is provisioned for small high-priority packets, and the remaining 70% is overloaded with large low-priority packets. A stream runs from node 7 to node 15; only the traffic streamed from 7 to 15 is logged.]
Simulation Scenario B – Hot receiver

[Figure: the 16-node dual ring. All traffic from all nodes is sent to node 15, the hot receiver: 30% of the bandwidth is provisioned for small high-priority packets, 70% carries large low-priority packets. The links into node 15 have close to 100% utilization, and the last link to 15 is almost fully used. A stream runs from node 7 to node 15; only the traffic streamed from 7 to 15 is logged.]
• Latency / jitter
• Streaming small high-priority packets (80 bytes including header) from node 7 to node 15
  – 8 hops, ~400 km distance = 2 ms min. latency
• 2 us. between packets
• 125 us. between packets (TDM frame interval)
• Load distribution
  – 30% bandwidth: high-priority small packets
  – 70% bandwidth: low priority
    » "IP packets" (1600 bytes)
    » Jumbo packets (16K)
    » Jumbo packets with preemption (½ K)
• A. Random receiver
  – network overloaded
• B. Hot receiver (node 15)
  – Almost full utilization of the last link into the hot receiver (i.e. a more lightly loaded system than A, but not easy to get comparable load in all cases)
• Packet latency (and jitter) in a stream of small (80 byte) high-priority packets
• How much delay/jitter is caused by other packets blocking?
• Delay caused by
  – Low-priority packets on their way out (mostly)
  – Other high-priority packets (also)
• Single runs of 20 ms (statistics from 10,000 packets)
• No confidence intervals etc.
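The conclusion slides quote min, max, mean, median and 0.1%/1% tail values over the 10,000 logged packets per run. A minimal sketch of how such statistics can be computed; the slides only name Class Reporter, so this implementation is an assumption:

```java
import java.util.Arrays;

// Hypothetical reporter computing the statistics quoted on the conclusion
// slides (mean, and the "x% larger than ..." tail values) from logged latencies.
class Reporter {
    static double mean(double[] latencies) {
        double sum = 0;
        for (double x : latencies) sum += x;
        return sum / latencies.length;
    }

    // Value such that `fraction` of the samples lie strictly above it,
    // e.g. fraction = 0.01 gives the "1% larger than ..." figure.
    static double upperTail(double[] latencies, double fraction) {
        double[] sorted = latencies.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil((1.0 - fraction) * sorted.length) - 1;
        return sorted[Math.max(0, index)];
    }
}
```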
• Traffic from all nodes to all nodes (random destination)
• All links full all the time
• Measuring the high-prio. stream from 7 to 15 with 2 us. or 125 us. between packets
• Background traffic is
  – 30% high-prio 80-byte packets (provisioned) and 70% low-prio packets: 3 sub-scenarios with 3 packet sizes:
    » A1. 1600-byte "IP packets", or
    » A2. 16K jumbo packets, or
    » A3. 16K jumbo packets with preemption (½ K)
Scenario A1:
Random "IP packets" background
• Random background traffic with
  – 30% bandwidth high-prio small packets
  – overloaded with "IP packets" (1600 bytes)
• Streaming from node 7 to node 15
  – 8 hops, ~400 km distance = 2 ms min latency
  – 2 us between packets
  – 125 us. between packets
A1. Latency
Streaming high-prio. small packets 8 hops (400 km, 2 ms)
with random overloaded "IP packets" (1600 byte) background

[Chart: latency of individual packets (2 us. apart); y-axis 2000–2013 microsec.]
A1. Latency
Streaming small packets 8 hops with overloaded
"IP packets" background. More detailed sample.

[Chart: latency of individual packets (2 microsec. apart); y-axis 2000–2013 microsec.]
A1. Latency
Streaming small packets 8 hops with overloaded
"IP packets" background. 125 us. between packets.

[Chart: latency of individual packets (125 us. apart); y-axis 2000–2012 microsec.; mean and median marked.]
A1. Conclusion:
Streaming small high-prio. packets
with "IP packets" overloaded background
• Added latency between 2 and 12 us. (going 8 hops, 400 km, 2 ms)
• Theoretically added latency between 0 and 13 us.
• Max 11.7 us. added latency
• Min. 1.5 us. added latency
• 0.1%: more than 11 us. added latency
• 1%: more than 10 us. added latency
• Mean and median: 6.4 us. added latency
• Max jitter almost as large as total latency variation
Scenario A2:
Random Jumbo packets background
• Random background traffic with
  – 30% bandwidth high-prio small packets
  – overloaded with Jumbo packets (16K bytes)
• Streaming from node 7 to node 15
  – 8 hops, ~400 km distance = 2 ms min latency
  – 2 us between packets
  – 125 us. between packets
A2. Latency
Streaming small high-prio. packets 8 hops (400 km, 2 ms)
with random overloaded Jumbo-packets (16K) background

[Chart: latency of individual packets (2 us apart; one jumbo packet is 16 us); y-axis 2000–2120 microsec.]
A2. Latency
Streaming small high-prio. packets 8 hops with
random overloaded Jumbo-packets (16K) background (details)

[Chart: latency of individual packets; y-axis 2000–2110 microsec.]
A2. Latency
Streaming small high-prio. packets 8 hops with
random overloaded Jumbo-packets (16K) background (more details)

[Chart: latency of individual packets (2 us. apart; one jumbo packet is 16 us); y-axis 2020–2100 microsec.]
A2. Latency
Streaming small high-prio. packets 8 hops with random
overloaded Jumbo-packets (16K) background. 125 us. stream.

[Chart: latency of individual packets (125 us. apart); y-axis 2000–2110 microsec.; mean and median marked.]
A2. Conclusion:
Streaming small high-prio. packets
with Jumbo packets overloaded background
• Added latency between 20 and 100 us.
• Theoretically between 0 and 128 us.
• Min: 14 us.
• Max: 106.5 us.
• 0.1% larger than 104 us.
• 1% larger than 96.2 us.
• 10% larger than 81.4 us.
• Mean and median: 62 us.
• Max jitter about half of total latency variation
Scenario A3.
Background traffic with Preemption
• Random background traffic with
  – 30% bandwidth high-prio small packets
  – 70% (overloaded) with Jumbo packets with preemption (slide in at every ½ K)
• Streaming from node 7 to node 15
  – 8 hops, ~400 km distance = 2 ms min latency
  – 2 us between packets
  – 125 us. between packets
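The preemption idea ("slide in at every ½ K") can be sketched as follows: a low-priority jumbo packet is sent in ½ K segments, and a waiting high-priority packet may slide in at any segment boundary, so it waits at most one 512-byte segment per hop. Everything below except the ½ K granularity is an illustrative assumption:

```java
// Hypothetical sketch of preemption without packet loss: the jumbo packet is
// transmitted in 512-byte (½ K) segments, and a high-priority packet that
// arrives mid-transmission slides in at the next segment boundary.
class PreemptiveSender {
    static final int SEGMENT_BYTES = 512;

    // Returns the transmission order on the link. highPrioArrivesAfterBytes
    // says how far into the jumbo packet the high-priority packet shows up.
    static java.util.List<String> send(int jumboBytes, int highPrioArrivesAfterBytes) {
        java.util.List<String> order = new java.util.ArrayList<>();
        boolean highPrioSent = false;
        for (int sent = 0; sent < jumboBytes; sent += SEGMENT_BYTES) {
            if (!highPrioSent && sent >= highPrioArrivesAfterBytes) {
                order.add("high-prio");    // slides in at a ½ K boundary
                highPrioSent = true;
            }
            order.add("jumbo-segment");    // next 512 bytes of the jumbo packet
        }
        if (!highPrioSent) order.add("high-prio");
        return order;
    }
}
```

At 1 Gbyte/sec a 512-byte segment takes about 0.5 us, which is why the A3 and B3 slides bound the per-hop blocking at roughly half a microsecond.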
A3. Latency
Streaming small high-prio. packets 8 hops (400 km, 2 ms) with
overloaded random background of preemptable (½ K) Jumbo-packets

[Chart: latency of individual packets (2 us apart); y-axis 2000–2005 microsec.]
A3. Latency
Streaming small high-prio. packets 8 hops with overloaded
background of preemptable (½ K) Jumbo-packets (details)

[Chart: latency of individual packets (2 us. apart); y-axis 2000–2005 microsec.]
A3. Latency
Streaming small high-prio. packets 8 hops with overloaded
background of preemptable (½ K) Jumbo-packets. 125 us. stream.

[Chart: latency of individual packets (125 us. apart); y-axis 2000–2004 microsec.]
A3. Conclusion:
Streaming small high-prio. packets
with preemptable overloaded background
• Added latency between 0.5 and 4 us.
• Theoretically between 0 and 4.1 us.
• Min: 0.5 us.
• Max: 4.05 us.
• 0.1% larger than 3.7 us.
• 1% larger than 3.3 us.
• 10% larger than 2.8 us.
• Mean and median: 2.15 us.
• Max jitter almost as large as total latency variation
• Traffic from all nodes to the hot receiver, node 15
• The last links into the receiver are almost fully utilized (but can differ between the three cases B1, B2 and B3)
• Measuring the high-priority stream from 7 to 15 with 2 us. or 125 us. between packets
• Background is
  – 30% high-prio 80-byte packets (provisioned) and 70% low-prio packets: 3 sub-scenarios with 3 different packet sizes:
    » B1. 1600-byte "IP packets", or
    » B2. 16K jumbo packets, or
    » B3. 16K jumbo packets with preemption (½ K)
Scenario B1.
Hot receiver, "IP packets" background
• Hot receiver (15) background traffic with
  – 30% bandwidth high-prio small packets
  – 70% "IP packets" (1600 bytes)
  – Almost full utilization of the last links to 15
• Streaming from node 7 to node 15
  – 8 hops, ~400 km distance = 2 ms min latency
  – 2 us between packets
  – 125 us. between packets
B1. Latency
Streaming high-prio. small packets 8 hops (400 km, 2 ms)
with hot receiver "IP packets" (1600 byte) background

[Chart: latency of individual packets (2 us. apart); y-axis 2000–2008 microsec.]
B1. Latency
Streaming small packets 8 hops (2 ms).
Hot receiver, "IP packets" background (details)

[Chart: latency of individual packets (2 us. apart); y-axis 2000–2008 microsec.]
B1. Latency
Streaming small packets 8 hops (2 ms).
Hot receiver, "IP packets" background. 125 us. stream.

[Chart: latency of individual packets (125 us. apart); y-axis 2000–2006 microsec.; mean marked.]
B1. Conclusion:
Streaming small high-prio. packets
with hot receiver and "IP packets" background
• This is not a fully overloaded system
• Added latency between 0 and 6.5 us.
• Theoretically between 0 and 13 us.
• Max observed added latency is 6.74 us.
• 0.1% added latency greater than 5.9 us.
• 1% added latency greater than 4.6 us.
• Median 1.36 us. Mean 1.47 us.
• 10% went through with no added latency
• Max jitter as large as total latency variation
Scenario B2.
Hot receiver, Jumbo packets background
• Hot receiver (#15) background traffic with
  – 30% bandwidth high-prio small packets
  – 70% Jumbo packets (16K bytes)
  – Almost full utilization of the last links to 15
• Streaming from node 7 to node 15
  – 8 hops, ~400 km distance = 2 ms min latency
  – 2 us between packets
  – 125 us. between packets
B2. Latency
Streaming high-prio. small packets 8 hops (400 km, 2 ms)
with hot receiver Jumbo packets (16K byte) background

[Chart: latency of individual packets (2 us. apart); y-axis 2000–2070 microsec.]
B2. Latency
Streaming high-prio. small packets 8 hops with
hot receiver Jumbo packets (16K byte) background (details)

[Chart: latency of individual packets (2 us. apart); y-axis 2000–2070 microsec.]
B2. Latency
Streaming high-prio. small packets 8 hops with
hot receiver Jumbo packets (16K byte) background. 125 us. stream.

[Chart: latency of individual packets (125 us. apart); y-axis 2000–2060 microsec.; median marked.]
B2. Conclusion:
Streaming small high-prio. packets
with hot receiver and Jumbo packets background
• This is not a fully overloaded system
• Added latency between 0 and 55 us.
• Theoretically between 0 and 128 us.
• Max observed added latency is 59.7 us.
• 0.1% added latency greater than 55 us.
• 1% added latency greater than 47 us.
• Median 18.8 us. Mean 19.9 us.
• 1% went through with no added latency
• 10% less than 6 us.
• Max jitter about half of total latency variation
Scenario B3.
Hot receiver, preemptive background
• Hot receiver (15) background traffic with
  – 30% bandwidth high-prio small packets
  – 70% Jumbo packets with preemption (½ K)
  – Almost full utilization of the last links to 15
• Streaming from node 7 to node 15
  – 8 hops, ~400 km distance = 2 ms min latency
  – 2 us between packets
  – 125 us. between packets
B3. Latency
Streaming high-prio. small packets 8 hops (400 km, 2 ms)
with hot receiver preemptive (½ K) Jumbo packets background

[Chart: latency of individual packets; y-axis 2000–2004 microsec.]
B3. Latency
Streaming high-prio. small packets 8 hops with hot receiver
preemptive (½ K) Jumbo packets background (details)

[Chart: latency of individual packets (2 us. apart); y-axis 2000–2003.5 microsec.]
B3. Latency
Streaming high-prio. small packets 8 hops with hot receiver
preemptive (½ K) Jumbo packets background. 125 us. stream.

[Chart: latency of individual packets (125 us. apart); y-axis 2000–2003.5 microsec.; median marked.]
B3. Conclusion:
Streaming small high-prio. packets
with hot receiver and preemptive Jumbo packets
• This is not a fully overloaded system
• Observed added latency between 0 and 3.4 us.
• Theoretically between 0 and 4.1 us.
• Max observed added latency is 3.6 us.
• 0.1% added latency greater than 3.3 us.
• 1% added latency greater than 2.9 us.
• Median 1.0 us. Mean 1.1 us.
• 0.5% went through with no added latency
• 10% less than 0.2 us.
• Max jitter as large as the total latency variation
• Scenario A – Random background
  – Overloaded system
  – Different low-priority background packet sizes clearly give different foreground packet latency
• Scenario B – Hot receiver background
  – More variably loaded system
  – Still, differently sized background packets clearly influence foreground packet latency
• Jitter almost as large as total latency variation
Conclusion Scenario A:
Streaming small high-prio. packets with random
overloaded background (3 packet sizes)

[Chart: latency of individual packets streaming at 125 us.; y-axis 2000–2110 microsec.; three series: Jumbo packets (16K bytes), "IP packets" (1600 bytes), Preemption (520 bytes).]
Conclusion Scenario B:
Streaming small high-prio. packets with hot
receiver, high load, background (3 packet sizes)

[Chart: latency of individual packets streaming at 125 us.; y-axis 2000–2060 microsec.; three series: Jumbo packets (16K bytes), "IP packets" (1600 bytes), Preemption (520 bytes).]
– 10 Gbit/sec system
• In a "high load" system, 1% of the packets observe half of the theoretical max latency
• In an overloaded system, 1% of the packets observe close to the theoretical max latency
• Hence with Jumbo packets (16K) and no preemption it is possible to get 100 us. of added latency over 8 nodes (128 us. theoretical max). This is close to the 125 us. synchronous stream interval (TDM frame interval)
• Jitter almost as large as total latency variation