Specification of the Hardware Infrastructure for the MB-NG Project
Draft 0.3
27 Nov 01
R. Hughes-Jones HEP Manchester
P. Myers Network Group MCC
This is a draft – Comments are very welcome.
Introduction
The purpose of this note is to set down the functionality and the specification of the IP
network hardware infrastructure and the peripheral test components for the MB-NG
Project. A “working model” of the connectivity is proposed to position the
components. It attempts to draw on the discussions presented at the September
meeting in London and the experience of managing regional and campus networks.
The equipment must allow the MB-NG Project to meet its aims, that is to investigate
and study three major areas:
 MPLS switching and engineering
 High Performance High Throughput data transfers
 Managed Bandwidth and QoS
These areas may well need slightly different arrangements of the equipment or
different configurations of the software involved. As well as conventional network
statistics gathering, many of the tests will require status and control information from
the equipment, e.g. test PCs & routers, to be monitored and recorded as the test
proceeds. Thus we have the requirements:
 The equipment should be able to be used in a flexible manner. This indicates
uniformity of kit where possible.
 The equipment should be IP accessible, preferably via a path independent of
the current links under test.
 Access to current configurations and statistics must be available to all the
collaborators.
 The equipment at the C-PoPs should be able to be power-cycled and managed
remotely. UKERNA already operate in this manner for the SuperJANET
production network.
Much of the kit will be situated on campus and laboratory sites and in general all the
test traffic will flow on links independent of the IP production traffic. However, access
to the test equipment may well involve traversal of the campus network, and some of
the demonstrations being considered might need traffic from production computing
systems, e.g. the operation of AccessGrid video rooms over MB-NG QoS. It is clearly
understood that where the work impinges on the site domains, no arrangements can be
made without discussion and the full involvement of the network teams concerned. In
fact we believe this input is essential to the smooth operation of the project. Similarly,
issues that involve the C-PoPs will require discussion with UKERNA. Thus it is
emphasised that at this point in time, this is very much a discussion document.
Physical Network Infrastructure
Figure 1 shows the SuperJANET4 Development Network Core together with the
access links to the sites involved.
Core Routers
The Core routers are Cisco GSR 12416s with two 10 Gbit POS cards and one 4-port
Gigabit Ethernet card. These routers will be connected to the UKERNA ISDN-based
out-of-band power management and terminal access system. Access to this power
management system will be restricted to the UKERNA NOCs.
Access Links
The access and cross-campus links have the following specifications:
 Manchester: a 2.5G POS link from the Warrington C-PoP to MCC and a
fibre to the HEP Group across the Manchester campus.
 RAL: there is dark fibre to Reading, which could be used for Gigabit
Ethernet.
 UCL: a 2.5G POS link from the London St Pancras C-PoP to ULCC and a
1 Gbit Ethernet link to UCL over the London University fibre.
Equipment for the External and Edge Domains
Figure 2 shows the equipment for the external Backbone and Edge domains. It
consists of a backbone router connected to the C-PoP development access link, an
Edge router connected “over Campus” to the backbone router, and several hosts
capable of generating test and load traffic. In addition IP access and possible 100 Mbit
Ethernet connections to other computing systems must be provided.
Edge Routers
The edge routers must be able to:
 Connect to the test systems in the IP domain.
 Accept marked packets from the test systems and be able to re-mark them.
 Mark the packets on input according to at least:
   - source and destination IP address and IP port
   - Layer 3 and Layer 4 headers (TCP/UDP)
   - Diffserv code points
   - …
 Perform policing and admission control (a sketch of the marking and policing
logic is given after this list).
 Provide a range of queue type options:
   - several queues with different priorities
   - a real-time queue
 Provide flexible queue scheduling:
   - weighted round robin
   - weighted fair queuing
[Figure 1 omitted: diagram of the SJ4 Dev C-PoPs at Warrington, Reading and London
(GSR 12416s) linked by 10 Gbit POS, with 2.5 Gbit POS access to MCC and ULCC, dark
fibre (SSE) towards RAL, Gigabit Ethernet connections to the sites, and the adjacent
SuperJANET4 production network.]
Figure 1. SuperJANET4 Development Network and the Site Access links for MB-NG
[Figure 2 omitted: diagram showing, for a site, an MPLS backbone router (GSR 12416 or
OSR) connected to the SJDN core over 2.5 or 10 Gbit POS, a Diffserv / MPLS edge router
linked to it via QoS-marking 4-port and 3-port Gigabit Ethernet blades, and the test PCs,
load PCs and supervisor ports attached to both routers.]
Figure 2. Equipment for the External and Edge Domain

 Provide congestion control such as:
   - RED
   - ECN
 Be able to assign MPLS labels to the packets in a flexible manner.
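
To make the marking and policing requirements concrete, the following Python sketch
illustrates the kind of behaviour expected of the edge router: classify a packet on source
and destination address, port and protocol, assign a Diffserv code point, and police one of
the resulting classes with a token bucket. The match rules, DSCP values, rates and helper
names (classify, TokenBucket, admit) are illustrative assumptions only, not a proposed
router configuration.

import time

# Illustrative DiffServ code points (standard values from RFC 2474 / 2597 / 3246)
DSCP_EF   = 46   # expedited forwarding, e.g. for a real-time queue
DSCP_AF11 = 10   # assured forwarding, e.g. for marked test traffic
DSCP_BE   = 0    # best effort, e.g. for background load

def classify(src_ip, dst_ip, dst_port, proto):
    """Mark a packet on input using source/destination address, port and
    protocol.  The match rules below are purely illustrative."""
    if proto == "udp" and dst_port == 5001:                  # hypothetical real-time flow
        return DSCP_EF
    if src_ip.startswith("192.168.1.") and proto == "tcp":   # hypothetical test subnet
        return DSCP_AF11
    return DSCP_BE

class TokenBucket:
    """Single-rate policer: packets within rate_bps are accepted,
    excess packets are rejected (they could equally be re-marked)."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0            # bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.time()

    def conforms(self, pkt_len):
        now = time.time()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len
            return True
        return False

# Example: police the EF class at 100 Mbit/s with a 64 kbyte burst allowance.
ef_policer = TokenBucket(rate_bps=100_000_000, burst_bytes=64_000)

def admit(src_ip, dst_ip, dst_port, proto, pkt_len):
    """Admission control: returns the DSCP to apply, or None to drop."""
    dscp = classify(src_ip, dst_ip, dst_port, proto)
    if dscp == DSCP_EF and not ef_policer.conforms(pkt_len):
        return None                           # out-of-profile EF packet
    return dscp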
As shown in Figure 2, the edge routers have one 4-port Gigabit Ethernet interface.
Three ports are connected to test PCs and one to the Backbone router.
Question to Cisco: Can one card perform the IP and MPLS functions we need or do
we require two cards – one for IP input functions, the other for MPLS output?
Question to Cisco: what is the performance of these 4-port Gigabit Ethernet
interfaces?
The supervisor cards have two Gigabit Ethernet ports, but it is believed that these do
not support the required options; they can, however, be used for Load PCs and/or IP
access to the router.
For the High Performance High Throughput tests, and for future MAN operation, it is
important to study the operation and performance of a 2.5 Gbit POS link. To do this it
is proposed that Manchester connect the Edge and Backbone routers with a POS link
across campus.
Study of transport protocols at Gigabit rates will require knowledge of the packet
headers. It is proposed to use 3508 Gigabit Ethernet switches to monitor the flows
between two other ports in the switch, directing the mirrored traffic to a dedicated
monitor PC to record the information.
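
As an illustration only of what the dedicated monitor PC might do with the mirrored
traffic, the sketch below (Python, assuming a Linux packet socket and root access)
reads frames from the monitoring interface and records the IP and TCP/UDP header
fields of each packet. The interface name and log file are placeholders, and a real
monitor would need a much faster capture path than this.

import socket, struct, time

ETH_P_ALL = 0x0003          # capture every protocol on the interface
IFACE = "eth1"              # hypothetical interface attached to the mirror port

def capture(count=1000, logfile="headers.log"):
    """Record IP and TCP/UDP header fields for `count` mirrored packets."""
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    s.bind((IFACE, 0))
    with open(logfile, "w") as log:
        for _ in range(count):
            frame = s.recv(65535)
            if len(frame) < 34 or frame[12:14] != b"\x08\x00":
                continue                        # not an IPv4 packet
            ip = frame[14:]                     # strip the Ethernet header
            ihl = (ip[0] & 0x0F) * 4            # IP header length in bytes
            proto = ip[9]
            src = socket.inet_ntoa(ip[12:16])
            dst = socket.inet_ntoa(ip[16:20])
            sport = dport = seq = None
            if proto in (6, 17) and len(ip) >= ihl + 8:
                sport, dport = struct.unpack("!HH", ip[ihl:ihl + 4])
                if proto == 6:                  # TCP: also log the sequence number
                    (seq,) = struct.unpack("!I", ip[ihl + 4:ihl + 8])
            log.write("%.6f %s:%s -> %s:%s proto=%d seq=%s\n"
                      % (time.time(), src, sport, dst, dport, proto, seq))

if __name__ == "__main__":
    capture()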
External Backbone Routers
The Backbone routers must be able to support essentially the same features as the
Edge routers but in the MPLS domain. [We should clarify the details here. Rich]
There is also a requirement for these routers to be connected to test or load PCs so that
cross traffic (of similar priority to the test traffic) or background traffic (of lower
importance) may be injected or sunk at the router ports. Some of this traffic may be
directed towards the Edge router and some towards the Development Core. The aim is
to load the links, causing queuing.
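
One way a load PC might inject such cross or background traffic is sketched below: a
Python UDP sender whose packets are marked with a chosen DSCP via the standard
IP_TOS socket option, so the routers can treat the stream as higher or lower priority
than the test flows. The destination address, port, packet size and rate are placeholder
values.

import socket, time

def send_load(dst_host, dst_port=9000, dscp=0, pkt_size=1400,
              rate_mbps=100, duration_s=10):
    """Send a roughly constant-rate UDP stream marked with the given DSCP.
    dscp=0 gives best-effort background traffic; a non-zero value can be
    used for cross traffic of similar priority to the test traffic."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)  # DSCP in the top 6 bits
    payload = b"\x00" * pkt_size
    gap = pkt_size * 8.0 / (rate_mbps * 1e6)    # seconds between packets
    end = time.time() + duration_s
    sent = 0
    while time.time() < end:
        s.sendto(payload, (dst_host, dst_port))
        sent += 1
        time.sleep(gap)
    return sent

# Example: ten seconds of best-effort load towards a hypothetical sink host.
# send_load("192.168.2.10", dscp=0, rate_mbps=100)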
Question to Cisco: Can the 6500 OSR switch IP cross traffic (ports a->b) as well as
MPLS (ports c->d) ? What about mixing IP and MPLS on a port? What are the load
implications for the router blades / CPU card?
Test Hosts
It is proposed that 1 GHz PCs with a 64-bit 66 MHz PCI bus and a >=133 MHz front-side
bus, running Linux, be used; these motherboards include 100 Mbit Ethernet interfaces.
The SysKonnect Gigabit NICs will be tested for suitability.
[More details to follow - Rich]
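
For a first-pass check of a host and NIC combination, a simple memory-to-memory
throughput test along the lines of the Python sketch below may be useful before more
specialised tools are brought in; the port number and block size are arbitrary choices.

import socket, time

PORT = 5201          # arbitrary test port
BLOCK = 1 << 16      # 64 kbyte send/receive blocks

def sink():
    """Receiver: accept one connection and report the achieved rate."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(BLOCK)
        if not data:
            break
        total += len(data)
    secs = time.time() - start
    print("received %.1f Mbit/s over %.1f s" % (total * 8 / secs / 1e6, secs))

def source(host, duration_s=10):
    """Sender: stream zero-filled blocks to the sink for duration_s seconds."""
    c = socket.create_connection((host, PORT))
    buf = b"\x00" * BLOCK
    end = time.time() + duration_s
    while time.time() < end:
        c.sendall(buf)
    c.close()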
The groups of PC test systems should have a keyboard-video-mouse (KVM) switch and a
screen / keyboard to provide suitable local access.
IP Access to the External Systems
To provide IP access it is proposed to connect the PCs and their local router to the
campus network using a Cisco 3512 switch as shown in Figure 3. Some care is
required with the configuration; the following arrangement is suggested:
 The default route from the PC is set to the Gigabit link to the development
network.
 Static routes for the researchers only are set for the 100 Mbit interface, e.g. to
Manchester HEP, MCC, RAL, UCL, ULCC, UKERNA (see the routing sketch after
this list).
 The link between the Cisco 3512 and the Campus is set to 10 Mbit – this
would minimise any unintentional traffic.
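
As an illustration, the sketch below applies such a routing arrangement on a Linux test
PC using the standard iproute2 commands; every interface name, gateway and prefix is a
placeholder to be replaced by the real development-network and collaborator addresses.

import subprocess

# Hypothetical interface names, gateways and prefixes - substitute the real ones.
GIG_IF  = "eth1"            # Gigabit link towards the development network
GIG_GW  = "10.0.0.1"        # next hop on the development network
FAST_IF = "eth0"            # 100 Mbit interface towards the Cisco 3512 / campus
FAST_GW = "192.168.10.1"    # campus-side gateway on the 100 Mbit segment

# Prefixes of the collaborating sites that must remain reachable for management.
COLLABORATOR_PREFIXES = [
    "192.0.2.0/24",         # placeholder for e.g. Manchester HEP
    "198.51.100.0/24",      # placeholder for e.g. RAL
]

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def configure_routes():
    # Default route points at the Gigabit development-network link.
    run(["ip", "route", "replace", "default", "via", GIG_GW, "dev", GIG_IF])
    # Static routes for the researchers' sites only go via the 100 Mbit path.
    for prefix in COLLABORATOR_PREFIXES:
        run(["ip", "route", "replace", prefix, "via", FAST_GW, "dev", FAST_IF])

if __name__ == "__main__":
    configure_routes()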
[Figure 3 omitted: diagram of a Backbone or Edge router with its QoS-marking 4-port
Gigabit Ethernet blade serving the test PCs and its supervisor ports serving the load PCs,
a 100 Mbit switch, and a Cisco 3512 (carrying no routing traffic) providing a 10 Mbit link
to the campus network.]
Figure 3. A possible way to provide IP access to the MB-NG test systems.
IP Access to the Core Routers
Logging into a test PC or external router would provide in-band access to the SJDN
Core routers. With care the access could be via a non-stressed route, e.g. if testing was
between London and Manchester, access could be from RAL. An independent route
might well be advantageous.
Initial Kit List
MCC
    C-PoP GSR kit: 1 GSR POS
    Backbone/Edge OSR kit: 1 Chassis Supv etc, 1 OSR Ethernet, 1 OSR 2.5G POS,
    2 3512 Eth. switch

M/c HEP
    Backbone/Edge OSR kit: 1 Chassis Supv etc, 1 OSR Ethernet, 1 OSR 2.5G POS,
    1 3512 Eth. switch, 1 3508 Eth. switch

RAL1
    C-PoP GSR kit: 1 GSR 3 GigEth
    Backbone/Edge OSR kit: 1 Chassis Supv etc, 1 OSR Ethernet, 1 OSR 2.5G POS,
    1 3512 Eth. switch

RAL2
    Backbone/Edge OSR kit: 1 Chassis Supv etc, 1 OSR Ethernet, 1 OSR 2.5G POS,
    1 3512 Eth. switch, 1 3508 Eth. switch

ULCC/UCL1
    C-PoP GSR kit: 1 GSR POS
    Backbone/Edge OSR kit: 1 Chassis Supv etc, 1 OSR Ethernet, 1 OSR 2.5G POS,
    1 3512 Eth. switch

UCL2
    Backbone/Edge OSR kit: 1 Chassis Supv etc, 1 OSR Ethernet, 1 OSR 2.5G POS,
    1 3512 Eth. switch, 1 3508 Eth. switch