GENI
Global Environment for Network Innovations
Spiral 1 Substrate Catalog
(DRAFT)
Document ID: GENI-INF-PRO-S1-CAT-01.5
February 22, 2009
Prepared by:
The GENI Project Office
BBN Technologies
10 Moulton Street
Cambridge, MA 02138 USA
Issued under NSF Cooperative Agreement CNS-0737890
TABLE OF CONTENTS

1   Document Scope
    1.1  Purpose of this Document
    1.2  Context for this Document
    1.3  Related Documents
         1.3.1  National Science Foundation (NSF) Documents
         1.3.2  GENI Documents
         1.3.3  Standards Documents
         1.3.4  Other Documents
    1.4  Document Revision History
2   GENI Overview
3   Mid Atlantic Network
4   GpENI
5   BEN
6   CMU Testbeds
7   DOME
8   Enterprise GENI
9   SPP Overlay
10  Proto-GENI
11  Programmable Edge Node
12  Measurement System
13  ViSE
14  WiMAX
15  TIED
16  PlanetLab
17  Kansei Sensor Networks
18  ORBIT
19  Acronyms

Each substrate chapter (3 through 18) contains the same six subsections:
    x.1  Substrate Overview
    x.2  GENI Resources
    x.3  Aggregate Physical Network Connections (Horizontal Integration)
    x.4  Aggregate Manager Integration (Vertical Integration)
    x.5  Measurement and Instrumentation
    x.6  Aggregate Specific Tools and Services
1 Document Scope
This section describes this document’s purpose, its context within the overall GENI document tree,
the set of related documents, and this document’s revision history.
1.1 Purpose of this Document
Spiral-1 of GENI prototyping activities will provide a diverse set of substrate technologies in the form of components or aggregates, with resources that can be discovered, shared, reserved, and manipulated by GENI researchers. The purpose of this document is to catalog these substrate technologies against a set of topic areas covering the capabilities and integration of the spiral-1 prototype aggregates and components. These topic areas [1] were sent to the PIs of the projects providing substrate technologies. This document presents the information gathered in those areas, together with the results of subsequent discussions with the GPO. Information in some of the topic areas will be determined over the duration of the spiral-1 integration activities, and thus this document will evolve with time.

There are several reasons for requesting the information that forms this document. A detailed description of the substrate technologies and their relevance to GENI will assist in analyzing the ability of spiral-1 to support early research experiments and in identifying substrate technology gaps. Presently in spiral-1, two small projects are funded to provide this type of analysis to the GPO:

•  Data Plane Measurements: http://groups.geni.net/geni/wiki/Data%20Plane%20Measurements
•  GENI at Four Year Colleges: http://groups.geni.net/geni/wiki/GeniFourYearColleges

In addition, a well-documented set of aggregate integration solutions, as expected for the diverse set of substrate technologies in spiral-1, will provide a valuable resource as GENI system, operations, and management requirements are developed.
[1] For more information see http://groups.geni.net/geni/wiki/ReqInf
1.2 Context for this Document
Figure 1-1 below shows the context for this document within GENI’s overall document tree.
Figure 1-1. Spiral-1 Substrate Catalog within the GENI Document Tree. (The tree branches from the GPO into Project Management, Legal & Contract Management, Architecture, System Engineering, Infrastructure Planning, Outreach, and Infrastructure Prototyping; the Spiral 1 Substrate Catalog falls under Infrastructure Prototyping.)
1.3 Related Documents
The following documents are related to this document and provide background information, requirements, and other material important to it.
1.3.1 National Science Foundation (NSF) Documents

    Document ID                 Document Title and Issue Date
    N/A

1.3.2 GENI Documents

    Document ID                 Document Title and Issue Date
    GENI-SE-SY-SO02.0           GENI System Overview
    GENI-FAC-PRO-S1OV-1.12      GENI Spiral 1 Overview
    GENI-SE-AS-TD01.1           GENI Aggregate Subsystem Technical Description

1.3.3 Standards Documents

    Document ID                 Document Title and Issue Date
    N/A

1.3.4 Other Documents

    Document ID                 Document Title and Issue Date
    N/A
1.4 Document Revision History
The following table provides the revision history for this document, summarizing the date at which
it was revised, who revised it, and a brief summary of the changes. This list is maintained in
chronological order so the earliest version comes first in the list.
    Revision   Date       Revised By   Summary of Changes
    -01.0      01Oct08    J. Jacob     Initial draft to set scope of document
    -01.1      08Dec08    J. Jacob     Compiled input from substrate providers
    -01.2      10Dec08    J. Jacob     Edit of sections 3, 4; addition of WiMAX content
    -01.3      5Jan09     J. Jacob     Include comments from A. Falk and H. Mussman
    -01.4      20Jan09    J. Jacob     Revised PI input to reflect discussions to date; initial editing for uniformity across projects; addition of Kansei Sensor Networks
    -01.5      22Feb09    J. Jacob     Incorporated additional PI input as well as SE comments and diagrams
2 GENI Overview
The Global Environment for Network Innovations (GENI) is a novel suite of infrastructure now
being designed to support experimental research in network science and engineering.
This new research challenges us to understand networks broadly and at multiple layers of
abstraction from the physical substrates through the architecture and protocols to networks of people,
organizations, and societies. The intellectual space surrounding this challenge is highly
interdisciplinary, ranging from new research in network and distributed system design to the theoretical
underpinnings of network science, network policy and economics, societal values, and the dynamic
interactions of the physical and social spheres with communications networks. Such research holds
great promise for new knowledge about the structure, behavior, and dynamics of our most complex
systems – networks of networks – with potentially huge social and economic impact.
As a concurrent activity, community planning for the suite of infrastructure that will support NetSE
experiments has been underway since 2005. This suite is termed the Global Environment for Network
Innovations (GENI). Although its specific requirements will evolve in response to the evolving NetSE
research agenda, the infrastructure’s conceptual design is now clear enough to support a first spiral of
planning and prototyping. The core concepts for the suite of GENI infrastructure are as follows.

•  Programmability – researchers may download software into GENI-compatible nodes to control how those nodes behave;
•  Virtualization and Other Forms of Resource Sharing – whenever feasible, nodes implement virtual machines, which allow multiple researchers to simultaneously share the infrastructure; each experiment runs within its own isolated slice created end-to-end across the experiment's GENI resources;
•  Federation – different parts of the GENI suite are owned and/or operated by different organizations, and the NSF portion of the GENI suite forms only a part of the overall 'ecosystem'; and
•  Slice-based Experimentation – GENI experiments will be an interconnected set of reserved resources on platforms in diverse locations. Researchers will remotely discover, reserve, configure, program, debug, operate, manage, and tear down distributed systems established across parts of the GENI suite.
As envisioned in these community plans, the GENI suite will support a wide range of experimental protocols and data dissemination techniques running over facilities such as fiber optics with next-generation optical switches, novel high-speed routers, city-wide experimental urban radio networks, high-end computational clusters, and sensor grids. The GENI suite is envisioned to be shared among a large number of individual, simultaneous experiments with extensive instrumentation that makes it easy to collect, analyze, and share real measurements.
The remainder of this document presents the substrate catalog descriptions covering the topic areas suggested by the GPO. In response to the initial input, the GPO has provided a list of comments and questions to the PIs. These will be embedded within the discussion to serve as placeholders where additional information is requested, and will be updated as the information is provided. This is a public document, and everyone within the GENI community is encouraged to contribute (when applicable), either directly within this document or through the substrate working group mailing list [2].
3 Mid Atlantic Network
Substrate Technologies: Regional optical network, 10Gbps DWDM optical nodes, 10 Gbps Ethernet
switches, Compute servers
Cluster: B
Project wiki: http://groups.geni.net/geni/wiki/Mid-Atlantic%20Crossroads
3.1 Substrate Overview
Mid-Atlantic Crossroads owns and operates the DRAGON network deployed within the Washington, D.C. beltway, connecting ten regional participating institutions and interconnecting with both the Internet2 national backbone network and the National Lambda Rail network. This research infrastructure consists of approximately 100 miles of high-grade fiber connecting five core switching locations within the Washington metropolitan area. Each of these switching locations includes optical add/drop multiplexers and state-of-the-art MEMS-based wavelength-selectable switches (Adva Optical Networking). These nodes also include 10 Gbps-capable Ethernet switches (Raptor Networks). The network currently connects approximately a dozen research facilities within the region that are either involved in the development of control plane technologies or are applying the light path capabilities to e-science applications (note: these are not GENI projects). Figure 3-1 depicts the network configuration at the time of this writing.
[2] substrate-wg@geni.net
Figure 3-1 Research Infrastructure
The Mid-Atlantic Network PoP locations include:
•  University of Maryland, College Park
•  USC ISI-East, Arlington, VA
•  Qwest, Eckington Place NE, Washington, DC
•  George Washington University
•  Level 3, Old Meadow Rd., McLean, VA
•  6 St. Paul Street, Baltimore
•  660 Redwood Street, Baltimore
•  Equinix, 21715 Filigree Court, Ashburn, VA
The dedicated research network supporting the GENI spiral-1 prototyping project includes the following locations:
•  University of Maryland, College Park
•  USC ISI-East, Arlington, VA
•  Qwest, Eckington Place NE, Washington, DC
•  George Washington University
•  Level 3, Old Meadow Rd., McLean, VA

It is possible for MAX to support connections for additional GENI prototyping projects at sites from either list of PoP locations.
As shown in Figure 3-2, a typical core node includes either a three- or four-degree ROADM, an OADM, one or more Layer 2 Ethernet switches, and several PCs for control, virtualization, and performance verification.
Figure 3-2 Typical core node (Core DRAGON Node, typical configuration: a control plane switch running the GMPLS stack; PCs for the aggregate manager (AM), virtualization (VM), and performance testing; a Raptor Ethernet data plane switch with GigE and 10GigE ports; an Adva optical add/drop shelf and multi-degree ROADM with dark fiber to other metro-area ROADMs or add/drop shelves; plus connections to the public Internet, inter-domain data plane connections, the MAX Layer 3 IP service, and MAX operations and management)

3.2 GENI Resources
The following resources will be offered by this aggregate:
•  Ethernet VLANs
•  Optical measurements

These resources fall into the categories of component resources, aggregate resources, and measurement resources [3], respectively.
The PCs designated by (VM) in Figure 3-2 are configured as PlanetLab servers, which have been deployed across the DRAGON infrastructure. The resources available from the PlanetLab servers are CPU slices and memory. Circuits can be provisioned to these PlanetLab nodes through one of the APIs discussed in Section 3.4.
Layer 2 Ethernet VLANs (802.1Q tagged MAC frames) provide the ability to provision or schedule circuits (based on Ethernet VLANs) across the optical network with the following restrictions and limitations:
•  maximum of 250 VLANs in use at any one time
•  maximum of 1 Gbps per circuit, provisioned with a bandwidth granularity of 0.1 Mbps
•  VLAN ID must be in the range 100-3965

Ethernet VLANs enable the physical infrastructure to be shared while traffic is logically isolated into multiple broadcast domains. This effectively allows multiple users to share the underlying physical resources without allowing access to each other's traffic.
The DWDM optical layer will provide optical performance measurements. Measurements are made at the wavelength level, which carries multiple slice VLANs. While the DRAGON infrastructure does provide Layer 2 and Layer 1 programmable connectivity within the Washington, DC metro area, and peering points with national infrastructures such as Internet2 and NLR, the complexities associated with optical wavelength provisioning and configuration on a shared optical network infrastructure prohibit offering alien wavelengths or direct access to the photonic layer as a GENI resource.

[3] See the GENI Aggregate Subsystem Technical Description for a detailed discussion of these resources.
3.3 Aggregate Physical Network Connections (Horizontal Integration)
Physical connectivity within the substrate aggregate is illustrated in Figures 3-1 and 3-2. Core nodes are connected via dark fiber, which is connected to DWDM equipment. Connectivity to the network edge may be either a single wavelength at GigE or 10GigE, or in some cases a multi-wavelength DWDM connection.

Connectivity to the GENI backbone (provided by Internet2 and NLR) will be accomplished via existing tie fibers at Mid-Atlantic Crossroads' suite in the Level3 co-location facility in McLean, Virginia. The DRAGON network already has existing connections to both NLR and Internet2 in McLean.

The DRAGON infrastructure primarily uses Gigabit Ethernet and 10 Gigabit Ethernet for physical connectivity – in particular, 1000BaseLX, 1000BaseSX, 1000BaseT, 10GBase-LR, and 10GBase-SR.
3.4 Aggregate Manager Integration (Vertical Integration)
The control switch provides physical interfaces (all at 1 GbE) between the control framework and the aggregate resources. The GENI-facing side of the aggregate manager communicates with the CF clearinghouse over a link to the public Internet. The substrate-facing side of the aggregate manager communicates with each substrate component over intra-node Ethernet links.

A high-level view of the aggregate manager and its software interfaces is illustrated in Figure 3-3. With respect to resource discovery and resource management (reservation, manipulation), the large GENI (PlanetLab) block represents a native GENI subsystem within the aggregate manager. The arrows represent interfaces from this subsystem to specific aggregate substrate technologies; the GENI-facing interfaces between the aggregate manager and the clearinghouse and/or researcher are not illustrated. Under the assumption that optical measurements are resources to be discovered, an interface must exist for the ADVA DWDM equipment. Measurement configuration and retrieval will be realized using TL1 over the craft (Ethernet) interface of the ADVA equipment. The VLAN resource requires an interface to the DRAGON software system. The DRAGON software interfaces currently include a Web Services (SOAP/XML) API, a binary API (based on GMPLS standards), as well as command-line interfaces (CLI) and web-based user interfaces. A GENI-compliant interface will be provided by developing a wrapper around the existing Web Services API. VLAN configuration will be realized using SNMP over the craft (Ethernet) interface of the Raptor switches. The PlanetLab hosts will use typical PLC control and management, per the GENI wrapper developments for these hosts.

Figure 3-3. Aggregate manager software interfaces: the GENI (PlanetLab) subsystem connects to DRAGON (API/CLI), MyPLC, ADVA (SNMP/TL1), and Raptor (SNMP).
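As a rough illustration of how such a wrapper might drive the DRAGON Web Services (SOAP/XML) API, the sketch below posts a hand-built SOAP envelope for a hypothetical createPath operation. The endpoint URL, operation name, and element names are assumptions made for illustration; they do not reflect the actual DRAGON WSDL.

    # Hedged sketch: a GENI-side wrapper invoking a *hypothetical* DRAGON
    # Web Services operation over SOAP/XML. The endpoint URL, the
    # createPath operation, and the element names are illustrative only.
    import requests

    DRAGON_WS_URL = "https://dragon.example.net/services/vlsr"  # placeholder

    def request_vlan_circuit(src, dst, bandwidth_mbps, vlan_id):
        envelope = f"""<?xml version="1.0" encoding="UTF-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
          <soapenv:Body>
            <createPath>
              <srcEndpoint>{src}</srcEndpoint>
              <dstEndpoint>{dst}</dstEndpoint>
              <bandwidthMbps>{bandwidth_mbps}</bandwidthMbps>
              <vlanId>{vlan_id}</vlanId>
            </createPath>
          </soapenv:Body>
        </soapenv:Envelope>"""
        resp = requests.post(DRAGON_WS_URL, data=envelope,
                             headers={"Content-Type": "text/xml"}, timeout=30)
        resp.raise_for_status()
        return resp.text  # the real API would return a path/reservation identifier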
3.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
Layer 1 equipment (Adva Optical Networking) provides access to optical performance monitoring data, FEC counters (only for 10G transponders), and SNMP traps for alarm monitoring. Layer 2 Ethernet switches provide access to packet counters and error counters.

The DRAGON network also uses Nagios for monitoring general network health, round-trip time between nodes, etc. Cricket is used for graphing network performance (interface speeds) as well as optical performance monitoring data (light levels, FEC counters, etc.). Cricket and Nagios monitoring are depicted in Figures 3-4 and 3-5, respectively.

The DRAGON network also has a perfSONAR active measurement point deployed at https://perfsonar.dragon.maxgigapop.net.
Figure 3-4 Cricket monitoring optical receive power on a 10G transponder (last 24 hours)
Figure 3-5 Nagios monitoring PING service for a network switch
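For illustration, the sketch below shows the kind of SNMP poll a Cricket-style grapher performs against the Layer 2 switches, reading a standard IF-MIB octet counter with the pysnmp library. The switch address and community string are placeholders, and the vendor-specific optical metrics on the ADVA gear would use different OIDs (or TL1).

    # Hedged sketch: poll a standard IF-MIB counter from a Layer 2 switch,
    # similar in spirit to the interface graphs Cricket produces. The host
    # and SNMP community are placeholders.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    def read_in_octets(host, ifindex, community="public"):
        error_indication, error_status, _, var_binds = next(getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),            # SNMPv2c
            UdpTransportTarget((host, 161), timeout=2, retries=1),
            ContextData(),
            ObjectType(ObjectIdentity("IF-MIB", "ifHCInOctets", ifindex))))
        if error_indication or error_status:
            raise RuntimeError(str(error_indication or error_status.prettyPrint()))
        return int(var_binds[0][1])

    # Example: received-octet counter on interface index 1 of a switch
    # print(read_in_octets("192.0.2.10", 1))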
3.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
The DRAGON software interfaces to the aggregate manager both as an aggregate-specific tool and for resource discovery and management. The latter interface was discussed in the vertical integration section above; this discussion focuses on how researchers will use DRAGON as a tool to configure the VLANs that have been discovered and reserved. Researchers will be able to allocate dedicated Layer 2 circuits (provisioned using Ethernet VLANs) using the DRAGON API integrated into the GENI framework. Researchers must specify the endpoints (source/destination interfaces), desired bandwidth, starting time, and duration. Circuits may be scheduled in advance (book-ahead reservations) or set to signal immediately upon receiving a reservation request.
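To make the reservation parameters concrete, below is a minimal sketch of the information a researcher supplies for a dedicated Layer 2 circuit; the request structure and the submit_reservation() call are hypothetical stand-ins for the DRAGON API as integrated into the GENI framework.

    # Hedged sketch: the parameters of a Layer 2 circuit reservation as
    # described above. The dictionary layout and submit_reservation() are
    # hypothetical; the real request goes through the DRAGON API.
    from datetime import datetime, timedelta

    reservation = {
        "src_interface": "dragon-node-a:ge-1/0/1",   # source endpoint (illustrative)
        "dst_interface": "dragon-node-b:ge-2/0/3",   # destination endpoint (illustrative)
        "bandwidth_mbps": 400.0,                     # <= 1 Gbps, 0.1 Mbps granularity
        "vlan_id": 1200,                             # must be within 100-3965
        "start_time": datetime(2009, 3, 1, 14, 0),   # book-ahead start time
        "duration": timedelta(hours=2),              # circuit lifetime
    }

    # A book-ahead request is scheduled for the given start time; a request
    # without a future start time would signal the circuit immediately.
    # submit_reservation(reservation)   # hypothetical call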
4 GpENI
Substrate Technologies: Regional optical network, 10Gbps optical switches, 10 Gbps Ethernet switches,
Programmable Routers, Site-specific experimental nodes
Cluster: B
Project wiki: http://groups.geni.net/geni/wiki/GpENI
4.1 Substrate Overview
Provide an overview of the hardware systems in your contribution.
GpENI is built upon a multi-wavelength fiber interconnection between the four GpENI universities within the GPN (Great Plains Network), with a direct connection to the Internet2 backbone. At a high level, Qwest dark fiber IRUed to the state of Kansas interconnects KSU to KU through the Qwest POP to the Internet2 POP in Kansas City. UMKC is connected to the Internet2 POP over MOREnet IRU fiber, and UNL is connected to the Internet2 POP by its own IRU fiber. Administration of the GpENI infrastructure is performed by staff at each university, assisted by GPN, KanREN (Kansas Research and Education Network), and MOREnet (Missouri Research and Education Network).
In the first-year deployment, each university has a GpENI node cluster interconnected to the others and to the rest of GENI by Ethernet VLANs. In later deployments, each university site will include a Ciena optical switch for layer-1 interconnections among GpENI institutions. The node cluster is designed to be as flexible as possible at every layer of the protocol stack.

Figure 4-1 GpENI Participating Universities

Each GpENI node cluster (Figure 4-2) consists of several components, physically interconnected by a Gigabit Ethernet switch to allow arbitrary and flexible experiments. Multiple PCs in the node cluster will be designated for use as programmable routers (such as XORP or Click) or as PlanetLab nodes; the specific number, n, per site is to be determined. One of the PCs in the KSU cluster will be designated as a MyPLC controller, and one of the PCs in the KU cluster will be designated as a backup MyPLC controller. At each site an additional processor is reserved to host the GpENI management and control functions. The exact functions of this machine and its accessibility to GENI researchers are still being determined; it is a potential host for the GpENI aggregate manager, providing the interface between the GpENI substrate and the PlanetLab clearinghouse.

All of the PCs in the node cluster have GbE connections to the Ethernet switch. The Ethernet switch will enable VLANs between the host PCs spanning the GpENI topology, as well as VLANs spanning the GENI backbone and connecting to other host PCs within cluster B. The Ethernet switch also enables the physical connections between GpENI control and management and all substrate components in the node cluster. Site-specific nodes could include experimental nodes such as software-defined radios (for example, the KUAR), optical communication laboratories, and sensor testbeds, as well as nodes from other GENI projects. These site-specific nodes will connect to the Ethernet switch for their control and management interfaces; experimental-plane connections could be through either the Ethernet switch or the Ciena optical switch. At this time, no site-specific nodes have been committed to the GpENI aggregate.

The Ciena optical switch forms the high-speed optical interconnections between GpENI cluster nodes. Currently only one Ciena optical switch is deployed in GpENI, located at UNL. This deployment is a CoreDirector configured with a single 10GbE Ethernet Service Line Module (ELSM) on the client side and a single OC-192 module on the line side. The ELSM client side can support ten 1GbE connections or one 10GbE connection; as drawn in Figure 4-2, multiple 1GbE connections are illustrated.

Figure 4-2 GpENI Node Cluster (campus Internet connection; GpENI management & control and MyPLC hosts; n programmable-router PCs; n PlanetLab PCs; site-specific nodes (tbd); Ethernet switch with 1 GbE links to each component and n x 1 GbE toward the optical switch; Ciena optical switch with OC-192 line side)
Per-site decomposition of GpENI node clusters

The University of Kansas

    Role                   Model                 HW Specs                                  SW Specs                   Status
    Optical Switch         Ciena (4200 or CD)    TBD                                       TBD                        planned
    Ethernet Switch        Netgear JGS524        24-port 10/100/1000 Enet                                             temporary
    Ethernet Switch        Netgear GSM7224       24-port 10/100/1000 Enet with VLAN mgt    SNMP traffic monitoring    ordered
    GpENI master control   Dell Precision 470    Intel Xeon 2.8 GHz, 1GB                   Linux                      installed
    GpENI control backup   Dell Precision 470    Intel Xeon 2.8 GHz, 1GB                   Linux                      installed
    MyPLC (KSU backup)     Dell Precision 470    Intel Xeon 2.8 GHz, 1GB                   Linux                      installed
    PL node                Dell Precision 470    Intel Xeon 2.8 GHz, 1GB                   Linux                      installed
    PL node                Dell Precision 470    Intel Xeon 2.8 GHz, 1GB                   Linux                      installed
    prog. router           Dell Precision 470    Intel Xeon 2.8 GHz, 1GB                   Linux                      installed
    prog. router           Dell Precision 470    Intel Xeon 2.8 GHz, 1GB                   Linux                      installed
    spare node             Dell Precision 470    Intel Xeon 2.8 GHz, 1GB                   Linux                      installed
    spare node             Dell Precision 470    Intel Xeon 2.8 GHz, 1GB                   Linux                      installed
    spare node             Dell Precision 470    Intel Xeon 2.8 GHz, 1GB                   Linux                      installed
Kansas State University

    Role                Model                 HW Specs                                  SW Specs                   Status
    Optical Switch      Ciena (4200 or CD)    TBD                                       TBD                        future
    Ethernet Switch     Netgear GSM7224       24-port 10/100/1000 Enet with VLAN mgt    SNMP traffic monitoring    planned
    GpENI KSU control   ?                     ?                                         Linux                      planned
    MyPLC               Dell Precision 470    Intel Xeon 2.8 GHz, 2GB                   Linux                      installed
    PL node             Custom                2.8 GHz Pentium 4, 1 GB                   Linux                      installed
    PL node             Custom                2.8 GHz Pentium 4, 1 GB                   Linux                      installed
    PL node             VMware Server 2.0     1.8 GHz Pentium Dual-Core, 1 GB           Linux                      installed
    PL node             (Dylan)?              ?                                         Linux                      future
    PL node             ?                     ?                                         Linux                      future
    prog. router        ?                     ?                                         Linux                      planned
    prog. router        ?                     ?                                         Linux                      planned
University of Missouri – Kansas City

    Role                 Model              HW Specs                                  SW Specs                   Status
    Optical Switch       Ciena CD           TBD                                       TBD                        planned
    Ethernet Switch      Netgear GSM7224    24-port 10/100/1000 Enet with VLAN mgt    SNMP traffic monitoring    planned
    GpENI UMKC control   ?                  ?                                         Linux                      planned
    PL node              ?                  ?                                         Linux                      planned
    PL node              ?                  ?                                         Linux                      planned
    prog. router         ?                  ?                                         Linux                      planned
    prog. router         ?                  ?                                         Linux                      planned

University of Nebraska – Lincoln

    Role                Model              HW Specs                                                    SW Specs                       Status
    Optical Switch      Ciena CDCI         160 Gbps cross connect, 1 STM64/OC192 optical interface    Core Director version 5.2.1    installed
    Ethernet Switch     Netgear GSM7224    24-port 10/100/1000 Enet with VLAN mgt                     SNMP traffic monitoring        planned
    GpENI UNL control   ?                  ?                                                          Linux                          planned
    PL node             ?                  ?                                                          Linux                          planned
    PL node             ?                  ?                                                          Linux                          planned
    prog. router        ?                  ?                                                          Linux                          planned
    prog. router        ?                  ?                                                          Linux                          planned
4.2 GENI Resources
Discuss the GENI resources offered by your substrate contribution and how they are shared,
programmed, and/or configured. Include in this discussion to what extent you believe the shared
resources can be isolated.
The following is a tentative list of resources available from the GpENI aggregate:
•  PlanetLab hosts provide CPU slices and memory.
•  Programmable routers provide CPU cycles, memory, disk space, queues, addresses, and ports.
•  Ethernet switches provide VLANs.
•  The optical switch resource is TBD.
Layer 2 Ethernet VLANs (802.1Q tagged MAC frames) provide the ability to provision or schedule circuits (based on Ethernet VLANs) across the optical network with the following restrictions and limitations:
•  maximum of 250 VLANs in use at any one time
•  maximum of 1 Gbps per circuit, provisioned with a bandwidth granularity of 0.1 Mbps
•  VLAN ID must be in the range 100-3965

Ethernet VLANs enable the physical infrastructure to be shared, creating researcher-defined topologies. Traffic is logically isolated into multiple broadcast domains, which effectively allows multiple users to share the underlying physical resources without allowing access to each other's traffic.
4.3 Aggregate Physical Network Connections (Horizontal Integration)
Provide an overview of the physical connections within the substrate aggregate, as well as between the aggregate and the GENI backbones. Identify, to the extent possible, non-GENI equipment, services, and networks involved in these connections.
The GpENI infrastructure is still in the early phases of deployment. Detailed network maps will be updated on the GpENI wiki (https://wiki.ittc.ku.edu/gpeni_wiki) as the infrastructure is deployed. The physical topology consists of fiber interconnections between the four GpENI universities and is currently being deployed; it is shown as white blocks in Figure 4-3, while GpENI-specific infrastructure is depicted by blue blocks.

Each of the four university node clusters will interface into the GpENI backbone via a Ciena CN4200 or CoreDirector switch. Currently only a CoreDirector switch has been deployed, at UNL. The CN4200 switches shown in Figure 4-3 represent the initial plans, which are being modified as the phased acquisition is aligned with GENI funding levels. The rest of the GpENI node infrastructure (PCs and Ethernet switch) for each site is labeled "GpENI node cluster"; the two blue boxes at each site together form the GpENI node cluster illustrated in Figure 4-2.
The main fiber run between KSU, KU, and Kansas City is Qwest fiber IRUed (leased) to KU, proceeding through the Qwest POP at 711 E. 19th St. in Kansas City and continuing to the Internet2 (Level3) POP at 1100 Walnut St., which will provide access to GpENI from Internet2. A chunk of C-band spectrum is planned, providing multiple wavelengths at KU and KSU. UMKC is connected over MOREnet (Missouri Research and Education Network) fiber to the Internet2 POP, with four wavelengths anticipated. UNL is also connected to the Internet2 POP over fiber IRUed from Level3, with two wavelengths committed. Local fiber in Manhattan and Lawrence is leased from Wamego Telephone (WTC) and Sunflower Broadband (SFBB), respectively. There is abundant dark fiber already in place on the KU, KSU, UMKC, and UNL campuses to connect the GpENI nodes to the switches (existing or under deployment) on the GPN fiber backbone. For reference, the UNL link is terminated by Ekinops switches, the UMKC link is terminated by ADVA switches, and the KU/KSU link is terminated by Ciena switches.
Figure 4-3 GpENI Network Connections
4.4 Aggregate Manager Integration (Vertical Integration)
Identify the physical connections and software interfaces for integration into your cluster's assigned control framework.
Vertical integration of GpENI into the PlanetLab CF is not yet fully worked out. The following text describes a potential host for the aggregate manager and the interfaces into the GpENI node cluster components (PCs, Ethernet switch, and optical switch). The GpENI control and management node is a logical place to consider hosting the PlanetLab aggregate manager. The physical connection from this node to the campus Internet enables connectivity to both the GENI (PlanetLab) clearinghouse and researchers. The GpENI control and management node also interfaces to all components within the node cluster via 1 GbE connections through the Ethernet switch. The 1 GbE connection between the Ethernet switch and the optical switch is a special case in which control and management interface to a craft port on the optical switch, i.e., the control and experiment planes have physically separate connections. All other components share the same connection for the control and experiment planes.

The control and management interfaces to the GpENI node cluster components, which define the aggregate-specific interfaces for integration into the PlanetLab CF, are illustrated in Figure 4-4. GpENI is currently investigating the use of DRAGON software from the Mid-Atlantic Network project to make dynamic VLANs part of the GpENI resources; as such, DRAGON appears as an aggregate-specific tool with interfaces between GENI and the Ethernet switch.
Figure 4-4 Aggregate-specific SW interfaces: the GENI (PlanetLab) subsystem interfaces to MyPLC, the Xorp/Click programmable routers, DRAGON (Web Service API), the PL nodes (Linux), the NetGear switch (SNMP), and the Ciena CD (HTTP).
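As a loose architectural sketch (not GpENI's actual software), the table below shows how an aggregate manager might map each resource type to the per-component interface listed in Figure 4-4; the handler names and structure are hypothetical.

    # Hedged sketch: dispatch table mirroring the interfaces in Figure 4-4.
    # Handler names and this structure are hypothetical illustrations only.
    COMPONENT_INTERFACES = {
        "planetlab_node":  {"protocol": "MyPLC / PLC API",                 "handler": "allocate_sliver"},
        "vlan":            {"protocol": "DRAGON Web Service (SOAP/XML)",   "handler": "provision_vlan"},
        "ethernet_switch": {"protocol": "SNMP (NetGear GSM7224)",          "handler": "configure_ports"},
        "optical_switch":  {"protocol": "HTTP craft interface (Ciena CD)", "handler": "configure_crossconnect"},
    }

    def dispatch(resource_type, request):
        """Look up the component interface for a resource type (sketch only)."""
        entry = COMPONENT_INTERFACES.get(resource_type)
        if entry is None:
            raise KeyError("no interface registered for %r" % resource_type)
        print("would call %s over %s with %r"
              % (entry["handler"], entry["protocol"], request))

    # Example: dispatch("vlan", {"vlan_id": 200, "sites": ["KU", "KSU"]})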
4.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
4.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
5 BEN
Substrate Technologies: Regional optical network, Optical fiber switch, 10Gbps Optical Transport, 10
Gbps Routers/Switches, Site-specific experimental nodes
Cluster: D
Project wiki: http://groups.geni.net/geni/wiki/ORCABEN
5.1 Substrate Overview
Provide an overview of the hardware systems in your contribution.
BEN [4] is the primary platform for RENCI network research. It is a dark-fiber-based, time-shared research facility created for researchers and scientists to promote scientific discovery by providing the three Triangle Universities with world-class infrastructure and resources. RENCI engagement sites at the three university campuses (UNC, NCSU, and Duke) in the RTP area, as well as the RENCI Europa anchor site in Chapel Hill, NC, contain BEN PoPs (Points of Presence) that form a research testbed. Figure 5-1 illustrates RENCI's BEN dark fiber network, which is used to connect the engagement sites.

[4] Breakable Experimental Network: https://ben.renci.org
Figure 5-1 The diagram illustrates the BEN fiber footprint over the NCNI ring. Its touchpoints
include the Duke, NCSU and UNC Engagement Sites and RENCI’s anchor site, the Europa
Center.
BEN consists of two planes: a management plane, run over a mesh of secure tunnels provisioned over commodity IP, and a data plane, run over the dark fiber infrastructure. Access to the management plane is granted over a secure VPN; the management plane address space is private and non-routable.
Each RENCI PoP shares the same network architecture, which enables the following capabilities in support of regional network research:
•  A reconfigurable fiber switch (layer 0) provides researcher access to fiber ports for dynamically assigning new connections, which enables different physical topologies to be generated and connects different experiments to BEN fiber
•  Power and space to place equipment for performing experiments on BEN
•  RENCI additionally provides equipment to support collaborative research efforts:
   -  A reconfigurable DWDM (Dense Wavelength Division Multiplexing) layer (layer 1), which provides access to wavelengths to configure new network connections
   -  A reconfigurable switch/router (layers 2 and 3), which provides packet services
   -  Programmable compute platforms (3-4 server blades at each PoP)
Figure 5-2 Fig. (a) provides a functional diagram of a RENCI BEN PoP. The nodal architecture
indicates various types of equipment and power and space for researchers to collocate their
experimental equipment. Fig (b) provides a systems level network architecture perspective that
illustrates network connectivity between PoPs.
As shown in Figure 5-2, each BEN PoP is capable of accommodating equipment for several experiments, which can use the dark fiber exclusively in a time-shared fashion or, at higher layers, run concurrently.

Further details of a BEN PoP are shown in Figure 5-3, with detailed equipment and per-site configuration provided in the following text.
Polatis 32 Fiber Reconfigurable Optical Switch

The Polatis fiber switch enables connectivity between individual fibers in an automated fashion. The optical switch permits various scenarios for connecting the different pieces of equipment in a BEN PoP. In a traditional implementation, layer 2-3 devices will connect to layer 1 WDM equipment that connects to the optical switch. Alternatively, a layer 2-3 switch/router or experimental equipment can connect directly to the fiber switch. With the appropriate optics, it is entirely possible to directly connect end systems into the optical fiber switch for network connectivity to end systems located in other BEN PoPs.
•  Polatis implements beam-steering technology to collimate opposing interfaces from an input array to an output array of optical ports
•  Voltage signals control the alignment of the optical interfaces on both sides of the array
•  The fully reconfigurable switch allows connectivity from any port to any other port on the switch
•  Optical power monitors are inserted to accurately measure the strength of optical signals through the switch
•  32 fibers
•  Fully non-blocking and completely reconfigurable: any fiber may be connected to another fiber without needing to reconfigure existing optical fiber connections
•  The switch unit is optically transparent and fully bidirectional
•  The switch unit front panel comprises 32 LC/UPC type connectors
•  Switch unit height of 1RU, fitting a standard 19" rack
•  Ethernet interface and a serial RS-232 interface
•  Support for SNMP and TL1
•  See the fiber switch data sheet for a detailed specification of optical performance: http://www.polatis.com/datasheets/vst%20303-07-0-s%20aug%202007.pdf
   -  Base loss through switch core < 1.4 dB (see Performance Category H on the OST data sheet; excludes additional loss from optical power monitors, OPMs)
   -  Additional loss for a single OPM in the connected path < 0.2 dB (see Performance Category K-300 on the fiber switch data sheet)
   -  Additional loss for two OPMs in the connected path < 0.3 dB (see Performance Category K-500 on the fiber switch data sheet)
   -  Additional loss for three OPMs in the connected path < 0.5 dB
   -  Additional loss for four OPMs in the connection path < 0.6 dB
   (A short worked loss-budget example follows this list.)
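As a quick worked example of how the figures above combine (illustrative arithmetic only, using the data-sheet upper bounds):

    # Worst-case insertion-loss bound for a monitored path through the
    # Polatis switch: base core loss plus the per-OPM increment listed above.
    BASE_CORE_LOSS_DB = 1.4
    OPM_INCREMENT_DB = {0: 0.0, 1: 0.2, 2: 0.3, 3: 0.5, 4: 0.6}

    def max_path_loss_db(opms_in_path):
        """Upper bound on insertion loss for a connected path with N OPMs."""
        return BASE_CORE_LOSS_DB + OPM_INCREMENT_DB[opms_in_path]

    # Example: a path monitored by two OPMs stays below 1.4 + 0.3 = 1.7 dB
    print(max_path_loss_db(2))   # -> 1.7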
Infinera Digital Transport Node (DTN)

The Infinera DTN (Digital Transport Node) delivers advanced WDM infrastructure capabilities for the purpose of supporting network and application research. Dedicated to promoting experimentation in BEN, the DTN is fully reconfigurable to aid investigators in advancing network science. Within a BEN node, the DTN can accept a variety of client-side signals through the fiber switch to enable direct connectivity to experimental systems or other commercial network equipment, facilitating new proof-of-concept demonstrations. Reconfigurability at layer 1 in a non-production, pro-research testbed facility encourages fresh looks at network architectures by incorporating dynamic signaling of the DTN in new networking paradigms and integrating its functionality into novel approaches to exploring network phenomena.
Cisco Catalyst 6509-E

The Cisco 6509 provides a well-understood platform to support the high-performance networking needs and network research requirements at RENCI. Its support for nearly all standardized routing and switching protocols makes it advantageous for implementing different testbed scenarios and for customized experimentation over BEN. The 6509 enables collaboration between RENCI's engagement sites by integrating various computing, visualization, and storage technologies.
•  Support for Cisco's Internetwork Operating System (IOS)
•  Each 6509 is equipped with multiple 10 gigabit and 1 gigabit Ethernet interfaces
•  Redundant Supervisor Engine 720 (3BXL)
•  Catalyst 6500 Distributed Forwarding Card (3BXL)

Specifications URL:
http://www.cisco.com/en/US/prod/collateral/modules/ps2797/ps5138/product_data_sheet09186a00800ff916_ps708_Products_Data_Sheet.html
Juniper 24-port EX 3200 Series Switch

The Juniper EX switch provides a programmatic API that can be used to externally communicate with and control the switch through the use of XML. The advantage of this capability is that it enables another device, separate from the switch, to control the switch's behavior, providing separation of the control and forwarding planes.
•  Support for the Juniper Operating System (JUNOS)
•  XML application to facilitate a programmatic API for the switch
•  24-port 10/100/1000 Ethernet interfaces

Specifications URL:
http://www.juniper.net/products_and_services/ex_series/index.html
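As a rough sketch of driving such an XML API from an external controller, the example below uses the ncclient NETCONF library to fetch the running configuration from a JUNOS EX switch. The host and credentials are placeholders, and BEN's actual control integration may use a different transport or schema.

    # Hedged sketch: querying a JUNOS EX switch over its XML (NETCONF)
    # interface from an external controller, using ncclient. Host, port,
    # and credentials are placeholders.
    from ncclient import manager

    def fetch_running_config(host, user, password):
        with manager.connect(host=host, port=830, username=user,
                             password=password, hostkey_verify=False,
                             timeout=30) as conn:
            # Retrieve the running configuration as an XML document
            reply = conn.get_config(source="running")
            return reply.data_xml

    # Example (placeholders):
    # print(fetch_running_config("ex3200.example.org", "lab", "secret"))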
Node/Site configuration
Duke (status: operational)


Polatis 32 Fiber Reconfigurable Optical Switch
Non-blocking, optically transparent and fully bi-directional, LC connectors
Supports optical power measurement




Infinera Digital Transport Node (DTN) – 23” chassis
Band Multiplexing Module - C-band Only, OCG 1/3/5/7, Type 1 (Qty: 2)
Digital Line Module - C-band, OCG 1, Type 2 (Qty: 2)
Tributary Adaptor Module - 2-port, 10GR with Ethernet PM (Qty: 2)
Tributary Optical Module - 10G SR-1/I64.1 & 10GBase-LR/LW (Qty: 3)



Cisco Catalyst 6509 Enhanced 9-slot chassis, 15RU
Supervisor 720 Fabric MSFC3 PFC3BXL (Qty: 2)
48-port 10/100/1000 Ethernet Module, RJ-45, DFC-3BXL (Qty: 1)
8-port 10 Gigabit Ethernet Module, X2, DFC-3BXL (Qty: 1)

Juniper EX Switch, 1U
24-port 10/100/1000 Ethernet interface, RJ-45




Dell PowerEdge 860 Server (Qty: 3)
2.8GHz/256K Cache,Celeron533MHz
2GB DDR2, 533MHZ, 2x1G, Dual Ranked DIMMs
80GB, SATA, 3.5-inch 7.2K RPM Hard Drive
DRAC 4 Dell Remote Management PCI Card; 8X DVD
NCSU (status: under construction, due on-line 01/09)


Polatis 32 Fiber Reconfigurable Optical Switch
Non-blocking, optically transparent and fully bi-directional, LC connectors
Supports optical power measurement
Page 25 of 71
(DRAFT) Spiral 1 Substrate Catalog
GENI-INF-PRO-S1-CAT-01.5
January 20, 2009
Infinera Digital Transport Node (DTN) – 23” chassis




Band Multiplexing Module - C-band Only, OCG 1/3/5/7 (Qty: 2)
Digital Line Module - C-band, OCG 1 (Qty: 2)
Tributary Adaptor Module - 2-port, 10GR with Ethernet PM (Qty: 2)
Tributary Optical Module - 10G SR-1/I64.1 & 10GBase-LR/LW (Qty: 3)




Cisco Catalyst 6509 Enhanced 9-slot chassis, 15RU
Supervisor 720 Fabric MSFC3 PFC3BXL (Qty: 2)
48-port 10/100/1000 Ethernet Module, RJ-45, DFC-3BXL (Qty: 1)
8-port 10 Gigabit Ethernet Module, X2, DFC-3BXL (Qty: 1)
4-port 10 Gigabit Ethernet Module, XENPAK, DFC-3BXL (Qty:1)

Juniper EX Switch, 1U
24-port 10/100/1000 Ethernet interface, RJ-45




Dell PowerEdge 860 Server (Qty: 3)
2.8GHz/256K Cache,Celeron533MHz
2GB DDR2, 533MHZ, 2x1G, Dual Ranked DIMMs
80GB, SATA, 3.5-inch 7.2K RPM Hard Drive
DRAC 4 Dell Remote Management PCI Card; 8X DVD
UNC (status: operational)
 Polatis 32 Fiber Reconfigurable Optical Switch
- Non-blocking, optically transparent and fully bi-directional, LC connectors
- Supports optical power measurement
 Infinera Digital Transport Node (DTN) – 23” chassis
- Band Multiplexing Module - C-band Only, OCG 1/3/5/7 (Qty: 2)
- Digital Line Module - C-band, OCG 1 (Qty: 2)
- Tributary Adaptor Module - 2-port, 10GR with Ethernet PM (Qty: 2)
- Tributary Optical Module - 10G SR-1/I64.1 & 10GBase-LR/LW (Qty: 4)
 Cisco Catalyst 6509 Enhanced 9-slot chassis, 15RU
- Supervisor 720 Fabric MSFC3 PFC3BXL (Qty: 2)
- 48-port 10/100/1000 Ethernet Module, RJ-45, DFC-3BXL (Qty: 2)
- 8-port 10 Gigabit Ethernet Module, X2, DFC-3BXL (Qty: 2)
 Juniper EX Switch, 1U
- 24-port 10/100/1000 Ethernet interface, RJ-45
 Dell PowerEdge 860 Server (Qty: 3)
- 2.8GHz/256K Cache Celeron, 533MHz FSB
- 2GB DDR2 533MHz (2x1GB), Dual Ranked DIMMs
- 80GB, SATA, 3.5-inch 7.2K RPM Hard Drive
- DRAC 4 Dell Remote Management PCI Card; 8X DVD
Renci (status: operational)
 Polatis 32 Fiber Reconfigurable Optical Switch
- Non-blocking, optically transparent and fully bi-directional, LC connectors
- Supports optical power measurement
 Infinera Digital Transport Node (DTN) – 23” chassis
- Band Multiplexing Module - C-band Only, OCG 1/3/5/7 (Qty: 2)
- Digital Line Module - C-band, OCG 1 (Qty: 2)
- Tributary Adaptor Module - 2-port, 10GR with Ethernet PM (Qty: 4)
- Tributary Optical Module - 10G SR-1/I64.1 & 10GBase-LR/LW (Qty: 8)
 Cisco Catalyst 6509 Enhanced 9-slot chassis, 15RU
- Supervisor 720 Fabric MSFC3 PFC3BXL (Qty: 2)
- 48-port 10/100/1000 Ethernet Module, RJ-45, DFC-3BXL (Qty: 2)
- 8-port 10 Gigabit Ethernet Module, X2, DFC-3BXL (Qty: 2)
 Juniper EX Switch, 1U
- 24-port 10/100/1000 Ethernet interface, RJ-45
 Dell PowerEdge 860 Server (Qty: 3)
- 2.8GHz/256K Cache Celeron, 533MHz FSB
- 2GB DDR2 533MHz (2x1GB), Dual Ranked DIMMs
- 80GB, SATA, 3.5-inch 7.2K RPM Hard Drive
- DRAC 4 Dell Remote Management PCI Card; 8X DVD
BEN is represented in the GENI effort by RENCI. Following the established operational model,
GENI will become a project on BEN, and RENCI personnel will serve as an interface between BEN and
the external research community for the purposes of setup, operational support and scheduling. This
will allow for the use of RENCI-owned resources on BEN, such as the Infinera DTN platforms, the Cisco
6509, the Juniper routers, etc. If other groups within the NC research community desire to contribute
equipment to GENI, their equipment can be accommodated at BEN PoPs and connected to BEN using
the interfaces described in the list above.
5.2 GENI Resources
Discuss the GENI resources offered by your substrate contribution and how they are shared,
programmed, and/or configured. Include in this discussion to what extent you believe the shared
resources can be isolated.
BEN is a regional networking resource that is shared by the NC research community. BEN operates
in a time-shared fashion with individual experiments scheduling time on BEN exclusively, or, when
possible, concurrently. Currently BEN scheduling is manual. Our plans for automatic scheduling and
provisioning are centered on adapting Duke’s ORCA framework to BEN to become its de facto
scheduling solution. Researchers are represented in BEN by their projects and allocation of time on
BEN is done on a per-project basis. Each project has a representative in the BEN Experimenter Group
(BEN EG), and the EG is the body that is responsible for managing BEN time allocations and resolving
conflicts.
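For illustration only (this is not the ORCA implementation), the following minimal sketch shows the kind
of per-project, exclusive-or-concurrent time allocation described above; project names and time values
are placeholders.

# Illustrative sketch (not the ORCA implementation): per-project time
# reservations on BEN, where an exclusive reservation may not overlap anything
# and concurrent reservations may only overlap other concurrent ones.
from dataclasses import dataclass

@dataclass
class Reservation:
    project: str
    start: float      # hours, relative to some epoch
    end: float
    exclusive: bool

def conflicts(a: Reservation, b: Reservation) -> bool:
    overlap = a.start < b.end and b.start < a.end
    return overlap and (a.exclusive or b.exclusive)

def try_reserve(schedule: list, req: Reservation) -> bool:
    """Admit the request only if it conflicts with no existing reservation."""
    if any(conflicts(req, r) for r in schedule):
        return False
    schedule.append(req)
    return True

schedule = []
assert try_reserve(schedule, Reservation("projA", 0, 4, exclusive=True))
assert not try_reserve(schedule, Reservation("projB", 2, 6, exclusive=False))
assert try_reserve(schedule, Reservation("projB", 4, 8, exclusive=False))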
5.3 Aggregate Physical Network Connections (Horizontal Integration)
Provide an overview of the physical connections within the substrate aggregate, as well as between the
aggregate and the GENI backbones. Identify to the extent possible, non-GENI equipment, services and
networks involved in these connections.
BEN is an experimental testbed designed to allow networking experiments to run in
isolation. However, some experiments may require external connectivity to national backbones or other
resources. BEN external connectivity is achieved through an interface between BEN’s Cisco 6509 and a
production Cisco 7609 router at RENCI’s Europa anchor site. This router allows interconnecting BEN’s
data plane with either a regional provider (NCREN) or national backbones (NLR or, via NCREN, I2).
Due to the production nature of the Cisco 7609, which also serves to support RENCI’s global
connectivity, as well as due to security concerns, connections from BEN to the outside world are
currently done by special arrangements.
 Infinera utilizes Photonic Integrated Circuits (PICs) implemented in Indium Phosphide as a substrate to integrate opto-electronic functions
 Each DTN node introduces an OEO operation, which provides signal clean-up, digital power monitoring, sub-wavelength grooming, muxing and add/drop
 The DTN architecture simplifies link engineering and service provisioning
 GMPLS control plane enables end-to-end services and topology auto-discovery across an Infinera DTN network
Figure 5-3: BEN external connectivity diagram – access to BEN occurs directly through Renci’s 10
GigE NLR FrameNet connection and through Renci’s production router that is used for commodity
Internet, Internet2 and NLR connectivity (via NCREN).
Figure 5-3 and Table 7-1 summarize the possibilities for external connections to the BEN data plane.
Table 7-1: BEN External connectivity
Connection: BEN; Type: IP; Interface: 10 GigE; Purpose: Enables remote access to BEN through a directly connected IP interface on Renci's 7609 router.
Connection: NCREN; Type: IP/BGP; Interface: 10 GigE; Purpose: Primary connectivity to commodity Internet and R&E networks (Internet2 and NLR PacketNet); can be used for layer 2 connectivity to Internet2's Dynamic Circuit Network (DCN).
Connection: NLR FrameNet; Type: Ethernet; Interface: 10 GigE; Purpose: Experimental connectivity to the national layer 2 switch fabric (FrameNet); it can also be used to implement layer 2 point-to-point connections through NLR, as well as connectivity to C-Wave.
5.4 Aggregate Manager Integration (Vertical Integration)
Identify the physical connections and software interfaces for integration into your cluster's assigned
control framework.
[Diagram: the GENI Aggregate Manager (ORCA) controlling the BEN components – Polatis (TL1, SNMP), Infinera (TL1, SNMP, proprietary XML, GMPLS UNI), Juniper (CLI, published XML), Cisco (CLI, SNMP), and Linux PCs – plus additional components.]
Polatis - TL1 and SNMP for configuration and status.
Infinera - TL1, SNMP and proprietary XML for configuration and status. GMPLS UNI support for
service provisioning.
Cisco - CLI and SNMP for configuration and status.
Juniper - CLI, published XML interface for configuration and status.
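As an illustration of how such status interfaces might be polled from an external manager, the following
minimal sketch shells out to the standard net-snmp snmpget tool; the hostnames and community string are
placeholders, and this is not the ORCA integration itself.

# Minimal sketch: poll a device's sysDescr over SNMP by shelling out to the
# net-snmp "snmpget" command. Hostnames and the community string below are
# placeholders; production management would go through the aggregate manager.
import subprocess

def snmp_sysdescr(host, community="public"):
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", community, host, "1.3.6.1.2.1.1.1.0"],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

for device in ["polatis.ben.example.net", "cisco6509.ben.example.net"]:
    print(snmp_sysdescr(device))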
5.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
Polatis - 20 directional optical power monitors (OPMs), distributed as follows:
 Fibers 1, 2, 17 and 18: input and output OPMs
 Fibers 3-8 and 19-24: one directional OPM each
 Fibers 9-16 and 25-32: no OPMs
Input optical power monitors measure light entering the switch; output optical power monitors measure light exiting the switch.
5.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
Renci is developing a graphical user interface with multiple views (based on credentials) to operate the
Polatis optical switch.
6 CMU Testbeds
Substrate Technologies: Small scale Emulab testbed, Residential small form-factor machines, Wireless
emulator
Cluster: C
Project wiki: http://groups.geni.net/geni/wiki/CmuLab
6.1 Substrate Overview
Provide an overview of the hardware systems in your contribution.
Three distinct testbeds and hardware systems:
CMULab is essentially a small version of the Emulab testbed. It has 10 experimental nodes and two
control nodes. Each experimental node has two Ethernet ports: a control network port and an
experimental network port. The machines are connected to a reconfigurable Ethernet switch, just as in
Emulab.
Homenet is a distributed testbed of small form-factor machines. We're still in the process of negotiating
the deployment, but at this point, it looks like the initial major deployment will be in a residential
apartment building in Pittsburgh. The nodes will be connected to cable modem connections in
individual apartments. Each node has 3 wireless network interfaces.
The CMU Wireless Emulator is a DSP-based physical wireless emulator with a rack of laptops
connected to it. The emulator can provide, within a range of capabilities, arbitrary physical-layer
effects (attenuation, different room shapes for multipath, interference, etc.). The wireless emulator
is physically connected to the CMULab rack over Gigabit Ethernet.
6.2 GENI Resources
Discuss the GENI resources offered by your substrate contribution and how they are shared,
programmed, and/or configured. Include in this discussion to what extent you believe the shared
resources can be isolated.
All nodes can be programmed by users. The Homenet nodes are like PlanetLab or RON testbed nodes:
Users can get accounts and run user-level programs or interface with the raw network. They probably
cannot change the kernel. The emulator and CMULab nodes are fully controllable by experimenters,
including different OS support, etc. All three testbeds can be shared using space sharing.
The wireless emulator testbed also supports wireless channels that can be fully controlled by the user.
For an N-node experiment, the user has N*(N-1)/2 channels that can be controlled. Control is typically
specified through a Java program, although control through a GUI or script is also supported.
Channels belonging to different experiments are fully isolated from each other.
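The channel control interface itself is the Java program mentioned above; the short sketch below only
illustrates enumerating the N*(N-1)/2 pairwise channels and attaching a hypothetical attenuation value to
each one.

# Illustrative only: enumerate the N*(N-1)/2 pairwise channels of an N-node
# emulator experiment and assign each a (hypothetical) fixed attenuation.
# The real emulator is driven by a Java control program, not this code.
from itertools import combinations

def channel_plan(nodes, attenuation_db=30.0):
    return {(a, b): attenuation_db for a, b in combinations(nodes, 2)}

plan = channel_plan(["n1", "n2", "n3", "n4"])
assert len(plan) == 4 * 3 // 2        # N*(N-1)/2 channels
for (a, b), att in plan.items():
    print(f"channel {a}<->{b}: {att} dB")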
6.3 Aggregate Physical Network Connections (Horizontal Integration)
Provide an overview of the physical connections within the substrate aggregate, as well as between the
aggregate and the GENI backbones. Identify to the extent possible, non-GENI equipment, services and
networks involved in these connections.
The homenet nodes are located in houses. They require a commodity DSL or cable connection. The
CMULab and Emulator are connected to our campus network, which connects to I2, and from there to
ProtoGENI.
6.4 Aggregate Manager Integration (Vertical Integration)
Identify the physical connections and software interfaces for integration into your cluster's assigned
control framework
Resource allocation for all three testbeds is done using CMULab, which uses the ProtoGENI software
from Utah. Users use an ns-2-like script to specify how many nodes they need.
To a first order, CMULab views the emulator as a pool of special laptops, and users can specify how
many they need. Some laptops have slightly different capabilities (multi-path, software radios, etc.),
and users can specify these in the script they give to CMULab. Users can also specify the OS, software,
etc. that they need on the laptops.
Experiment swap-in by a user currently requires a temporary solution. After the CMULab swap-in is
completed (which means that CMULab has set up the laptops with the right software, etc.), the user logs
into the emulation controller and runs a control program that syncs up with CMULab and sets up the
emulator hardware to be consistent with the experiment. The user can then run experiments, and must
run the control program again after the CMULab swap-out. An autonomous solution, i.e. one requiring no
user intervention, is in the plans for this project.
6.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
Standard machine monitoring only - interface monitoring, tcpdump/pcap, nagios-style monitoring of
machine availability, etc.
6.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
7 DOME
Substrate Technologies: Metro mobile network, compute nodes, variety of wireless interfaces, GPS
devices, wireless access points
Cluster: D
Project wiki: http://groups.geni.net/geni/wiki/DOME
7.1 Substrate Overview
Provide an overview of the hardware systems in your contribution.
The DOME test bed consists of 40 buses, each equipped with:
 A computer
- 1GHz Intel Celeron processor
- 1GB RAM
- 60GB hard drive
- Linux 2.6
 Mini PCI 802.11g card with an Atheros chipset
 802.11 access point connected via 100Mb Ethernet port
 GPS device connected via USB
 3G cellular modem connected via USB
 900MHz Digi XTend radio connected via USB
[Diagram: DOME bus node architecture – Xen dom0 and a guest domain on the bus brick, with 802.11 (ath0), 3G cellular (ppp0) and Ethernet (eth0) interfaces; the wireless AP serves bus passengers, 802.11 reaches other buses and the Amherst mesh or open APs, and the cellular link reaches the aggregate manager at UMass via the commercial Internet and Internet2.]
Bus-to-bus connections initially will be over 802.11 links. The second year of this project will add
900MHz radio links for bus-to-bus connections. The cellular link is dedicated to control plane
connectivity.
7.2 GENI Resources
Discuss the GENI resources offered by your substrate contribution and how they are shared,
programmed, and/or configured. Include in this discussion to what extent you believe the shared
resources can be isolated.
The primary resource provided is access to the test bed. A researcher will have access to all or part
of the mobile network, including the mobile environment, as opposed to slivers or slices of a specific
hardware resource. Initially, the granularity of a slice will likely be access to the entire network, i.e., all
buses, over a period of time, such as a day or two. With respect to the specific hardware listed in
Section 7.1 above, the following applies.
 The computer
- Researchers will run experiments within an instance of a Xen Linux virtual machine,
i.e., guest domain. The user will be able to execute as root.
- The host domain (dom0) is expected to be relatively inactive, so most of the CPU
cycles should be available to the guest domain.
- 512MB of the RAM is expected to be available to the guest domain.
- Uploading of a user-supplied file representing a disk image, which will be mounted as a
disk partition in the guest domain, will require DOME operations involvement. The
size limit of the partition is to be determined.
 802.11 PCI device
- The PCI address of the 802.11 card will be exposed to the guest domain. By default,
the Atheros madwifi driver will load when the guest domain boots, though the user is
free to replace driver modules. The guest domain will have full, unshared access to the
device.
 802.11 access point
- The bus access points are recognizable by their SSIDs. The guest domain will have an
IP address assigned to an Ethernet port on the same subnet as the access point.
- A mechanism for a bus to determine the IP addresses of other buses will be provided.
 GPS device
- The GPS device is owned by dom0. A standard gpsd daemon will run in dom0. The
guest domain interfaces with gpsd via TCP to get GPS information (a minimal client
sketch follows this list).
 3G cellular modem
- This is a shared device managed by dom0. Since it is a mobile network, cellular
connectivity is never assured. Dom0 will be responsible for maximizing the amount of
time cellular connectivity is available.
- The cellular link is the link used by the control plane.
- The cellular link is not directly exposed to the guest domain, but instead traffic is
routed through the guest domain's Ethernet link.
 900MHz radio
- This is not a shared device; it will be exclusively managed by the guest domain.
- This is a Year 2 deliverable.
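As a rough illustration of the gpsd access described for the GPS device above, the following minimal
client sketch connects to gpsd over TCP (default port 2947). The JSON ?WATCH protocol shown is that of
recent gpsd releases and is an assumption about the gpsd version deployed on the buses.

# Minimal sketch of a guest-domain gpsd client. gpsd listens on TCP port 2947;
# the JSON ?WATCH protocol used here is that of recent gpsd releases and is
# assumed (not confirmed) for the gpsd version deployed on the DOME nodes.
import json
import socket

def read_fixes(host="127.0.0.1", port=2947, count=5):
    with socket.create_connection((host, port)) as sock:
        sock.sendall(b'?WATCH={"enable":true,"json":true}\n')
        stream = sock.makefile("r")
        seen = 0
        for line in stream:
            report = json.loads(line)
            if report.get("class") == "TPV":      # time-position-velocity report
                print(report.get("lat"), report.get("lon"))
                seen += 1
                if seen >= count:
                    break

if __name__ == "__main__":
    read_fixes()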
7.3 Aggregate Physical Network Connections (Horizontal Integration)
Provide an overview of the physical connections within the substrate aggregate, as well as between the
aggregate and the GENI backbones. Identify to the extent possible, non-GENI equipment, services and
networks involved in these connections.
Bus to bus via 802.11
There are two primary mechanisms for buses to directly communicate with each other. The first is
for the Atheros PCI client to detect the access point on another bus and establish TCP/IP
communication. This involves associating with the AP and joining the other bus's subnet. As mentioned
above, a means to determine the IP address of each guest domain will be provided.
The second method is for the guest domain to place an Atheros PCI card in server/AP mode and for
PCI cards on separate buses to directly interoperate. This allows the researcher to bypass the traditional
TCP/IP stack and experiment with new protocols.
Note that the guest domain has complete control over, and responsibility for, the PCI WiFi device.
This means that the guest domain can perform scans for SSIDs and BSSIDs, and it controls the policy
for determining 802.11 associations.
Bus to Internet via the 802.11 mesh
Since the guest domain has complete control over the 802.11 association policy, experiments can
also choose to connect to the Town of Amherst and UMass 802.11 meshes using TCP/IP. Once
connected, traffic can be routed through the Internet (or Internet2) and be interconnected with other
GENI networks.
Bus to Internet via the cellular link
As previously stated, the cellular link is a shared device to be used by the control plane. Dom0 uses
the link for uploading log files, downloading GENI experiments, and providing status. The guest
domain may also use the cellular link for similar functions while it is executing.
Bus to bus via the 900MHz radios
This will be available in Year 2. The 900MHz radios will provide a long-range, low-speed point-to-point link between buses.
7.4 Aggregate Manager Integration (Vertical Integration)
Identify the physical connections and software interfaces for integration into your cluster's assigned
control framework
As stated, the 3G cellular link is used by the control plane. Since integration with the framework is a
deliverable for the second half of the first year, the interfaces are TBD. Broadly, we will provide
mechanisms to install experiments, reserve access to the test bed, and schedule experiments.
Note that the characteristics of a mobile test bed are quite different from other test beds. The computers
often are not running, and when they do run they can be abruptly powered off. Even when the computer
is running, there may be no available communication lines. Furthermore, cellular links are typically
fire-walled to prevent inbound connections, and the assigned IP addresses are very dynamic.
Therefore, the staging of an experiment is a precondition for scheduling an experiment. The paradigm
of assigning resources and then loading the experimental software is not the correct model.
Furthermore, the standard operating procedure of ssh'ing into test systems does not apply (though
intermittent ssh tunneling is conceivable). Experiments must be able to run unattended, and the ability
to asynchronously offload experimental results is paramount.
7.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
When the computers come up they do a self-check of their health and log the results to a UMass
server. Once running, the computers on the buses continuously attempt to report their GPS coordinates
so that we can determine that a computer is running and track the bus's location.
Traditionally we have recorded a wealth of information, such as SSIDs discovered, whether a bus
could communicate with the mesh or other buses, throughput information, etc. The control of the
devices has now been ceded to the GENI experiments, and the GENI researchers are free to measure
whatever they find useful. The information for 802.11 includes available SSIDs and BSSIDs, signal
strength, noise level, and type of security that may be enabled. Since both the raw PCI device and root
access are available, all traffic, including beacons, can be recorded using standard tools such as tcpdump.
With respect to the GPS device, both interpreted data and access to the raw sentences are available.
7.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
We have two definitions of users: a GENI researcher who runs his or her own software on our
computers, and bus riders who may indirectly use our computers for services such as Internet access.
With respect to the former, we offer the ability to pre-stage software on the computers. In other
words, we will deal with the issues involved in distributing software to multiple, intermittently
connected devices. However, we believe that the interface for this should eventually be provided by the
framework, though the actual mechanism would be specific to the test bed. Another service that we
provide is asynchronous logging. Due to intermittent connectivity, a GENI experiment cannot be
assured that it can offload its collected data during its allocated time. Therefore, we will provide a
mechanism to continue offloading data even though the experiment is no longer running.
As for the second type of user, we do provide Internet access to the bus riders. This is done by bus
riders connecting to the access points, and dom0 routing traffic through the shared cellular link. We are
also working with the UMass Transit Authority to provide bus tracking using the GPS coordinates that
we report.
An opportunity for the GENI community is to run experiments that offer services to bus riders.
8 Enterprise GENI
Substrate Technologies: Campus-wide programmable Ethernet switches
Cluster: B
Project wiki: http://groups.geni.net/geni/wiki/EnterpriseGeni
8.1 Substrate Overview
Provide an overview of the hardware systems in your contribution.
A small lab test network of about 5 switches (mostly 48 x 1 Gb/s), and the Stanford CS building network
with up to 23 HP ProCurve 5400 switches (48 x 1 Gb/s).
8.2 GENI Resources
Discuss the GENI resources offered by your substrate contribution and how they are shared,
programmed, and/or configured. Include in this discussion to what extent you believe the shared
resources can be isolated.
Full isolation for packets (at L2 and L3) and isolation of routing/switching mechanisms, but no
bandwidth isolation.
8.3 Aggregate Physical Network Connections (Horizontal Integration)
Provide an overview of the physical connections within the substrate aggregate, as well as between the
aggregate and the GENI backbones. Identify to the extent possible, non-GENI equipment, services and
networks involved in these connections.
Most links are 1Gb/s with a few 10Gb/s links. Connection to the GENI backbone is TBD, but
we have connectivity (L2 or L3) to Internet2 via tunnels over Calren.
8.4 Aggregate Manager Integration (Vertical Integration)
Identify the physical connections and software interfaces for integration into your cluster's assigned
control framework
8.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
This is TBD. Via OpenFlow it should generally be possible to gather statistics at per-flow-entry
granularity on the routers/switches directly.
8.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
Users will need to operate an OpenFlow controller to control forwarding decisions in the
network (we can provide the software for such a controller). This allows them complete control
over L2/L3 forwarding decisions, including experimental protocols that are not IP based.
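The short sketch below only illustrates the kind of L2 learning logic such a controller typically
implements; the packet-in callback shown is hypothetical and does not correspond to the controller
software we distribute or to any particular OpenFlow library.

# Illustration of the L2 learning logic an OpenFlow controller might implement.
# The packet-in/flow-install plumbing is hypothetical; it does not use any
# particular OpenFlow controller library or the software mentioned above.
FLOOD = -1

class LearningSwitch:
    def __init__(self):
        self.mac_to_port = {}          # learned MAC address -> switch port

    def on_packet_in(self, src_mac, dst_mac, in_port):
        """Return (out_port, install_flow): where to forward the packet and
        whether a flow entry can be installed for subsequent packets."""
        self.mac_to_port[src_mac] = in_port
        out_port = self.mac_to_port.get(dst_mac, FLOOD)
        return out_port, out_port != FLOOD

sw = LearningSwitch()
print(sw.on_packet_in("aa:aa", "bb:bb", in_port=1))   # unknown dst -> flood
print(sw.on_packet_in("bb:bb", "aa:aa", in_port=2))   # learned -> port 1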
9 SPP Overlay
Substrate Technologies: High performance overlay hosting nodes, netFPGA cards
Cluster: B
Project wiki: http://groups.geni.net/geni/wiki/OverlayHostingNodes
9.1 Substrate Overview
Provide an overview of the hardware systems in your contribution.
In this project, we will acquire, assemble, deploy and operate five high performance overlay hosting
platforms, and make them available for use by the research community as part of the emerging GENI
infrastructure. These systems will be hosted in the Internet 2 network at locations to be determined in
the course of future discussions. Our Supercharged PlanetLab Platform (SPP) has a scalable
architecture, supports a flexible ratio of compute power to IO capacity, and is built on power- and
space-efficient, industry-standard components that will continue to scale in performance in keeping with
ongoing improvements in technology.
Figure 9-1 SPP Hardware components and photo of an SPP node
Figure 9-1 shows the hardware components of an SPP node. All input and output occurs through the
Line Card (LC), which is an NP subsystem with external IO interfaces. The LC forwards each arriving
packet to the system component configured to process it, and queues outgoing packets for transmission.
The General Purpose Processing Engines (GPE) are conventional dual-processor server blades running
the PlanetLab OS and hosting vServers that serve application slices. The Network Processing Engines
(NPE) are NP subsystems. The NPE-IXP includes two Intel IXP 2850 NPs, with 17 internal processor
cores, 3 banks of SDRAM, 3 banks of QDR SRAM and a Ternary Content Addressable Memory
(TCAM). The Control Processor (CP) is a separate server hosting the software that coordinates the
operation of the system as a whole. The chassis switch is a 10 GbE switch with 20 ports, and the
external switch is a 1 GbE switch with 24 ports plus four expansion slots that can be used for 10 GbE
interfaces (one of these is in use). Figure 9-1 also includes a photograph of the SPP, and
Figure 9-2 lists all the parts used to assemble the SPP.
Qty  Description                                      Supplier  Model
1    Dual IXP 2850 Network Processor Module with IO   Radisys   A7K-PPM10-CFG002
1    10 Gb/s Fabric Interface Card                    Radisys   A7010-FIC-2X10G
3    1 GE Copper SFP Modules (4 per kit)              Radisys   A2K-SFP-C
1    Dual IXP 2850 Network Processor Module           Radisys   A7010-BASE-2855
1    10 Gb/s Fabric Interface Card                    Radisys   A7010-FIC-2X10G
1    18 MB IDT TCAM Module                            Radisys   A7010-TCAM-01-R
1    10 GE/1GE Switch & Control Module                Radisys   A2210-SWH-CFG-01
1    RTM with extra IO ports                          Radisys   A5010-SPM-01
1    10GE Copper XFP modules                          Radisys   A2K-XFP
3    1 GE Copper SFP Modules (4 per kit)              Radisys   A2K-SFP-C
2    Server blade with 2 dual-core Xeon processors    Radisys   A4310-CPU-SAS-FE
2    37 GB Disk (SAS)                                 Maxtor    8K036S0
2    4 GB DDR2 RAM                                    Crucial   CT2KIT12872AA53E
1    Zephyr 6 Slot ATCA Shelf                         Schroff   ZR5ATC6TMDPEM2N
1    Shelf Manager                                    Schroff   21593-375
1    Alarm Board                                      Schroff   ISAP2
1    1U Power Supply Shelf                            Unipower  TPCPR1U3B
1    48 Vdc/25A Power Supply                          Unipower  TPCP7000
1    115 Vac/15A Power Cord                           Unipower  364-1409-0000
1    1U rack-mount server with dual PCI slots         Dell      Poweredge 860
1    24 Port Gigabit L3 Switch with 4, 10GE Uplinks   Netgear   GSM7228S
1    10 GE Adapter Module                             Netgear   AX741
1    10 GE Fiber Transceiver (Short Range)            Netgear   AXM751
1    netFPGA boards                                   Digilent  NETFPGA
Figure 9-2 SPP Parts List
9.2 GENI Resources
Discuss the GENI resources offered by your substrate contribution and how they are shared,
programmed, and/or configured. Include in this discussion to what extent you believe the shared
resources can be isolated.
The SPP nodes can be used by researchers to host overlays. They can be used in exactly the same way
as existing PlanetLab nodes, but they also offer additional capabilities. In particular, they allow users to
map the performance-critical portions of their overlay nodes onto a “fast-path” that runs on an IXP
2850 network processor. We will be providing an API that users can access from within their slice to
configure a fast path and to reserve bandwidth on the SPP’s external interfaces. Details of the API will
be provided on the project web page when the SPP nodes are made available to users. The system will
provide bandwidth isolation (through per-slice queues) and performance isolation on the NPEs.
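Since the fast-path API has not yet been published, the following sketch is purely hypothetical; it only
illustrates the two concepts described above (configuring a fast path and reserving interface bandwidth
for a slice), with all names and parameters invented for illustration.

# Purely hypothetical sketch: the SPP fast-path/bandwidth API has not been
# published, so the names and parameters below are illustrative only. It shows
# the two ideas described above: configuring an NPE fast path for a slice and
# reserving bandwidth on an external interface.
from dataclasses import dataclass, field

@dataclass
class FastPathRequest:
    slice_name: str
    code_option: str                  # packet-processing code to load on the NPE (illustrative)
    reserved_mbps: dict = field(default_factory=dict)   # interface -> Mb/s

def build_request(slice_name):
    req = FastPathRequest(slice_name=slice_name, code_option="ipv4_forwarder")
    req.reserved_mbps["ext0"] = 200   # per-slice queue on an external interface
    return req

print(build_request("my_overlay_slice"))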
9.3 Aggregate Physical Network Connections (Horizontal Integration)
Provide an overview of the physical connections within the substrate aggregate, as well as between the
aggregate and the GENI backbones. Identify to the extent possible, non-GENI equipment, services and
networks involved in these connections.
The SPP nodes will be located in Internet 2 POPs and will be connected by multiple gigabit links,
as indicated in the diagram in Figure 11-3 (the numbers on the links in the diagram indicate the number
of gigabit links). Our preferred locations are Los Angeles, Kansas City, Houston, Washington DC and
Atlanta, but this is subject to negotiation with Internet 2. The SPPs have 1 GbE interfaces (copper) and
will need to connect to the I2 infrastructure using those interfaces. This may require that the I2
infrastructure at certain locations be augmented to enable the mapping of 1 GbE links onto their optical
backbone. Each site will also require multiple 1 GbE interfaces to the I2 router at the site. Each of these
interfaces will require an Internet 2 IP address that is routable from all the connected I2 universities. In
addition, the CP for each SPP node will require its own “backdoor” network connection with its own
routable Internet 2 IP address. This is required to enable management access in situations where one or
more of the other hardware components is not operating correctly. This can be a low bandwidth
connection (e.g. 10 Mb/s or 100 Mb/s).
Figure 11-3 Internet2 PoP’s and Desired SPP Node Locations
9.4 Aggregate Manager Integration (Vertical Integration)
Identify the physical connections and software interfaces for integration into your cluster's assigned
control framework
The physical connections required for the SPP nodes have been detailed above. No other physical
connections are required. Initially, the SPP nodes will implement the standard interfaces provided by
PlanetLab nodes. Over the course of the project, the interface will be modified to be compatible with the
GENI control framework. In particular, we will implement a component interface and use rspecs and
tickets to manage the use of SPP resources, in place of the local allocation mechanisms that will be used
in the initial phase of the project.
9.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
The system will implement the standard PlanetLab mechanisms for auditing outgoing traffic. In
addition, the core components provide a large number of low level traffic counters that will be
accessible to users and can be used to generate real-time charts of traffic associated with a slice. Details
of these mechanisms will be developed over the course of the project. The system will not include
mechanisms for measuring fine-grained information about a slice’s operation. This can be done more
effectively by the slice developer than by the substrate. The system will also not provide mechanisms for
storing large quantities of measurement results for individual slices.
9.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
As already mentioned, the ability to map the performance-critical part of an application onto a fast-path
that runs on an NPE is a unique capability of the SPP nodes. We also provide the ability to reserve
interface bandwidth and NP processing cycles. In addition, we plan to support use of a NetFPGA in
each SPP node. However, this capability will become available only towards the end of the project.
10 Proto-GENI
Substrate Technologies: Backbone sites, programmable 10GbE Ethernet switches, PCs w/ NetFPGA cards
Cluster: C
Project wiki http://groups.geni.net/geni/wiki/ProtoGENI
10.1 Substrate Overview
Provide an overview of the hardware systems in your contribution.
The main substrate resource to be built as a part of ProtoGENI will be a nationwide backbone. This
backbone will be built on top of the Internet2 wave infrastructure, will use HP switches to provide
VLAN connectivity across that infrastructure, and will include PCs and NetFPGA cards (hosted in the
PCs) at backbone sites. The plan for Spiral 1 is to get this hardware installed at 8 colocation sites across
the country, with 10Gbps of dedicated bandwidth between sites.
ProtoGENI will also integrate with a number of substrate resources in "Cluster C". It will also include,
and federate with, existing Emulab-based testbeds, which have hardware such as PCs with 802.11
interfaces, network processors, software radio peripherals, Cisco routers, cluster PCs, and Ethernet
switching infrastructure. (This writeup covers the backbone only.)
Figure 12-1. Planned Deployment of SPP nodes in Internet 2
Nodes will consist primarily of the following equipment:
 HP ProCurve 5400 switches
 NetFPGA cards
 1U Rackmount PCs, exact model TBD
The switches will use the following hardware:
 1x ProCurve 5406zl chassis (6 slots) J8697A
 2x 875W power supplies J8712A
 1x 4-port 10-GbE X2 module J8707A
 2-3x 10-GbE LR Optics J8437A
 1-2x line cards with 20 10/100/1000 copper Ethernet ports + 4 mini-GBIC 1Gbps slots J8705A
 1x Mini-GBIC SC-LC 1Gbps Ethernet optics J4858C
The number of interconnects between PCs, NetFPGAs, and switches will depend on the out-degree of
the Internet2 site: some are degree-3, and some are degree-2. The switch at each site will have one
10Gbps Ethernet interface per out-degree of the site, connected to Internet2's DWDM equipment.
Each PC will have at least one 1Gbps Ethernet interface per out-degree of the site, which will be
connected to the switch. Similarly, each NetFPGA will have one interface per degree connected to the
switch. Where more ports are available, we may increase the number of interfaces in order to utilize the
inter-site bandwidth as fully as possible.
At sites where Internet2 has an IP router, the switch will have a single 1Gbps fiber Ethernet connection
to the router. This will be used to give each PC one IP address from Internet2's IP space; this will be
used to get traffic from end users into and out of the backbone. (Since Internet2 currently does not carry
traffic to/from the commercial Internet, this will limit end users to Internet2-connected institutions for
the time being.) Each PC will have one network interface dedicated to this external connectivity, in
addition to those listed above.
10.2 GENI Resources
Discuss the GENI resources offered by your substrate contribution and how they are shared,
programmed, and/or configured. Include in this discussion to what extent you believe the shared
resources can be isolated.
The ProtoGENI backbone will have a 10 Gbps wave on the Internet2 infrastructure. We will run Ethernet
on these waves, and slice it with VLANs. Researchers will not be able to program the Ethernet switches
directly, but they will be able to select the topology of VLANs to run on top of the infrastructure, and
we hope to enable OpenFlow on these switches, allowing experimenters direct control over the
forwarding tables in them.
VLANs will prevent slices from seeing each others' traffic. They will not provide QoS-like guarantees
for performance isolation; however, we will use three different techniques to prevent over-subscription
of these resources. First, request RSpecs and tickets will include the bandwidth to be used by slices; we
will track the bandwidth promised to slices, and not over-book it. Second, shared PCs will use host-based traffic shaping to limit slices to the bandwidths they have been promised. Third, for components
over which experimenters have full control (and thus host-based limits would not be enforceable), we
will use traffic limiting technologies of our backbone switches to enforce limits on VLANs attached to
those components.
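As a rough illustration of the bandwidth bookkeeping described above, the following sketch tracks the
bandwidth promised on each backbone link and rejects requests that would over-book it; the link names
and the assumed 10 Gbps capacity are placeholders.

# Sketch of the bandwidth bookkeeping described above: track the bandwidth
# promised to slices on each backbone link and refuse requests that would
# over-book it. Link names and the 10 Gbps capacity are placeholders.
LINK_CAPACITY_MBPS = 10_000

class BandwidthBroker:
    def __init__(self, links):
        self.committed = {link: 0 for link in links}

    def admit(self, path, mbps):
        """Admit a slice's request over a list of links, or reject it whole."""
        if any(self.committed[l] + mbps > LINK_CAPACITY_MBPS for l in path):
            return False
        for l in path:
            self.committed[l] += mbps
        return True

broker = BandwidthBroker(["SALT-KANS", "KANS-HOUS"])
print(broker.admit(["SALT-KANS", "KANS-HOUS"], 4_000))   # True
print(broker.admit(["SALT-KANS"], 7_000))                # False: would over-book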
The PC components will be handled in two different ways: some will be sliced using in-kernel
virtualization techniques adopted from PlanetLab and VINI. This allows for a large number of slivers,
but provides only limited control over the network stack. In the case of PlanetLab vservers, slivers are
unable to see each others' traffic, but share interfaces, and have no control over routing tables, etc. VINI
adds a significant amount of network virtualization, allowing slivers to have their own virtual
interfaces, which greatly aids slicing via tunnels and VLANs. It also allows slivers control over their
own IP routing tables. These technologies provide a little in the way of performance isolation between
slivers, but our main strategy will be to do admission control to prevent these components from being
overloaded.
Because the slicing techniques listed share a common kernel among slivers, they allow for a large
number of slivers, but do not enable disruptive changes to the kernel, such as modifying the network
stack. For this reason, a set of our components will be run on exclusive-access basis, in which
experimenters will have the ability to replace the operating system, etc. on them. In the future, if
someone comes up with a good slicing implementation using Xen, VMWare, or some other more
traditional virtual machine, we may consider using that on this set of components. We do not expect
NetFPGAs to be sliceable in the near future, so we intend to allocate them to one slice at a time, and to
deploy a number of them (3 to 4 per backbone site) to support simultaneous slices.
10.3 Aggregate Physical Network Connections (Horizontal Integration)
Provide an overview of the physical connections within the substrate aggregate, as well as between the
aggregate and the GENI backbones. Identify to the extent possible, non-GENI equipment, services and
networks involved in these connections.
Backbone switches will be connected to the Internet2 wave (DWDM) network (used for many purposes
other than GENI) via 10Gbps Ethernet interfaces. The wave network provides a "virtual fiber", so that
the switches appear to be directly attached to each other (over a distance of hundreds of miles). Each
switch will have two or three 10Gbps interfaces, depending on the out-degree of the Internet2 site.
Each switch will have a single 1Gbps copper Ethernet interface to the Internet2 IP network (non-GENI)
for connectivity to the outside world.
Each PC and NetFPGA will have a number of interfaces; at least 1Gbps per out-degree of the site, and
possibly more (up to four). PCs will have an additional interface for the control plane (eg. remote power
cycling and console access) and connectivity to the outside world.
10.4 Aggregate Manager Integration (Vertical Integration)
Identify the physical connections and software interfaces for integration into your cluster's assigned
control framework
10.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
Ethernet switches have basic packet counters, and can be configured to "mirror" traffic between ports
for collection purposes. The PCs will be able to use standard measurement and capture tools, such as
tcpdump. Due to their nature as "programmable hardware", quite a lot of measurement is possible on
the NetFPGAs.
10.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
ProtoGENI will leverage the Emulab tools (snmpit) for creation and manipulation of VLANs on the
backbone, and some Emulab tools for sliver programming and manipulation will be available as well,
e.g. Frisbee, an Emulab disk imaging tool. Our slice embedding service will aid in the selection of
backbone paths for researchers who are interested in using the backbone simply for connectivity, rather
than having an interest in controlling specific backbone components and routers.
11 Programmable Edge Node
Substrate Technologies: Intel IXP2855 network processor based programmable edge node
Cluster: C
Project wiki http://groups.geni.net/geni/wiki/ProgrammableEdgeNode
11.1 Substrate Overview
Provide an overview of the hardware systems in your contribution.
The PEN hardware is a commercial rack mount server (vendor Supermicro) that consists of two
quad-core Intel Xeon processors running at 3.0GHz, 8GB of memory, a 400GB hard disk, and a Netronome
NFE-i8000 network processor card.
Figure 11-1 Hardware Architecture of UML_PEN
The NFE-i8000 card (shown in Figure 11-2) is equipped with an Intel IXP2855 network processor
running at 1.4GHz, on-chip crypto units, four 1Gbps Ethernet ports (copper or fiber connections), 768MB
of RDRAM, 40Mb of QDR2 SRAM, and 9Mb of TCAM.
Users of the system will configure the virtual network connection using Emulab interfaces which
subsequently call scripts executed on UML_PEN.
Figure 11-2 NFE-i8000 card based on Intel IXP2855 network processor
11.2 GENI Resources
Discuss the GENI resources offered by your substrate contribution and how they are shared,
programmed, and/or configured. Include in this discussion to what extent you believe the shared
resources can be isolated.
UML_PEN provides programmable routers intended to be an integral part of a GENI aggregate
(e.g. ProtoGENI) framework. UML_PEN runs an OpenVZ-enabled Linux kernel. Users can instantiate
virtual containers (i.e. virtual machines) atop the OpenVZ Linux kernel in order to share hardware
resources and provide modest isolation. Each container acts as a router with multiple virtual interfaces
that are created based on user requests. Users can obtain root access to the virtual containers to
configure the router to run existing software like Click, or to compile customized packet processing
software using the GNU tool chain. Measurement and diagnosis functions implemented on the Network
Processor (NP) can be discovered as additional resources. Link bandwidth sharing is enforced by
either the x86 multi-core processors or the NPs.
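As an illustration of container-based virtual router setup, the sketch below drives the standard OpenVZ
vzctl tool from Python; the container ID, OS template and interface name are placeholders, and the actual
UML_PEN provisioning scripts are not reproduced here.

# Minimal sketch: create and start an OpenVZ container to act as a virtual
# router, using the standard vzctl tool via subprocess. The container ID, OS
# template, and interface name are placeholders; the actual UML_PEN
# provisioning scripts are not reproduced here.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

def create_virtual_router(ctid="101", template="centos-5-x86"):
    run("vzctl", "create", ctid, "--ostemplate", template)
    run("vzctl", "set", ctid, "--netif_add", "veth0", "--save")  # add a virtual NIC
    run("vzctl", "start", ctid)
    # Root access inside the container, e.g. to inspect or launch routing software:
    run("vzctl", "exec", ctid, "ip addr show")

if __name__ == "__main__":
    create_virtual_router()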
11.3 Aggregate Physical Network Connections (Horizontal Integration)
Provide an overview of the physical connections within the substrate aggregate, as well as between the
aggregate and the GENI backbones. Identify to the extent possible, non-GENI equipment, services and
networks involved in these connections.
Figure 11-3 shows how a PEN fits into Emulab/ProtoGENI. The red lines represent the experiment
plane and the purple lines represent the control plane. There can be multiple PENs in a GENI aggregate
although only one PEN is shown in this figure.
11.4 Aggregate Manager Integration (Vertical Integration)
Identify the physical connections and software interfaces for integration into your cluster's assigned
control framework
The physical connections are illustrated in Figure 11-3 (see Section 11.3). The connections between
PENs and switches are 1Gbps Ethernet links, while other connections can be either 1Gbps or 100Mbps
Ethernet links. The “boss” and “ops” nodes are the control and management nodes in the Emulab
framework.
Figure 11-3 How PEN fits into Emulab/ProtoGENI
11.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
Measurement in the PEN can be done on either the x86 or the network processor, with measurement on
the NP offering more benefits. At the current stage, the measurement metrics we can obtain from the NP
are primarily flow statistics, including the start-of-flow timestamp, current timestamp, packet count for
each direction (host to NP and NP to host), byte count for each direction, and the flow tuples. We can
also offer physical NP port statistics, including the number of octets, the number of
unicast/multicast/broadcast packets, and the numbers of packets in different size intervals that have been
sent and received by each physical port.
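As an illustration, a flow statistics record covering the metrics listed above might look as follows; the
field names are illustrative and do not reflect the NP firmware's actual data layout.

# Sketch of a per-flow statistics record holding the metrics listed above
# (timestamps, per-direction packet/byte counts, and the flow tuple). Field
# names are illustrative, not the NP firmware's actual data layout.
from dataclasses import dataclass

@dataclass
class FlowStats:
    flow_tuple: tuple           # (src_ip, dst_ip, src_port, dst_port, proto)
    start_ts: float             # start-of-flow timestamp
    current_ts: float
    pkts_host_to_np: int
    pkts_np_to_host: int
    bytes_host_to_np: int
    bytes_np_to_host: int

stats = FlowStats(("10.0.0.1", "10.0.0.2", 5001, 80, 6),
                  start_ts=0.0, current_ts=2.5,
                  pkts_host_to_np=120, pkts_np_to_host=118,
                  bytes_host_to_np=180_000, bytes_np_to_host=9_400)
print(stats)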
11.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
We will provide (1) setup scripts and templates for creating virtual routers; (2) measurement
services that take advantage of network processors; (3) diagnosis services supported by NP.
12 Measurement System
Substrate Technologies: PC-based IP packet monitors, data storage systems
Cluster: C
Project wiki http://groups.geni.net/geni/wiki/MeasurementSystem
12.1 Substrate Overview
Provide an overview of the hardware systems in your contribution.
The hardware systems being developed are PC-based systems that can be attached to taps
or span ports on switches/routers to collect measurements of IP packet traffic corresponding to
GENI slices. These hardware systems will interface initially with the ProtoGENI control
framework but can be extended to interface with other control frameworks. The systems are
also closely tied to high-capacity data storage systems, which will be used to archive the data
that is collected. While the data storage systems are important to our design, at present we are
considering utilizing third-party systems such as Amazon's S3 service, which would give us
highly scalable, reliable and available capacity.
12.2 GENI Resources
Discuss the GENI resources offered by your substrate contribution and how they are shared,
programmed, and/or configured. Include in this discussion to what extent you believe the shared
resources can be isolated.
The measurement systems will enable flexible, privacy preserving capture of packet traffic
associated with individual slices or slice aggregates. This measurement capability is critical to
a broad range of experiments envisioned for GENI. While our initial focus is on IP packet
capture, we are designing our systems to be able to eventually capture and archive other types
of data feeds (eg. SNMP or flow data) from devices deployed in GENI.
The systems will be shared/multiplexed between experiments by generating filter sets for
individual slices and their associated packets. This will enable a single measurement system to
gather traffic for multiple experiments at the same time. The details of what will be captured
are specified by the user and justified against the privacy policy constraints for the host
network. The data archival needs are also user specified and likewise will be justified against
capacity and policy constraints of the GENI framework. Over the course of the project, we
will develop additional capability to specify the capture of data aggregates (eg. flows) and
provide analysis tools for users to facilitate the interpretation and analysis of the collected data.
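As a rough illustration of per-slice filter generation, the sketch below builds a Berkeley Packet Filter
expression for a slice and hands it to tcpdump on a monitored interface; the assumption that a slice maps
to a VLAN ID or IP prefix is illustrative, since the actual mapping is defined by the control framework.

# Minimal sketch: build a per-slice Berkeley Packet Filter expression and hand
# it to tcpdump on the monitored interface. The assumption that a slice maps
# to a VLAN ID or IP prefix is illustrative; the real slice-to-filter mapping
# comes from the control framework.
import subprocess

def slice_filter(vlan_id=None, ip_prefix=None):
    terms = []
    if vlan_id is not None:
        terms.append(f"vlan {vlan_id}")
    if ip_prefix is not None:
        terms.append(f"net {ip_prefix}")
    return " and ".join(terms)

def start_capture(interface, bpf, outfile):
    # -w writes the raw packets to a file for later archival and analysis.
    return subprocess.Popen(["tcpdump", "-i", interface, "-w", outfile, bpf])

if __name__ == "__main__":
    expr = slice_filter(vlan_id=312, ip_prefix="10.12.0.0/16")
    proc = start_capture("eth1", expr, "slice312.pcap")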
12.3 Aggregate Physical Network Connections (Horizontal Integration)
Provide an overview of the physical connections within the substrate aggregate, as well as between the
aggregate and the GENI backbones. Identify to the extent possible, non-GENI equipment, services and
networks involved in these connections.
Physical connectivity of the measurement systems is critically important to their relevance in
experiments. We envision the systems will be deployed widely on many links throughout the
GENI substrate. The scope of the deployments will depend on balancing financial and
managerial constraints against the utility of the systems.
The first set of physical connections are best thought of as data connections. These are the
connections to links over which experiments will be run and will be made through taps (either
active or passive) or span ports on network devices. These connections enable packets
associated with slices to be captured.
The second physical connection is between the measurement systems and the control
framework. These connections can be made through relatively low speed links and are best
thought of as the management and control interface.
The third physical connection is between the measurement systems and the data archival
systems. These connections can potentially require high bandwidth and should be on separate
(virtual) links so that they do not interfere with experiments. Data from the repository/archive
is envisioned to be accessible to researchers through third-party network connections that do
not involve GENI.
12.4 Aggregate Manager Integration (Vertical Integration)
Identify the physical connections and software interfaces for integration into your cluster's assigned
control framework
As stated above, the measurement systems will need to be accessible from the control
framework (in our case ProtoGENI) via physical paths from the control systems to the
distributed measurement systems. It is assumed that this can be facilitated via routable IP
addresses allocated from the local networks to the measurement systems.
The specification of the software interface from the control framework to the measurement
system is currently under development. Essentially, it enables users to specify links on which
they wish to capture packets and the filters (identifying specific fields) that they wish to apply
to the packets. This results in the generation of filters on the measurement systems, which will
then capture and archive the specified packets as long as a slice is active. The measurement
interface will respond to the user request with a URL for the data archive for their slice on the
measurement system storage infrastructure.
12.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
The components in this section are all about measurement. In addition to enabling users to
capture packets associated with their experiments, the measurement systems will also generate
a set of system utilization measurements that will be available via SNMP and activity logs that
will be available via syslog.
12.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
The primary goal of our systems is to provide a basic packet measurement service for
experiments in GENI. This will include tools for data aggregation (eg. packets to flows) and
data analysis.
While our initial focus is on packet capture, it is possible that these systems will be able to
gather and archive a wide range of data generated by other systems such as SNMP
measurements from other devices deployed in GENI.
13 ViSE
Substrate Technologies: Multi-sensor (camera, radar, weather station) network
Cluster: D
Project wiki http://groups.geni.net/geni/wiki/ViSE
13.1 Substrate Overview
Provide an overview of the hardware systems in your contribution.
ViSE is a multi-sensor/multi-user sensor network testbed in Western Massachusetts. The testbed
will consist of 3 nodes with physical connections over long-range 802.11b with directional antennas as
well as a backplane connection over a commercial cellular network. Each ViSE node is custom-built
and consists of a number of different hardware components, listed below. In addition to listing each
component and a brief summary of its capabilities, we also list relevant external links that include more
information. Where appropriate, we elide certain details of the node hardware including cabling,
custom-built power boards (e.g., for the Gumstix backplane), external enclosure connectors, etc.
In addition to the specific hardware components, the position of each node is an important part of
our testbed's profile. We currently have one node positioned at the top of a firetower on Mount Toby in
Amherst, MA. This node is roughly 10 km from one node on the UMass CS Building and one node on
the MA1 Tower on the UMass Campus. The Mount Toby node is able to communicate with both the
CS building node and the MA1 Tower node. The CS building node and the MA1 Tower node do not
have line of sight and cannot communicate directly over 802.11b. A server inside the CS building is
able to connect to the CS ViSE node over 802.11b. The MA1 Tower ViSE node has a wired Ethernet
connection. These two servers will be the primary entry points for GENI user network traffic to/from
the ViSE nodes.
Figure 13-1 ViSE Network and Node diagrams
Section I.A. Computational Hardware
Motherboard. MSI GM965 Mini-ITX form factor. The motherboard supports Core 2 Duo Mobile
processors and the Celeron M using Socket P (478 pin) with an 800/533MHz front-side bus. The
chipset is an Intel GME965 northbridge and ICH8M southbridge. The board includes 2 DDR 533/667
SDRAM sockets. We use the motherboard's Wake-on-LAN feature to power the node up and down
using an external microcontroller---the Gumstix---described below. More information may be found at
http://www.logicsupply.com/products/ms_9803.
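For illustration, a Wake-on-LAN "magic packet" of the kind the Gumstix backplane could send to power up
the main board can be constructed as follows; the MAC address and broadcast target are placeholders.

# Minimal sketch of the Wake-on-LAN "magic packet" the Gumstix backplane could
# send to power up the main board: 6 bytes of 0xFF followed by the target MAC
# repeated 16 times, sent as a UDP broadcast. The MAC address is a placeholder.
import socket

def send_wol(mac="00:11:22:33:44:55", broadcast="255.255.255.255", port=9):
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

if __name__ == "__main__":
    send_wol()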
Processor. 1.86 Ghz Intel Celeron M (Merom) CPU 540. More information may be found at
http://www.logicsupply.com/products/ms_9803.
Memory. 2GB of DDR2 667 RAM.
Each node also has a 32 gigabyte Flash card for local storage.
Gumstix Backplane. Each main node has a Linux Gumstix connected via serial and Ethernet to the
main node to serve as a backplane, and is able to communicate with the mainboard over the Ethernet
and the serial console. We are using the Gumstix Verdex Pro. The backplane connects to commercial
cellular. Our current plan is to initially place only one cellular-connected node on Mount Toby, which is
physically inaccessible during the Winter. Details of the Gumstix are at
http://gumstix.com/store/catalog/product_info.php?products_id=209.
Adlink Data Acquisition Card (DAQ) PCI-9812. We use an ultra-high speed analog input card to read
the high-bandwidth data produced by the radar. More information about the card may be found at
http://www.qproducts.sk/pci-9812-9810-9812a.html.
Section I.B. Peripherals/Sensors
Wireless NIC. The motherboard includes a Mini-PCI slot. We are using the Atheros AR5BXB63 Mini-PCI card. The card uses the Madwifi driver under Linux. More information may be found at
http://www.logicsupply.com/products/wn6302a_f4.
GPRS modem. The node on the CS building includes a GPRS modem connected to the Linux Gumstix
to provide backplane connectivity independent of the unprivileged GENI traffic. We use a data plan
from AT&T.
Point-to-Point Directional Antenna. Each wireless NIC connects to a directional antenna for long-distance 802.11b communication. We are using an HG242G-NF HyperGain 24 dBi grid antenna
(2.4 GHz). More information may be found at
http://www.hyperlinktech.com/familylist.aspx?id=146.
Weather Radar. We are using a Raymarine Model 7845647 (Part E52067). We have modified a
standard Marine/boat radar for weather sensing. The radar sends data to the Adlink DAQ card and is
controlled by a custom PIC board that connects to the main board over USB. Commands may be
forwarded using libusb in Linux. More about the radar may be found at
http://www.raymarine.com/
DavisPro Weather Station. The DavisPro weather station has sensors for temperature, humidity,
pressure, wind, solar radiation, rainfall, rain rate, and wind chill. We are using a Vantage2 06152C
weather station. The DavisPro uses the wview driver under Linux and connects via USB. More
information is available at
http://www.davisnet.com/weather/products/weather_product.asp?pnum=06152C.
Pan-Tilt-Zoom Camera. We are using an Axis 212 Network IP camera on each node inside of a heated
camera enclosure. The IP camera connects to one of the two available Ethernet outlets on the main
board. Details of the camera may be found at http://www.axis.com/products/cam_212/.
Section I.C. Enclosure/Power
Enclosure. Each node is housed in a Stahlin Enclosure. Each enclosure is mounted on a Campbell
Scientific CM10 Tripod. More information can be found at http://campbellsci.com/cm10.
Solar Panel. Each node draws power from an attached solar panel. A charge controller (the MPPT250 25A/12V) regulates power from the solar panel to the main board, radar, and sensors. Details of the
charge controller are at http://store.altenergystore.com/Charge-Controllers/SolarCharge-Controllers/
Battery. Each node includes a 12V ATX power supply from
http://www.logicsupply.com/products/m1_atx and a battery charger that charges an
attached car battery. Currently, all nodes have external A/C power as well; the solar panel/car battery
are primarily used as backup and as a catalyst for future energy-aware work (which is not a part of the
SOW for the current subcontract). More information is available at
http://store.altenergystore.com/Charge-Controllers/AC-ChargeControllers/Samlex-12V15A-3STG-AC-BATTERY-CHARGER/p1052/.
13.2 GENI Resources
Discuss the GENI resources offered by your substrate contribution and how they are shared,
programmed, and/or configured. Include in this discussion to what extent you believe the shared
resources can be isolated.
ViSE is focused on slivering actuatable sensors and including them in GENI slices. This is the primary
resource offered by the testbed in addition to access to each ViSE compute node. Each ViSE node will
run an instance of the Xen virtual machine monitor and user slivers will be bound to a Xen virtual
machine created at the user's request. Xen (or Linux) includes mechanisms to partition a virtual
machine's memory allotment, CPU share, egress bandwidth, and storage volume size.
In addition to these resources being available for a sliver, each Xen virtual machine will have one
or more sensors attached to it; ViSE sensors include the PTZ camera, radar, and weather station.
Initially, our focus is on offering the radar to users and (secondarily, as listed in our SOW) the PTZ
camera.
Importantly, each sensor will be able to be actuated by the user from within their slice. Initially, we
assume that all ViSE nodes will run the same flavor of Linux. Virtual sensors will be exposed as
devices through the standard Linux device driver framework as /dev/ files. Each sensor will expose a
distinct interface that depends on its actuating capabilities; these interfaces will be exposed as ioctls to
the relevant device driver. Initially, these interfaces will be simple and are largely TBD since their
development is part of our project's description. Below we provide some initial insights into these
interfaces based on the current physical interfaces exposed by the radar, and comment on the degree of
sharing the device supports.
The radar essentially exposes three simple functions: power, standby, and transmit. Power turns the
radar on, standby warms the radar up, and transmit spins and radiates the radar to gather reflectivity and
voltage data. Our current radars expose two other actuators: range and gain. The PTZ camera interface
will follow a similar model, but with different ioctl calls (e.g., move, capture, etc.). The initial
approach to sharing the radar will adapt WFQ/SFQ scheduling ideas to actuators to regulate the time that each
slice controls a physical sensor. Thus, each sliver will contain a share of the sensor, just as it contains a
share of the CPU.
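As a rough illustration of how a sliver might drive one of these virtual sensor devices, the hedged Python sketch below issues the power, standby, and transmit commands through ioctl calls on a hypothetical /dev/vradar0 device node. The device path, request codes, and argument encoding are placeholders invented for illustration; the real interface is still to be defined as part of the project.

import fcntl
import os
import struct

# Hedged sketch only: the device path and ioctl request codes below are
# hypothetical placeholders, not the actual ViSE virtual-sensor interface.
VRADAR_DEV = "/dev/vradar0"
VRADAR_POWER, VRADAR_STANDBY, VRADAR_TRANSMIT = 0x5601, 0x5602, 0x5603
VRADAR_SET_RANGE, VRADAR_SET_GAIN = 0x5604, 0x5605

def radar_scan(range_m=5000, gain=20):
    """Power the radar up, configure range and gain, and trigger one transmit cycle."""
    fd = os.open(VRADAR_DEV, os.O_RDWR)
    try:
        fcntl.ioctl(fd, VRADAR_POWER, struct.pack("i", 1))
        fcntl.ioctl(fd, VRADAR_STANDBY, struct.pack("i", 1))
        fcntl.ioctl(fd, VRADAR_SET_RANGE, struct.pack("i", range_m))
        fcntl.ioctl(fd, VRADAR_SET_GAIN, struct.pack("i", gain))
        fcntl.ioctl(fd, VRADAR_TRANSMIT, struct.pack("i", 1))
    finally:
        os.close(fd)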
The primary limitation to sharing the sensor (as opposed to the CPU or other resources) will be that
a steerable actuator may only be in one position at a time. If two co-located slivers request different
actuations at the same time there will be a performance impact. If two slivers have similar or
overlapping actuations then the physical actuator may be able to satisfy both actuations without a
performance impact. With this in mind, the degree of sharing the device is capable of depends somewhat
on the degree of overlap in users' needs.
The weather station is the least complex sensor in the testbed to virtualize (i.e., it has no
sophisticated actuators). The data from the weather station on the CS building will be available on the
UMass Trace Repository (http://traces.cs.umass.edu/index.php/Sensors/Sensors), and will be available
to experiments.
13.3 Aggregate Physical Network Connections (Horizontal Integration)
Provide an overview of the physical connections within the substrate aggregate, as well as between the
aggregate and the GENI backbones. Identify to the extent possible, non-GENI equipment, services and
networks involved in these connections.
Node-to-Node: The internal connections are over 802.11b using directional point-to-point antennas.
The GPRS modem that we use as a backplane leverages the commercial Internet (through AT&T).
Internet-to-Node: The servers that act as gateways to the nodes are connected to the UMass
Department of Computer Science network. We envision a node in the CS department serving as a
gateway node to the network and hosting the Orca software for a management authority. Any
connection to the GENI backbone would have to go through these servers.
Internet-to-Backplane via Cellular: Our Operations and Management plane will use the backplane
provided by the Gumstix as a way to debug and power up/down the nodes if necessary. The Gumstix
backplane has both Ethernet and serial connections as well as Wake-on-LAN capability. Additionally,
the Mount Toby node (the most inaccessible) includes a cellular-enabled power strip that is able to
power, reboot, or cut power to the node via a cell phone text message.
13.4 Aggregate Manager Integration (Vertical Integration)
Identify the physical connections and software interfaces for integration into your cluster's assigned
control framework
We will use Orca to schedule, configure, and access testbed nodes, and will share many of the
internal and external interfaces to the Orca control framework. Internally, we will modify Orca's
standard set of handlers (e.g., join, modify, leave and setup, modify, teardown) for Xen virtual
machines to include directives to configure virtual sensor slivers for each requested slice; otherwise, we
envision sharing much code with the current Orca framework. We will also modify Orca's standard
broker/clearinghouse policy to divide control of virtual sensors across time and space. Orca exposes the
options to request and configure different resources as opaque property lists. Our augmented
broker/clearinghouse policy and handlers will include properties specific to our virtual sensors. These
capabilities will be exposed initially through Orca's standard web interface for users to request slices.
One primary difference between our testbed and Orca's software is that our nodes are disconnected
(e.g., the management authority is only able to communicate with the Mount Toby node through another
testbed node). We will make slight modifications to Orca's control plane software to enable the
disconnected mode of operation.
13.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
Each sensor measures attributes of its physical surroundings. We expect this to be the most used
feature of the testbed. We currently format the raw reflectivity and voltage data gathered by the radar
as NetCDF files. Details of the NetCDF standard may be found at
http://www.unidata.ucar.edu/software/netcdf/ and
http://en.wikipedia.org/wiki/NetCDF.
A brief sample output from the radar is below. The DavisPro outputs *.wlk files, which are simple
text files with the relevant weather data. Finally, the PTZ camera produces sequences of *.jpg images.
While we currently have no plans to expose the Wireless NIC data (e.g., signal strength, bit rate,
ESSID) we envision this being potentially useful for researchers of long-distance wireless
communication and are exploring its possibilities.
Sample NetCDF data:
netcdf CS_20081118200052 {
dimensions:
Radial = UNLIMITED ; // (15 currently)
Gate = 150 ;
variables:
float Azimuth(Radial) ;
Azimuth:units = "Degrees" ;
int Time(Radial) ;
Time:units = "Seconds" ;
int TimeUSec(Radial) ;
TimeUSec:units = "MicroSeconds" ;
float RawVoltage(Radial, Gate) ;
RawVoltage:units = "Volts" ;
float Reflectivity(Radial, Gate) ;
Reflectivity:units = "dBz" ;
// global attributes:
:RadarName = "CS" ;
:Latitude = "49.49083" ;
:Longitude = "-72.53800" ;
:Height = 0.f ;
:HeightUnits = "meters" ;
:AntennaGain = 26.f ;
:AntennaGainUnits = "dB" ;
:AntennaBeamwidthEl = 25.f ;
:AntennaBeamwidthAz = 4.f ;
:AntennaBeamwidthUnits = "deg" ;
:TxFrequency = 9.41e+09f ;
:TxFrequencyUnits = "hertz" ;
:TxPower = 4000 ;
:TxPowerUnits = "watts" ;
:GateWidth = 100.f ;
:GateWidthUnits = "meters" ;
:StartRange = 1.f ;
:StartRangeUnits = "meters" ;
data:
Azimuth = 138.1354, 138.1354, 138.1354, 138.1354, 138.1354, 138.1354,
138.1354, 138.1354, 138.1354, 138.1354, 138.1354, 138.1354, 138.1354,
138.1354, 0 ;
Time = 1227038452, 1227038452, 1227038452, 1227038452, 1227038452,
1227038452, 1227038452, 1227038452, 1227038452, 1227038452, 1227038452,
1227038452, 1227038452, 1227038452, 1227038452 ;
TimeUSec = 145438, 742353, 742656, 768753, 769040, 778393, 780048, 801755,
802003, 802200, 833348, 834358, 852119, 852302, 852485 ;
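As a hedged example of how an experimenter might consume such a file, the short Python sketch below uses the netCDF4 library to read the Azimuth and Reflectivity variables; the local filename is an assumption based on the sample header above.

from netCDF4 import Dataset

def summarize_radar_file(path="CS_20081118200052.nc"):
    """Print per-radial azimuth and mean reflectivity from a ViSE radar NetCDF file."""
    dataset = Dataset(path, "r")
    azimuth = dataset.variables["Azimuth"][:]             # degrees, one value per radial
    reflectivity = dataset.variables["Reflectivity"][:]   # dBz, shape (Radial, Gate)
    for i in range(len(azimuth)):
        print("radial %d: azimuth %.2f deg, mean reflectivity %.2f dBz"
              % (i, azimuth[i], reflectivity[i].mean()))
    dataset.close()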
13.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
We envision users initially loading code that interacts with the virtual sensor interfaces through
standard mechanisms (e.g., ssh/scp), although we may investigate more sophisticated workflow tools as
they become available. Our work will leverage Orca's web interface to provide users an interface for
requesting resources and slices.
Our focus is primarily on the interface to the virtual sensors in each sliver. To this end, we expect
to develop user-level libraries (e.g., libsensor) for the sensors to allow users to program against them at
the application level, rather than using system calls and ioctls directly.
14 WiMAX
Substrate Technologies: IEEE 802.16e WiMAX base-station
Cluster: E
Project wiki http://groups.geni.net/geni/wiki/WiMAX
14.1 Substrate Overview
Provide an overview of the hardware systems in your contribution.
This GENI Spiral 1 project is aimed at providing a state-of-the-art IEEE 802.16e WiMax base
station as an open, programmable and virtualizable cellular base station node. This open GENI base
station node (“GBSN”) device is intended to support flexible experimentation in wide-area mobile
network service scenarios similar to today’s cellular systems. The GBSN will support programmability
at both radio link and network routing layers via an “open API”, and will work with off-the-shelf
WiMax handsets and data. The overall goal is to leverage emerging carrier-class cellular wireless and
access network platforms to provide a reliable, high-capacity and cost-effective solution for wide-area
mobility in GENI.
The emerging WiMax standard (802.16e) is a good technology base for such an open base station
node because it represents the state-of-the-art in radio technology (OFDMA, dynamic TDMA with
QoS, MIMO and mesh modes, etc.) and is expected to offer cost advantages relative to the cellular
3G-to-LTE roadmap, particularly for small deployments. Also, initial WiMax products are IP-based
and are typically bundled with far fewer vertical-stack protocols than corresponding cellular options
such as UMTS and LTE. NEC’s Release 1 802.16e base station product is viewed as an excellent
starting point for the addition of GENI specific open programmability features. Although the NEC base
station was designed for IP-based network applications, we have been able to determine a way to
unbundle the basic layer-2 functionality of the device and make it accessible through an external control
API. This makes it possible for us to develop Linux-based GENI code on an external PC controller,
substantially meeting all of the layer 2 and layer 3 programmability and virtualization requirements in a manner that
is consistent with the approach used for wired GENI routers.
Figure 14-1. High level diagram of WiMax base station and its interfaces to the GENI network
Figure 14-1 shows a schematic of the WiMax base station router and its connection to the rest of
the GENI network. As shown, the WiMax base station is typically connected to a GENI access
network with layer 2 switched connectivity using Ethernet or optical fiber technology. The figure also
indicates three distinct interfaces associated with the GENI WiMax base station. The first is the GENI
control interface for experimenter access to virtual networks (slices) supported by the GBSN. Initially,
this project will use the ORBIT management framework (OMF) as the control interface between the
outside world and the base station controller (which is a Linux PC). This is the primary external
interface relevant to a GENI experimenter. The second interface internal to the GBSN is the R6+
interface by which the base station controller communicates with the base station hardware (which
includes its own internal controller running a proprietary NEC operating system and
control/management software). The R6+ interface exposes the hardware features such as assignment of
MAC/PHY resources (i.e. OFDMA time-frequency slots, power levels, service classification, etc.) to
each flow, as well as management interfaces for initial configuration, scheduler policy selection and
queue management.
The base station will initially be set up at a cellular collocation site at Rutgers University’s Busch
campus in Piscataway, NJ. As shown, the coverage area is expected to be about a 3-5 km radius,
covering the entire campus and also some parts of Highland Park, New Brunswick and Piscataway. In
terms of end-user devices supported by the WiMax network, the
802.16e base station’s signal is compatible with a growing number
of WiMax mobile platforms (for example, the Samsung M8000 or
Nokia N800 and Motorola WiMax handsets).
Figure 14-2 NEC's Release 1 802.16e base station
The NEC Release 1 WiMAX base-station hardware (photo in Fig. 14-2) is a 5U rack-based system which consists of multiple Channel Cards (CHC) and a Network
Interface Card. The shelf can be populated with up to three channel cards, each supporting one sector
for a maximum of three sectors. The BS operates in the 2.5 GHz or the 3.5 GHz bands and can be tuned
to use either 5, 7 or 10 MHz channels. At the MAC frame level, 5 msec frames are supported as per the
802.16e standard. The TDD standard for multiplexing is supported where the sub-channels for the
Downlink (DL) and Uplink (UL) can be partitioned in multiple time-frequency configurations. The
base-station supports standard adaptive modulation schemes based on QPSK, 16QAM and 64QAM.
The interface card provides one Ethernet Interface (10/100/1000) which will be used to connect to the
high performance PC. The base station has been tested for radio coverage and performance in realistic
urban environments and is being used in early WiMAX deployments; the typical coverage radius is ~3-5 km, and achievable peak service bit-rates range from 15-30 Mbps depending on operating mode and
terrain. Note that these service bit-rates are significantly higher than those achievable with first
generation cellular technology (such as EVDO), and should be sufficient to support advanced network
service concepts to be investigated over GENI.
14.2 GENI Resources
Discuss the GENI resources offered by your substrate contribution and how they are shared,
programmed, and/or configured. Include in this discussion to what extent you believe the shared
resources can be isolated.
The 802.16e base station allocates time-frequency resources on the OFDMA link with a number of
service classes as specified in the standard – these include unsolicited grant service (UGS), expedited
real time polling service (ertPS), real-time polling service (rtPS), non-real time polling (nrtPS) and best
effort (BE), as shown in Fig. 14-3. The radio module as currently implemented includes scheduler
support for the above service classes in strict priority order, with round-robin, or weighted round-robin
being used to serve multiple queues within each service class. These packet queuing and service
scheduling features are expected to provide adequate granularity for virtualization of radio resources
used by each slice in GENI. It is noted here that OFDMA in 802.16e with its dynamic allocation of
time-frequency bursts provides resource management capabilities qualitatively similar to that of a wired
router with multiple traffic classes and priority based queuing. The GENI slice scheduling module to
be implemented in the external PC controller is responsible for mapping “Rspec” requirements (such as
bandwidth or delay) to the available 802.16e common packet layer services through the open API.
Slices which do not require bandwidth guarantees can be allocated to the nrtPS class, while slices with
specific bandwidth requirements (for quantitatively oriented experiments, for example) can be allocated
to the UGS category.
Figure 14-3 Packet Scheduling in the 802.16e BS & Interface to GENI Slices
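A minimal sketch of this mapping is shown below, assuming a simple dictionary-style RSpec with a hypothetical "bandwidth_kbps" field; neither the function nor the field name is part of the actual GENI or NEC APIs.

# Hedged sketch: follows the allocation described above (bandwidth guarantee -> UGS,
# no bandwidth guarantee -> nrtPS); a finer policy could also use ertPS, rtPS, or BE.
def map_slice_to_service_class(rspec):
    """Map a slice's requirements onto an 802.16e service class."""
    if rspec.get("bandwidth_kbps"):
        return "UGS"    # slice asks for a specific bandwidth guarantee
    return "nrtPS"      # slice does not require a bandwidth guarantee

# Example usage:
#   map_slice_to_service_class({"bandwidth_kbps": 1024})  -> "UGS"
#   map_slice_to_service_class({})                        -> "nrtPS"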
14.3 Aggregate Physical Network Connections (Horizontal Integration)
Provide an overview of the physical connections within the substrate aggregate, as well as between the
aggregate and the GENI backbones. Identify to the extent possible, non-GENI equipment, services and
networks involved in these connections.
14.4 Aggregate Manager Integration (Vertical Integration)
Identify the physical connections and software interfaces for integration into your cluster's assigned
control framework
As mentioned earlier, the GBSN includes an external controller that runs Linux. In the initial prototype,
we will use the ORBIT Management Framework (OMF) software to interface the base station to other
parts of the GENI network. The Linux controller is implemented using UML virtualization, although
we will later consider upgrades to other VM platforms. The OMF software is implemented via “Orbit
grid services” on the control slice shown in Figure 14-4. For more information on OMF, please refer to
documentation at http://www.orbit-lab.org/wiki/Documentation .
The controller provides support for multiple slices assigned to the GENI WiMAX node. Each slice
will run within its own virtual machine (using software such as UML – User Mode Linux) as shown in
Fig. 14-4. Each VM will be capable of providing multiple virtual interfaces, so that a program loaded on
a slice that runs within a virtual machine can emulate its own router and perform IP routing. Virtual
interfaces will be mapped to physical interfaces based on the next hop for a virtual interface. The
controller will receive IP packets from the base-station on the R6+ interface mentioned earlier. When a
packet is received, it will be forwarded to the appropriate slice for further processing. The outgoing IP
packets from a slice will be placed on queues specific to a virtual interface. Outgoing packets on virtual
interfaces mapped to the layer 2 interface of the WiMAX base station will be tagged so that they can be
assigned traffic class and bandwidth parameters (BE, ertPS, rtPS etc.) as determined by the flow CID
(connection ID).
Figure 14-4 External Base Station Controller Architecture
The L2 base station control software on the external controller provides APIs to both control Layer
2 parameters and also to receive L2 specific information from the base station. An experimenter’s
program within a slice (obtained through the OMF control interface) can use these APIs to modify L2
parameters as well as receive L2 specific data both at load time and also at run time. Slice scheduling
and resource allocation at the external controller (Layer 3) and at the base-station (Layer 2) will be
specified using mechanisms similar to the RSpec command under consideration for the GMC.
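To make the per-virtual-interface tagging concrete, the hedged Python sketch below keeps a per-slice table mapping a virtual interface to a WiMAX flow CID and service class; the table layout and all names are placeholders invented for illustration, not the actual R6+ or OMF interface.

# Hedged sketch: hypothetical per-virtual-interface mapping used to tag outgoing
# slice traffic with a flow CID and 802.16e service class before it is handed to
# the base station over the R6+ interface.
VIF_TABLE = {
    "slice1-vif0": {"cid": 101, "service_class": "rtPS"},
    "slice2-vif0": {"cid": 102, "service_class": "BE"},
}

def tag_outgoing(vif_name, payload):
    """Attach CID and service-class metadata to an outgoing packet."""
    entry = VIF_TABLE[vif_name]
    return {"cid": entry["cid"],
            "service_class": entry["service_class"],
            "payload": payload}

# A real controller would pass the tagged packet to the base station's L2 control
# software; here we only build the tagged structure for illustration.
tagged = tag_outgoing("slice1-vif0", b"example IP packet bytes")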
The WiMax base station described will be integrated into the ORBIT management framework. An
experimenter will be able to access the WiMax network through the ORBIT portal and use available
ORBIT scripting, experiment control, management and measurement tools to run their experiment. Of
course, the experimenter will also have to set up and physically deploy necessary end-user equipment
(PC’s, mobile devices, laptops) within the coverage area. OMF facilities will be extended to provide
software downloads for Linux-based end-user devices. For more details on running an ORBIT
experiment, refer to http://www.orbit-lab.org/wiki/Tutorial .
14.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
We plan to use features of the ORBIT measurement library to collect real-time measurements for
experiments. The library will run on the GBSN controller, record per-flow or per-packet measurements,
maintain a database for each experiment, and make the data available through the ORBIT experiment
control portal. The framework will handle both Layer 2 and Layer 3 measurements. The collection
library will aggregate the measurements and send them to a collection server running in the OMF
management system.
14.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
15 TIED
Substrate Technologies:
Cluster: A
Project wiki http://groups.geni.net/geni/wiki/DETER
15.1 Substrate Overview
Provide an overview of the hardware systems in your contribution.
15.2 GENI Resources
Discuss the GENI resources offered by your substrate contribution and how they are shared,
programmed, and/or configured. Include in this discussion to what extent you believe the shared
resources can be isolated.
15.3 Aggregate Physical Network Connections (Horizontal Integration)
Provide an overview of the physical connections within the substrate aggregate, as well as between the
aggregate and the GENI backbones. Identify to the extent possible, non-GENI equipment, services and
networks involved in these connections.
15.4 Aggregate Manager Integration (Vertical Integration)
Identify the physical connections and software interfaces for integration into your cluster's assigned
control framework
15.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
15.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
16 PlanetLab
Substrate Technologies: GENI User Access to PlanetLab (800 nodes/400 sites)
Cluster: B
Project wiki http://groups.geni.net/geni/wiki/PlanetLab
16.1 Substrate Overview
Provide an overview of the hardware systems in your contribution.
16.2 GENI Resources
Discuss the GENI resources offered by your substrate contribution and how they are shared,
programmed, and/or configured. Include in this discussion to what extent you believe the shared
resources can be isolated.
16.3 Aggregate Physical Network Connections (Horizontal Integration)
Provide an overview of the physical connections within the substrate aggregate, as well as between the
aggregate and the GENI backbones. Identify to the extent possible, non-GENI equipment, services and
networks involved in these connections.
16.4 Aggregate Manager Integration (Vertical Integration)
Identify the physical connections and software interfaces for integration into your cluster's assigned
control framework
16.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
16.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
17 Kansei Sensor Networks
Substrate Technologies:
Cluster: D
Project wiki http://groups.geni.net/geni/wiki/KanseiSensorNet
17.1 Substrate Overview
Provide an overview of the hardware systems in your contribution.
Kansei is an infrastructure facility for high-fidelity wireless sensor network experimentation. Its
hardware-software infrastructure supports convenient local and remote at-scale testing on a variety of
sensor platforms deployed in diverse environments. The Kansei project is headed by Professor Anish Arora.
Located on the Ohio State campus, Kansei spans two deployments. One deployment is
in an indoor, but open, warehouse setting of 8500 sq. ft. Here there are 4 types of sensor
mote arrays:
 112 Extreme Scale Motes (XSM)
 432 TelosB motes
 112 iMote2, and
 112 Extreme Scale Stargates (XSS) that serve as micro servers.
All of the arrays are organized in a grid which is deployed approximately 3 feet off the
ground using an array of wooden tables. The configuration of the grid is as follows.
 The Stargates are organized in a 14×8 grid.
 To each Stargate, one XSM is connected through a 52-pin serial port.
 To each XSS, 4 TelosB motes and 1 iMote2 mote are connected through a USB
hub.
 The Stargates are connected using both 100 Mbps wired Ethernet and 802.11b wireless.
Through the wired network, all Stargates are connected to a central server
which provides various services and tools for users.
The warehouse arrays of Kansei provide a testbed infrastructure to conduct experiments
with 802.11b networking as well as different 802.15.4 radios installed on XSMs, TelosBs
and iMote2s. We have access to iMote2s which are .NET capable, as well as SunSPOT motes
which are Java capable, but these have yet to be integrated with the testbed infrastructure.
The second Kansei deployment is in the Dreese Laboratories building occupied by the
Computer Science Department. In the false ceiling of each floor of this 8-story building,
30 TelosB motes are placed. Application-specific TelosB and other sensors communicate
with this array of nodes.
The Dreese fabric also includes 35 Cellphone-Mote pairs which form a peer-to-peer
network. Unlike the TelosB array, this array is at present not remotely programmable.
As a part of the Kansei consortium, the NetEye testbed is a high-fidelity testbed which
provides a controlled indoor environment with a set of sensor nodes and wireless nodes
deployed at Wayne State University. The physical NetEye substrate consists of an
expandable collection of building block components. The set of components that will be
integrated with Kansei and GENI include 130 TelosB motes with light sensors. The NetEye
project is headed by Prof. Hongwei Zhang.
17.2 GENI Resources
Discuss the GENI resources offered by your substrate contribution and how they are shared,
programmed, and/or configured. Include in this discussion to what extent you believe the shared
resources can be isolated.
Kansei allows slices to be granted on one or more of the arrays (Stargates, XSMs,
TelosBs, iMote2s) that comprise its substrate.
The following is a list of the resources that are available on each type of node:
 A Stargate is equipped with a 400 MHz PXA255 XScale processor capable of running
Linux, 64 MB RAM and 32 MB flash. It also contains a 10/100 Ethernet NIC, an
802.11b NIC, an RS-232 serial port, and one USB port.
 An XSM has a 4 MHz 8-bit CPU, 128 KB instruction memory and 4 KB RAM.
For communication, the mote uses a 916 MHz low-power single-channel radio
capable of delivering 10 KB/sec of raw bandwidth. The radio's range is between 10
and 25 feet. It is thus identical to the Mica2 as a node platform, but differs in that it
has a variety of sensors and actuators including a magnetometer, a microphone,
four passive infrared receivers, a photocell, a sounder, and feedback LEDs. XSMs
were designed specifically for the ExScal project by The Ohio State University, the
University of California at Berkeley, and CrossBow Technology, and were
manufactured by CrossBow Technology.
 A TelosB mote is an IEEE 802.15.4 compliant device that is manufactured by
Crossbow Inc. It has a TI MSP430 microcontroller, a 2.4 GHz Chipcon CC2420
radio, a 4 MHz 8-bit CPU, 128 KB instruction memory, and 4 KB RAM.
 An iMote2 (IPR2400) is an advanced wireless sensor node platform with CPU and
memory at a micro-server level yet with a sensor-mote energy footprint. It is built around
the low-power PXA271 XScale processor, running at 14 to 416 MHz, and integrates
an 802.15.4 radio (CC2420) with a built-in 2.4 GHz antenna. It contains 256 kB
SRAM, 32 MB FLASH and 32 MB SDRAM. Each iMote2 is connected to a Stargate
individually through a USB hub connected to the USB port of the Stargate.
Kansei allows any OS that has been ported to the mote platforms to be loaded on that
mote. In the past, a significant percentage of the experiments have chosen TinyOS for the
motes.
Since conventional mote operating systems do not support multi-threading efficiently,
each slice today typically requests a number of “full nodes”. (We fully expect this situation
to change in the future.)
Configuration and code deployment on the slices is performed via the Ethernet network
to the Stargates, and as needed through the serial port of the Stargate onto the XSM, and the
USB port to the TelosB or iMote2. We note that a part of the Stargate's resources is
reserved for managing the XSMs and TelosB motes.
Kansei provides isolation to concurrent experiments through a couple of techniques.
Through its scheduling service, it offers the option that no concurrent job runs on the same
type of array. In terms of radio interference, users can choose different frequencies for their
experiments to achieve radio band isolation. However, Kansei can only guarantee
compile-time isolation of radio interference at the moment. Since radio frequencies used by
sensor nodes can be changed at run-time, we need further techniques to improve radio
isolation.
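As an illustration of this compile-time radio isolation, a minimal sketch is given below, assuming the scheduler simply hands each concurrent experiment a distinct IEEE 802.15.4 channel; the function and its inputs are assumptions for illustration, not the actual Kansei scheduler interface.

def assign_channels(experiments):
    """Give each concurrent experiment its own 802.15.4 channel (11-26 in the
    2.4 GHz band) so their radios do not interfere at compile time."""
    channels = list(range(11, 27))
    if len(experiments) > len(channels):
        raise ValueError("not enough non-overlapping channels for isolation")
    return dict(zip(experiments, channels))

# Example: three concurrent jobs get channels 11, 12, and 13.
#   assign_channels(["job-a", "job-b", "job-c"]) -> {"job-a": 11, "job-b": 12, "job-c": 13}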
NetEye consists of a fixed array of wireless and sensor nodes. The wireless nodes are
Dell Vostro 1400 laptops with an Intel dual-core processor, 1 GB RAM and an 80 GB hard
drive. These 15 laptops are placed on top of 15 wood benches deployed in a 3×5 grid.
(Note: the laptops are supporting TelosBs, and they are not designed to run GENI
experiments in Spiral 1.)
NetEye has 130 TelosB motes which are connected to the aforementioned laptops
through a USB 2.0 interface; 6-12 motes are connected to each laptop.
17.3 Aggregate Physical Network Connections (Horizontal Integration)
Provide an overview of the physical connections within the substrate aggregate, as well as between the
aggregate and the GENI backbones. Identify to the extent possible, non-GENI equipment, services and
networks involved in these connections.
The connections between the motes and the Stargates and central Kansei servers have
been detailed above.
The Kansei servers can be connected to OSCnet, a state-of-the-art Internet backbone
network in the state of Ohio (with bandwidths higher than the Internet2 backbone).
OSCnet ties in to Internet2.
In NetEye, the motes are connected to laptops through USB interface, and the laptops
are connected to the NetEye server via Ethernet. The NetEye server will be connected to
the WSU campus network, via which the NetEye server (and the NetEye substrate) will be
connected to the Internet2 backbone through Merit Networks. There may be a cost
associated with the Internet2 connection.
17.4 Aggregate Manager Integration (Vertical Integration)
Identify the physical connections and software interfaces for integration into your cluster's assigned
control framework
The Kansei Server acts as the interface between the Kansei substrates and GENI
backbone. The Kansei Server is connected to the OSU campus network, which is part of
the state-wide OSCnet. OSCnet ties in to Internet2 at Cleveland, OH.
The interface between NetEye and the GENI backbone is the NetEye server. The
NetEye server will be connected to the WSU campus network, via which the NetEye server
(and the NetEye substrate) will be connected to the Internet2 backbone through Merit
Networks.
Both the Kansei Server and the NetEye Server will expose APIs as web services for scheduling,
deploying, and interacting with experiments on the substrates.
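As a hedged illustration of what such a web-service call might look like from a user's tool, a scheduling request could be posted as follows; the endpoint path, parameter names, and JSON encoding are assumptions, since the actual Kansei/NetEye API is defined by the project.

import json
import urllib.request

def schedule_job(server_url, array, node_count, image_path):
    """Post a hypothetical scheduling request to a Kansei/NetEye-style server."""
    payload = json.dumps({
        "array": array,        # e.g. "telosb", "xsm", or "stargate"
        "nodes": node_count,   # number of nodes requested for the slice
        "image": image_path,   # binary image to program onto the devices
    }).encode("utf-8")
    request = urllib.request.Request(
        server_url + "/schedule",                    # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))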
Previously, Kansei had implemented its own control framework. In cooperation with
the GENI project, we now plan to use the ORCA control framework to integrate with the rest
of the GENI framework. The refactored KanseiGenie architecture is shown below.
Specifically, the ClearingHouse will be accommodated into ORCA. Detailed software
interfaces are available from our Kansei Genie Wiki at
http://sites.google.com/site/siefastgeni/documents-1 .
17.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
The Kansei health management service, called ‘Kansei Doctor’, is an autonomous
health service that periodically measures the number of available nodes, their health, radio link
quality and other networking metrics relevant for sensor networks. Kansei also provides
storage space for users to log their custom experimental data, and it has user interaction
services such as data/command injection.
NetEye will use the Kansei health services/tools. We plan to develop a number of new
tools to measure different environmental factors for sensor networks. NetEye provides
storage space for users to log their experimental data, and user interaction services just as
Kansei does.
Any tools developed in the future will be used by both NetEye and Kansei.
17.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
Kansei and NetEye provide a web portal for local or remote users to easily access the
testbed. Using the current web interface, an authorized user can create slices, schedule a job
on the testbed to automatically program the sensor and wireless devices on a particular
slice, and store the experimental data on the server. The Kansei health monitoring service,
called the ‘Kansei Doctor’, is an autonomous, self-contained health monitoring system which
can be requested to run alongside an experiment to monitor the job in real time. Kansei and
NetEye both support scripted, real-time data and event injection.
The entire set of health monitoring and data injection services will be available to
users. Apart from that, we plan to develop online user interaction services, called ‘dashboard’ services, which are unique to our substrate and will be available to users.
The resource scheduling mechanism that we plan to implement will also ensure a fair
share of resources for users and allow users to request resources dynamically.
18 Orbit
Substrate Technologies:
Cluster: E
Project wiki http://groups.geni.net/geni/wiki/ORBIT
18.1 Substrate Overview
Provide an overview of the hardware systems in your contribution.
18.2 GENI Resources
Discuss the GENI resources offered by your substrate contribution and how they are shared,
programmed, and/or configured. Include in this discussion to what extent you believe the shared
resources can be isolated.
18.3 Aggregate Physical Network Connections (Horizontal Integration)
Provide an overview of the physical connections within the substrate aggregate, as well as between the
aggregate and the GENI backbones. Identify to the extent possible, non-GENI equipment, services and
networks involved in these connections.
18.4 Aggregate Manager Integration (Vertical Integration)
Identify the physical connections and software interfaces for integration into your cluster's assigned
control framework
18.5 Measurement and Instrumentation
Identify any measurement capabilities either embedded in your substrate components or as dedicated
external test equipment.
18.6 Aggregate Specific Tools and Services
Describe any tools or services which may be available to users and unique to your substrate
contribution.
19 Acronyms
The following table defines acronyms used within the GENI Project.
GENI    Global Environment for Network Innovations