GENI Goals for Spiral 2, and Cluster D Roadmap to end of Spiral 1 and for Spiral 2
Revised during conference call on June 11, 2009
Additions on July 1, 2009
Needs to be finalized in Cluster D meeting on July 2, 2009
Contents
1. GENI goals for Spiral 2
2. Cluster D roadmap to end of Spiral 1 and for Spiral 2
   2.1 Live Experiments
   2.2 Identity Management
   2.3 Improved Integration
   2.4 Instrumentation
   2.5 Interoperability
3. ORCA Integration Roadmap
   Brief ORCA overview
   Software integration overview
      Substrate
      Portals/tools
   Infrastructure integration overview
   ORCA feature/capability roadmap
1. GENI goals for Spiral 2
1) Live experiments
The central goal of Spiral 2 is to support significant numbers of research
experiments in the end-to-end prototype systems.
The GPO expects live experimentation to begin near the end of Spiral 1, which will
intensify through Spiral 2 as we begin continuous operation of the prototype
systems. This will begin to give us all substantial (early) operational experience, as
these experiments will help us all understand the prototypes' strengths and
weaknesses, which will drive our Spiral 3 goals.
Additionally, we expect development in Spiral 2 to emphasize:
2) Identity management
3) Improved integration (data & control planes, within clusters)
4) Instrumentation
5) Interoperability (permitting clusters to access the widest number of aggregates)
2. Cluster D roadmap to end of Spiral 1 and for Spiral 2
These are proposed items (from the June 11 meeting) that have been organized
according to the Spiral 2 goals. Some items have been added.
What items should be added? Deleted?
What is the importance of each item (H, M, L)?
What is the difficulty of each item (H, M, L)?
2.1 Live Experiments
a) The GPO expects live experimentation to begin near the end of Spiral 1.
b) What experiments can be done with BEN by the end of Spiral 1?
c) What experiments can be done with DOME by the end of Spiral 1?
(By the end of summer, GENI will be the only way to access DOME.)
d) What experiments can be done with ViSE by the end of Spiral 1?
(By the end of summer, will GENI be the only way to access ViSE?)
e) What experiments can be done with Kansei by the end of Spiral 1?
f) In Spiral 2, we begin continuous operation of the prototype systems.
Perhaps divide into local access and centralized GENI Cluster D access.
Substrate owner is responsible for continuous local access.
RENCI is responsible for clearinghouse (broker).
Who is responsible for overall Cluster D availability?
What are responsibilities of GMOC?
g) The central goal of Spiral 2 is to support significant numbers of research
experiments in the end-to-end prototype systems.
What is end-to-end?
Between which aggregates?
What are likely needs for experimenters? Servers?
Sensors with clouds
Content with buses
Buses with radar
2.2 Identity Management
As Cluster D seeks to include more users (researchers), it needs better identity
management.
a) What will be done by the end of Spiral 1 to serve “other GENI users” by the
individual aggregates?
DOME: login on slice-controller local db
Kansei: local db
b) Would it be helpful to define a master list (registry) of “other GENI users”?
How?
Where would it be stored?
How would it be used?
c) How could Cluster D move to use Shibboleth/InCommon for identity management?
Note: RENCI has written a solicitation 2 proposal that covers this.
Shibboleth is deployed at UNC and Duke.
It supports direct login by a user at the portal.
Shibboleth has been used in the portal-to-actor path.
Tomcat integration is not too hard (per Jeff).
Shibboleth plugins exist for Apache and ...
2.3 Improved Integration
a) Inter-actor SOAP implementation
Current state: the implementation is functional and undergoing robustness testing.
Future enhancements: a fully functional SOAP implementation capable of interconnecting actors located in different containers, with no restrictions on actor/container arrangements.
Expected date: 06/2009 (done)
b) Simplify administration and operation of the security mechanisms between servers and of the messages between actors. How should keys be distributed? The current manual process is ugly and does not scale; it needs documentation and a security review.
c) Privacy is not currently provided for messages between actors; add it with HTTPS? Not expected soon.
d) Configuration manager (both at site authority and slice controller)
Current state: the configuration manager is capable of invoking a series of Ant tasks that always execute in the same sequence. It is limited to idempotent driver implementations using either a node agent or an XML-RPC interface.
Future enhancements: a flexible configuration manager with an abstraction layer for device capabilities using XML and perhaps a scripting language (Groovy, Jython, Ruby), capable of executing arbitrary sequences of configuration actions with complex control flows on complex substrates. At the site authority, support for complex slivers; from the slice controller, configuration of slivers. Configuration commands can be created using either the NDL toolkit or custom mechanisms.
Expected date: TBD, expected late 2009
Notes: important, since Ant is too limited. Who does the work: the handler or the portal?
e) Clean up and fully specify the interfaces between actors.
SOAP interfaces with WSDL are used today, plus piggybacked resource descriptions.
This is mainly a resource description problem: how to describe resources, and then how to extend the descriptions? All interfaces need to be extensible.
f) Resource description mechanisms
Current state: the most commonly used resource description mechanism is table driven (i instances of type T).
Future enhancements: an NDL-OWL based resource description framework extended to multiple substrate types: optical networks, wireless networks, edge resources.
Of interest to Cluster D:
ViSE: VMs with sensors
DOME: as a VM today; moving towards a cloud
KANSEI: may be of use
Expected date: on-going work; early prototype schema available 07/2009
g) Resource allocation policies
Current state: current policies rely on the table-driven resource description model.
Future enhancements: a policy framework to allow for co-scheduling of disparate types of resources, including wireless, optical network links, edge resources, and core network resources, as well as cross-layer path computation under multiple constraints.
Notes: needed as resource descriptions improve, for multiple substrates and for disparate resources.
Expected date: on-going work; early prototype API available 07/2009
h) Management interface
Current state: the management interface is embedded in the portal attached to each container. It allows for implementation of custom plugins that implement management tasks. It is undocumented and is not remoted: it can be driven only by local Java code. As we move forward, we need to retain the local interface.
Future enhancements: an extensible SOAP-based interface allowing ORCA to interface with management entities (e.g., GMOC, distributed network operations, visualization tools, etc.) and to look at actors, slices, etc.
Expected date: TBD
i) ORCA clearinghouse
Current state: no externally accessible clearinghouse is available.
Future enhancements: an externally accessible clearinghouse maintained as a production capability, ready to accept broker implementations from other teams. Perhaps add emergency stop, but not general manipulation.
Expected date: 07/2009
In order to support the needs of Cluster D and fulfill the Spiral 1 Year 1
requirements of the GPO, the ORCA-BEN project will create an ORCA clearinghouse,
capable of supporting delegation of resources by multiple participating institutions
and testbeds.
This clearinghouse will be represented by a host running a Tomcat container with
multiple brokers contained within it. The ORCA team will be responsible for maintaining
this hardware/software configuration. The brokers, deployed into the
clearinghouse, will be responsible for allocating different types of substrate. The
integration of other projects with the ORCA clearinghouse will occur in multiple
phases:
Individual projects implement and test their brokers locally and, when ready,
deploy their broker(s) into the ORCA clearinghouse. The teams continue running
other actors (site authorities and slice controllers) locally and change their
configuration to communicate with their broker(s) inside the ORCA clearinghouse
using SOAP. For slices that span multiple testbeds, service managers will be
designed to acquire resources from more than one broker inside the ORCA
clearinghouse. This will be accomplished by the end of Spiral 1 year 1 to satisfy GPO
integration goals. Limited capabilities for co-scheduling of resources exist today that
can be used in this phase.
(add drawing)
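As a concrete (hypothetical) illustration of this phase, the following Java sketch shows a service manager acquiring resources for one slice from two brokers in the clearinghouse. The Broker interface, resource type names, and ticket strings are assumptions for illustration, not ORCA's actual API.

import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: a slice spanning multiple testbeds acquires
// resources from more than one broker in the ORCA clearinghouse.
// Broker/ticket types are illustrative, not ORCA's actual classes.
public class MultiBrokerRequest {

    interface Broker {
        /** Returns a ticket identifier, or null if the request is denied. */
        String requestTicket(String sliceId, String resourceType, int units);
    }

    /**
     * All-or-nothing acquisition across brokers. This is not true
     * co-scheduling: a failure after partial success would need cleanup
     * in real code, which matches the "limited capabilities" noted above.
     */
    public static List<String> acquire(String sliceId,
                                       Broker benBroker, Broker domeBroker) {
        String vlan = benBroker.requestTicket(sliceId, "ben.vlan", 1);
        String bus  = domeBroker.requestTicket(sliceId, "dome.bus", 3);
        if (vlan == null || bus == null) {
            throw new IllegalStateException("co-scheduling failed for " + sliceId);
        }
        return Arrays.asList(vlan, bus);
    }
}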
j) Support use of both local broker (near aggregate) and centralized broker.
Can be done by site authority today.
k) Extended brokers
In conjunction with other teams, ORCA team will design and implement brokers
capable of co-scheduling resources from multiple testbeds as well as slice
controllers capable of communicating with the new brokers. This work will involve
integration of NDL; it will begin at the end of Spiral 1 and proceed during Spiral 2 (year 2).
l) Implement broker-to-broker messages.
Having one broker allocate resources to another broker is not too hard.
However, it is hard if the brokers are peers.
m) How is resource discovery done by service managers?
Need to find all brokers.
How do service managers know about available brokers?
One central registry with pointers to brokers?
Use CNRI (handle system) as central registry?
n) Experiments involving multiple aggregates
Between which aggregates?
What are likely needs for experimenters? Also servers?
Requires backbone connectivity of VLANs
o) Backbone connectivity between aggregates.
Layer 2 (VLANs) via Internet2
How to connect an aggregate?
How to set up within the aggregate?
How to set up connections within Internet2?
p) VLANs from BEN to Internet2?
q) VLANs from DOME/ViSE to Internet2?
r) VLANs from Kansei to Internet2?
2.4 Instrumentation
a) GMOC feed from ORCA actors.
b) A better way to parse and understand logs in ORCA actors (being worked on).
c) A way to examine and decode ORCA messages. Only via logs?
d) BEN: not much for solicitation 1; a proposal covering experiment tool support was written for solicitation 2.
GMOC feed from BEN.
e) DOME: monitoring of operations (status, etc.) is there now.
Need a mechanism to allow the experimenter to get the data off the bus: data to the researcher, data to a repository (embargoed for some time; see ORBIT for a repository model).
GMOC feed from DOME.
f) ViSE: rudimentary monitoring of the nodes and their wireless connectivity; also Xen statistics.
GMOC feed from ViSE.
Radar data: 10 Mbps to 100 Mbps, continuous, time-stamped.
g) KANSEI: promised measurement for job scheduling, what nodes and links are
up, etc.
GMOC feed from KANSEI
By the end of year 1: Kansei users can check the status of their experiments (e.g., success, failure, or pending).
By the end of year 2: Kansei users can interact with and control their experiments through the "Experiment Interaction User Service".
Store data at the source and process it there?
Data between sensor fabrics, and to clouds.
Towards a producer-consumer network.
h) Plan for Embedded Real-Time Measurements in Spiral 2?
Prototype hardware
Interface with BEN
Interface with the control framework
i) Also from Paul Barford, etc.
2.5 Interoperability
Interoperability with other clusters would be very helpful in Spiral 2.
How to utilize multiple clusters
a) Export ProtoGENI interface from Slice Controller actor.
Interface for external experiment control tools or experiment managers:
Current state: custom XML-RPC built for Gush and DOME, based on current code (JAWS) that needs to be cleaned up.
Future enhancements: these interfaces allow externally built actors/tools to interact with a generic ORCA slice controller. We plan a generic XML-RPC interface, stable and public, for use by experiment control tools written in multiple languages. This interface supports a subset of ORCA resource reservation capabilities modeled after the ProtoGENI XML-RPC interface.
Expected date: TBD
Notes: this might enable a ProtoGENI experiment controller to configure and launch an experiment in Cluster D. It would not generally allow a researcher in Cluster C (ProtoGENI) to access a resource in Cluster D.
b) Integrate ORBIT as an aggregate into Cluster D.
Start with common resource representations
Then, tighter integration might be done
c) Coordinated Layer 2 connections between slivers in aggregates in different
clusters.
Layer 2 (VLANs) via Internet2
How to connect an aggregate?
How to set up within the aggregate?
How to set up connections within Internet2?
d) Generalized resource discovery
What about queries from other clusters?
What about queries to other clusters?
One central registry for pointers to all resources in all clusters?
3. ORCA Integration Roadmap
Provided by the ORCA/BEN project on May 26, 2009
This document describes the roadmap for integration within ORCA. Various options for software integration with the ORCA codebase are outlined. Future code features/enhancements and permanent infrastructure (the ORCA-BEN GENI clearinghouse) are also discussed.
This document is intended for the Cluster D projects, as well as any other project wishing to integrate their infrastructure (substrate) and tools with the ORCA control framework.
Brief ORCA overview
ORCA implements a number of actors (as Java classes) capable of running on a single host (within a single Tomcat container) or on different hosts, communicating with each other using secure SOAP (see the feature/capability roadmap at the end of this document for full SOAP availability). These actors are:
(a) Broker, containing a policy for allocating resources of different types (runs within a clearinghouse).
(b) Aggregate manager or authority for a site or domain, administratively responsible for some elements of a substrate. An authority delegates allocation of its resources to a broker.
(c) Service manager/slice controller, acting on behalf of a researcher to request resources from the broker and then configure them for the experiment (the sketch after this list illustrates the delegation chain).
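To make the division of labor concrete, here is a minimal, hypothetical Java sketch of the three actor roles and the delegation chain between them: an authority delegates units to a broker, a service manager requests a ticket from the broker, and the authority redeems it. The interfaces and method names are illustrative assumptions, not ORCA's actual classes; the ticket/redeem naming follows common leasing-system usage.

// Hypothetical sketch of the three ORCA actor roles and the delegation
// chain between them. All names here are illustrative assumptions.
public interface Actors {

    /** Allocates resources of different types; runs within a clearinghouse. */
    interface Broker {
        /** A site authority delegates units of a resource type to the broker. */
        void acceptDelegation(String authorityId, String resourceType, int units);

        /** A service manager requests a ticket for units of a resource type. */
        String requestTicket(String sliceId, String resourceType, int units);
    }

    /** Administratively responsible for some elements of a substrate. */
    interface SiteAuthority {
        /** Delegate allocation of this authority's resources to a broker. */
        void delegateTo(Broker broker, String resourceType, int units);

        /** Redeem a ticket: install the slice-specific configuration. */
        void redeem(String ticket, String sliceId);
    }

    /** Acts on behalf of a researcher: request resources, then configure them. */
    interface ServiceManager {
        void setupSlice(String sliceId, Broker broker, SiteAuthority authority);
    }
}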
A site authority contains a configuration manager (class Config), capable of installing slice-specific configurations into the substrate hardware elements on behalf of either the broker or the slice controller. The current configuration manager is capable of invoking a series of Ant tasks for a specific type of substrate to accomplish substrate configurations (discussed in more detail below; see the feature/capability roadmap for notes on enhancements to the configuration manager).
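As a concrete illustration of the Ant-based approach, the following hypothetical Java sketch drives one target of a handler build file programmatically using the standard org.apache.tools.ant API. The class name, target names, and property keys are assumptions for illustration, not ORCA's actual Config class.

import java.io.File;
import java.util.Map;

import org.apache.tools.ant.DefaultLogger;
import org.apache.tools.ant.Project;
import org.apache.tools.ant.ProjectHelper;

// Hypothetical illustration of invoking a handler's Ant tasks from Java,
// in the spirit of ORCA's configuration manager (not the actual Config class).
public class AntHandlerRunner {

    /**
     * Runs one target of a handler build file, passing sliver-specific
     * properties (e.g., a VLAN tag or port name) to the Ant tasks.
     */
    public static void runTarget(File buildFile, String target,
                                 Map<String, String> properties) {
        Project project = new Project();
        project.init();

        // Send task output to the console so operators can follow progress.
        DefaultLogger logger = new DefaultLogger();
        logger.setOutputPrintStream(System.out);
        logger.setErrorPrintStream(System.err);
        logger.setMessageOutputLevel(Project.MSG_INFO);
        project.addBuildListener(logger);

        // Each property becomes a ${...} reference inside the Ant tasks.
        for (Map.Entry<String, String> e : properties.entrySet()) {
            project.setUserProperty(e.getKey(), e.getValue());
        }

        ProjectHelper.configureProject(project, buildFile);
        project.executeTarget(target); // e.g., "setup" or "teardown"
    }
}

Because the tasks always execute in the same sequence, handlers written this way should be idempotent, which matches the limitation noted in the roadmap.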
Actors within a single Tomcat container are configured by using a web front-end attached to the container. Using the web front-end, for example, resources can be granted to a site authority, and a site authority can delegate its resources to a specific broker. The service manager allows for the creation of plugins that act as slice controllers on the substrate by requesting and configuring resources from one or more brokers.
Software integration overview
Substrate
In order to integrate elements of substrate with ORCA, one must identify the different types of substrate available and create a broker policy to allow for the allocation of the substrate. A policy, implemented as a Java class, can contain any number of rules and constraints on how specific types of substrate must be allocated. The current ORCA codebase contains a number of policy examples, the most commonly used of which is a table-driven scheduling policy, capable of allocating a certain number i of units of type T from pools of interchangeable units of type T that were delegated to the broker by one or more site authorities (see the feature/capability roadmap for the introduction of semantic resource representations using NDL).
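To illustrate the table-driven idea, the following is a minimal, hypothetical Java sketch of a policy that allocates i interchangeable units of type T from delegated pools. The class and method names are illustrative assumptions, not the actual ORCA policy classes.

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a table-driven broker policy: allocate i
// interchangeable units of resource type T from pools delegated by
// site authorities. Names are illustrative, not ORCA's policy classes.
public class TableDrivenPolicy {

    // Resource type -> number of units currently available to this broker.
    private final Map<String, Integer> pools = new HashMap<String, Integer>();

    /** A site authority delegates units of a resource type to the broker. */
    public synchronized void delegate(String type, int units) {
        Integer current = pools.get(type);
        pools.put(type, (current == null ? 0 : current) + units);
    }

    /** Try to reserve count units of a type; all-or-nothing. */
    public synchronized boolean allocate(String type, int count) {
        Integer available = pools.get(type);
        if (available == null || count > available) {
            return false; // not enough interchangeable units in the pool
        }
        pools.put(type, available - count);
        return true;
    }

    /** Return units to the pool when a lease expires or is released. */
    public synchronized void release(String type, int count) {
        delegate(type, count);
    }
}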
The second part of substrate integration is needed to allow ORCA to create slivers of the substrate by allowing it to pass configuration commands to the substrate elements. Each substrate component type (sliver type) provides a handler script to set up or tear down slivers on a component of that type. In principle, handlers could be written in any scripting language that can be invoked from Java (e.g., Jython, JRuby, Groovy), but all current handlers are scripted using Ant. Using the Ant approach, at least two alternatives are possible for handlers to pass configuration commands to the components:
(a) The handler uses secure SOAP to invoke component-specific driver modules. Drivers are written in Java and run within a Java-based node agent supplied with the ORCA code. The node agent is deployed either onto the substrate component (if it supports Java VMs) or onto a proxy. The drivers are capable of executing commands on the component, based on properties passed through the handler. Examples of such mechanisms are the ORCA-BEN integrations with Cisco 6509 switches, Polatis fiber switches, and Infinera DTNs.
(b) The handler uses a messaging protocol such as XML-RPC to configure the substrate. The substrate elements expose custom configuration interfaces in XML-RPC, which are invoked by the handler. An example of this type of integration is contained in the DOME code. This approach does not require a node agent and allows the use of non-Java substrate drivers (a sketch of this alternative follows).
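As a sketch of alternative (b), the following hypothetical Java handler fragment uses the Apache XML-RPC client library to invoke a custom configuration interface on a substrate element. The endpoint URL, method name (sliver.setup), and parameters are illustrative assumptions, not the actual DOME interface.

import java.net.URL;

import org.apache.xmlrpc.XmlRpcException;
import org.apache.xmlrpc.client.XmlRpcClient;
import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;

// Hypothetical sketch of alternative (b): a handler configuring a sliver by
// calling a custom XML-RPC interface exposed by the substrate element.
public class XmlRpcSliverHandler {

    public static void setupSliver(String endpoint, String sliverId,
                                   int vlanTag) throws Exception {
        XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
        config.setServerURL(new URL(endpoint));

        XmlRpcClient client = new XmlRpcClient();
        client.setConfig(config);

        // Properties the handler would normally receive from the
        // configuration manager are passed as ordinary RPC parameters.
        Object[] params = new Object[] { sliverId, Integer.valueOf(vlanTag) };
        try {
            Object status = client.execute("sliver.setup", params);
            System.out.println("sliver.setup returned: " + status);
        } catch (XmlRpcException e) {
            // A real handler would propagate failure so the lease
            // is not reported as active.
            throw new Exception("sliver setup failed: " + e.getMessage(), e);
        }
    }
}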
Portals/tools
To integrate a portal or tool acting on behalf of a user, it is necessary to create a communications mechanism between the portal/tool and the ORCA service manager. This mechanism would allow the portal/tool to manage requests for resources from the ORCA broker(s), mediated by the slice controller. The mechanism could be fully custom, or it can be built using the currently available examples of such a mechanism: one implemented as XML-RPC to allow for the integration with Gush, as well as a similar example implemented for the integration of the DOME project (see the feature/capability roadmap for XML-RPC enhancements).
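To show the server side of such a mechanism, the following hypothetical Java sketch exposes a small XML-RPC surface for portals/tools using the Apache XML-RPC WebServer class. The handler class, method names, and port are illustrative assumptions, not the actual Gush or DOME integration code.

import org.apache.xmlrpc.server.PropertyHandlerMapping;
import org.apache.xmlrpc.server.XmlRpcServer;
import org.apache.xmlrpc.webserver.WebServer;

// Hypothetical sketch of the slice-controller side: an XML-RPC endpoint
// that a portal or experiment tool could call.
public class SliceControllerEndpoint {

    // Public methods of this class become XML-RPC methods "slice.<name>".
    public static class SliceHandler {
        public String requestResources(String type, int units) {
            // A real implementation would forward the request to the
            // service manager, which negotiates a lease with the broker.
            return "ticket-for-" + units + "-" + type;
        }

        public boolean releaseSlice(String ticketId) {
            // A real implementation would forward teardown to the handler.
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        WebServer webServer = new WebServer(8090); // illustrative port
        XmlRpcServer rpcServer = webServer.getXmlRpcServer();

        PropertyHandlerMapping mapping = new PropertyHandlerMapping();
        mapping.addHandler("slice", SliceHandler.class);
        rpcServer.setHandlerMapping(mapping);

        webServer.start();
        System.out.println("Slice controller XML-RPC endpoint on :8090");
    }
}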
Infrastructure integration overview
In order to support the needs of Cluster D and fulfill the Spiral 1 Year 1
requirements of the GPO, the ORCA-BEN project will create an ORCA clearinghouse,
capable of supporting delegation of resources by multiple participating institutions
and testbeds.
This clearinghouse will be represented by a host running a Tomcat container with
multiple brokers contained within it. The ORCA team will be responsible for maintaining
this hardware/software configuration. The brokers, deployed into the
clearinghouse, will be responsible for allocating different types of substrate. The
integration of other projects with the ORCA clearinghouse will occur in multiple
phases:
Phase 1. Individual projects implement and test their brokers locally and,
when ready, deploy their broker(s) into the ORCA clearinghouse.
The teams continue running other actors (site authorities and slice
controllers) locally and change their configuration to communicate
with their broker(s) inside the ORCA clearinghouse using SOAP.
For slices that span multiple testbeds, service managers will be
designed to acquire resources from more than one broker inside
the ORCA clearinghouse. This will be accomplished by the end of
Spiral 1 year 1 to satisfy GPO integration goals. Limited
capabilities for co-scheduling of resources exist today that can be
used in this phase.
Phase 2. In conjunction with other teams, ORCA team will design and
implement brokers capable of co-scheduling resources from
multiple testbeds as well as slice controllers capable of
communicating with the new brokers. This work will involve
integration of NDL; it will begin at the end of Spiral 1 year 1 and proceed during Spiral 1 year 2.
ORCA feature/capability roadmap
The following roadmap describes features and capabilities of ORCA software as well
as the development of the infrastructure required to roll out a production-level ORCA
clearinghouse capable of supporting other projects.
Feature/capability: ORCA clearinghouse
Current state/limitations (as of 05/2009): no externally accessible clearinghouse available.
Future enhancements: an externally accessible clearinghouse maintained as a production capability, ready to accept broker implementations from other teams.
Expected date: 07/2009

Feature/capability: Inter-actor SOAP implementation
Current state/limitations: the current implementation is functional and undergoing robustness testing.
Future enhancements: a fully functional SOAP implementation capable of interconnecting actors located in different containers, with no restrictions on actor/container arrangements.
Expected date: 06/2009

Feature/capability: Configuration manager
Current state/limitations: the current configuration manager is capable of invoking a series of Ant tasks that always execute in the same sequence. It is limited to idempotent driver implementations using either a node agent or the XML-RPC interface described above.
Future enhancements: a flexible configuration manager with an abstraction layer for device capabilities using XML and perhaps a scripting language (Groovy, Jython, Ruby), capable of executing arbitrary sequences of configuration actions with complex control flows on complex substrates. Configuration commands can be created using either the NDL toolkit or custom mechanisms.
Expected date: TBD, expected late 2009

Feature/capability: Resource description mechanisms
Current state/limitations: the most commonly used resource description mechanism is table driven (i instances of type T).
Future enhancements: an NDL-OWL based resource description framework extended to multiple substrate types: optical networks, wireless networks, edge resources.
Expected date: on-going work; early prototype schema available 07/2009

Feature/capability: Resource allocation policies
Current state/limitations: current policies rely on the current table-driven resource description model.
Future enhancements: a policy framework to allow for co-scheduling of disparate types of resources, including wireless, optical network links, edge resources, and core network resources, as well as cross-layer path computation under multiple constraints.
Expected date: on-going work; early prototype API available 07/2009

Feature/capability: Interface for external experiment control tools or experiment managers
Current state/limitations: custom XML-RPC built for Gush and DOME.
Future enhancements: these interfaces allow externally built actors/tools to interact with a generic ORCA slice controller. We plan a generic XML-RPC interface for use by experiment control tools written in multiple languages. This interface supports a subset of ORCA resource reservation capabilities modeled after the ProtoGENI XML-RPC interface.
Expected date: TBD

Feature/capability: Management interface
Current state/limitations: the current management interface is embedded in the portal attached to each container. It allows for implementation of custom plugins that implement management tasks.
Future enhancements: an extensible SOAP-based interface allowing ORCA to interface with management entities (e.g., GMOC, distributed network operations, visualization tools, etc.).
Expected date: TBD