Draft D5.2
Ver. 1
Table of Contents
Executive summary
Introduction
Deviations from the project road map
Lessons learned
Use case reports
   UC1 Aggregated Virtual CPE
   UC2 Multiple Networks Virtual CPE
   UC3 Dynamic Network in GRID Environment
      Setup and environment
      Testbed hardware and software used
      Creation of the use case testbed
      Adding the hosts to the StratusLab installation
      Running VMs on the hosts
      Outcome of VM attempts
      Conclusions
   UC4 Distributed Virtualized Hosting
   UC5 Ultra High Definition Multimedia Environment
      General goal
      Challenges and mitigation
      Lessons learned
      Summary
Key Performance Indicators
Perspectives for the future
Conclusion
Executive summary
..to fill..
Introduction
..to fill..
Deviations from the project road map
The use case regarding Distributed Virtualized Hosting suffered from a change of
plans by the partner involved. Accordingly, the use case was not investigated
fully. A lack of user demand for a private cloud and the introduction of a system
functionally compatible with the Mantychore software within the network
operations center were among the main reasons for down-prioritising the
use case. Nevertheless, the functionality required by this use case has been
addressed within the software development.
Software development experienced an overall delay, which influenced the
process of engaging users. The delay was reflected in the schedule for
deployment of services. This left less time and fewer development cycles
for the period in which users could try out and give feedback on the software. This
shortage of time affected the resulting maturity of the use cases.
The project has not led to a general service offering open to the whole NREN
community. The services have been tailored to the specific use cases. In UC1,
HEAnet is offering a new way to connect customers. This offer is in principle
open to any institution in its domain. In UC2, DeIC has demonstrated
connection to multiple networks as a pilot in a production environment.
However, DeIC is not offering this solution to other customers because the
business perspective has dried up.
Lessons learned
Unwillingness to take risk: The project partners experienced several times a
resistance among network operators to engaging in pilot setups involving the
production network. For good reason, these people care about the stability and
uptime of their network. Already in the project planning phase this applied to UC3
and UC5, where alternative project variants were accounted for. In UC2 we faced
this problem at the time we were to engage a concrete institution in the pilot setup.
Twice we were met with the message: "Interesting project, but we do not
want to take the risk of making a change to the production setup". Lack of
manpower resources was mentioned as part of the reason for declining
participation. The unwillingness to engage is probably also related to the
institutions not being contractually obligated within the project.
Use Case reports
UC1 Aggregated Virtual CPE
The use case for Virtual CPE for Educational Institutions is intended to solve
the problem of “equipment bloat” at the edge between an educational
institution and its National Research and Education Network (NREN). In order to properly
manage the connection between provider and client, it is common for an
NREN to deploy routers at the client site. This allows the NREN to separate
the two networks in a clean way, manage the connection all the way down to
the CPE device, and to accommodate the client’s specific requirements
without impacting on other clients’ networks.
However, over the last decade, as the services deployed have become more
sophisticated and reliability has become crucial, the amount of equipment
deployed at this network edge has increased considerably. By virtualising the
Client Premises Equipment (CPE) routing function, the goal is to reduce the
amount of equipment deployed by the NREN, while simultaneously increasing
the amount of control the customer has over this device that forms a part of
their network edge.
On a CPE device, the interface toward the provider is well known and
generally standardised, with an IP connection over an interface which is
routed using static routes or BGP. The interface toward the client institution is
much more flexible; it’s effectively a part of the client’s network and in some
way has to conform to that network. However, since both interfaces are
provided on a single device, inevitably either the provider or the customer
must manage an interface that the other is responsible for. So while reducing
the amount of physical equipment, we also take the opportunity to provide a
better way to manage this separation between client and provider.

Figure 1 Original state
Achieved goals
The first goal is to reduce the equipment deployed at the customer-provider
edge. A common layout (per Figure 1) is:
- one or two firewalls or routers (operated by customer)
- one or two client LAN switches (operated by customer)
- one or two CPE routers (may be operated by the NREN or by the customer)
- one or two provider service switches (operated by NREN)
- possible transmission equipment (operated by NREN or third-party
telecoms provider)
This scenario removes the CPE routers at each client, implementing this
function on separate aggregating routers closer to the core. Routing
separation is carried out on the aggregating routers, and management is
performed toward both the aggregating routers and the provider service
switch which remains on site. (See Figure 2.)

Figure 2 Aggregated vCPE

The pilot uses two aggregating routers, in separate locations, to implement
the vCPE. One of these is located in the HEAnet offices at George’s Dock,
central Dublin. The other is in a datacentre in Parkwest, West Dublin.

Figure 3 Close up of pilot scenario
The pilot scenario (see Figure 3) consists of these two routers, connected by
HEAnet’s point-to-point network (controlled using the GEANT Bandwidth on
Demand (BoD) interface, though the circuits in the scenario are entirely within
HEAnet’s domain.) The routers are connected to each other, have separate
connections upstream to the core (which in this case is provided by another
router in the central Dublin test lab), and connections downstream to a LAN in
the HEAnet office.
As well as the command line interface, the tool also presents a Graphical
User Interface (GUI) that is specific to this scenario. The GUI is presented
through a web browser. This allows the NREN’s Network Operations Centre
to create and manage multiple vCPE networks on behalf of clients, and also
allows the client to log in and edit their parameters if they wish. The created
vCPE follows the predefined template illustrated in Figure 3, including any
necessary routing configuration. A screenshot of the GUI is shown in Figure 4.
The initial configuration of the vCPE network via the GUI meets these goals:
- Connectivity is enabled to the downstream LAN without physical CPE
routers
- Available IP addresses and VLAN numbers are suggested to the user (see the sketch after this list)
- BGP is configured toward the provider
- Connectivity on the production point-to-point is requested via the BoD
interface
- Access control lists (firewall filters) can be configured
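As an illustration of the kind of suggestion logic involved, the short Python sketch below picks the next free VLAN ID and point-to-point subnet from configured pools. It is a minimal, hypothetical example: the pool ranges, names and data structures are assumptions for illustration and do not describe the actual OpenNaaS implementation.

    import ipaddress

    # Hypothetical pools; the real ranges are provider-specific and assumed here.
    VLAN_POOL = range(100, 200)
    CLIENT_SUPERNET = ipaddress.ip_network("192.0.2.0/24")  # documentation prefix

    def suggest_vlan(used_vlans):
        """Return the lowest VLAN ID in the pool not already in use."""
        for vlan in VLAN_POOL:
            if vlan not in used_vlans:
                return vlan
        raise RuntimeError("VLAN pool exhausted")

    def suggest_subnet(used_subnets, prefixlen=30):
        """Return the first free /30 (point-to-point) subnet in the supernet."""
        for subnet in CLIENT_SUPERNET.subnets(new_prefix=prefixlen):
            if subnet not in used_subnets:
                return subnet
        raise RuntimeError("Address pool exhausted")

    if __name__ == "__main__":
        print(suggest_vlan({100, 101}))  # -> 102
        print(suggest_subnet({ipaddress.ip_network("192.0.2.0/30")}))  # -> 192.0.2.4/30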
Ongoing configuration of the vCPE network via the GUI or CLI meets these
goals:
- Client can have restricted access to the functions that affect their internal
network
- Internal routing protocol (OSPF) can be enabled and disabled
- IPv6 connectivity can be configured
Remaining goals to be completed at this time are:
- Complete IPv6 configuration via the GUI
- Selection of internal routing protocols
Not planned to be completed:
- Enable initial BGP session with 0 prefixes permitted until selected by
customer (as provider takes responsibility for enabling the connection
correctly)
Problems encountered and mitigated
Full testing requires the GUI.
While we planned an aggressive schedule for a pilot of the service using
production customers, and the development schedule was set with the
required features in mind, it was not practical to begin this until the GUI was
fully formed and tested. To mitigate this, repeated tests and demonstrations
were performed both with the CLI and with a rudimentary GUI using the API,
on test connections using the underlying production network, and the first
production connection is now scheduled to be enabled as of the time of
writing of this document.
Customer/provider service combinations
The initial iteration of this use case requires certain prerequisites in the
combination of equipment deployed by the provider and by the client. In
particular, some clients have only a single physical connection and single
physical CPE. The initial scenario proposes to provide two VLANs over this
physical connection so that the client gets the benefit of physically separate
virtual CPE. However, this is only possible where either the provider can
deliver multipoint layer 2 VPN connections (which are not yet common) or the
client can land the VLANs on an existing switch. This is mitigated either by
providing a single connection (so no degradation from the previous state), or
by changing the equipment on site (which does provide an improvement in
resilience, but at the cost of some extra equipment.)
Extending the use case into a service
The NREN in this case plans to extend this use case into a service on an
aggressive timescale, and indeed has already announced the service to
customers at its national conference in late 2012. A pilot service is underway,
with funding sought to extend this pilot into a full production service.
Alternative user scenarios of OpenNaaS
The abstract scenario described in this use case, while simple, is common to
many smaller clients of an NREN. (Larger clients such as major universities
typically have more complex setups.) There are two clear paths for expansion
of this scenario.
The first is to expand it to more complex client networks. One postgraduate
institution has already expressed interest in using virtual CPE with OpenNaaS
for its network which consists of multiple sites connected to each other and
onward to the NREN. While the GUI that was developed for this use case is
quite specific to the scenario described above, the underlying OpenNaaS tool
can already support more complex topologies; further development would
consist of adding the remaining routing features required, and developing a
more flexible GUI for the scenario.
The second is to enable more sophisticated services from the NREN.
Virtualisation and automation of circuits enables services (such as vCPE
itself) which would be unfeasible if every connection required individual,
manual configuration or deployment of equipment. Once a client institution
has virtualised its CPE, it becomes possible to create a much more complex
topology - perhaps spanning more than one country - that remains a part of
the client’s network and can be fully managed by them. This enables
exploitation of services such as Bandwidth on Demand which today can
sometimes only be fully exploited if additional physical layer 3 equipment is
deployed.
Use case result
We have been, and continue to be, satisfied with this use case. The potential
service has generated user interest, and has been able to attract funding for
equipment to run it as a pilot, with a plan toward a full production service. We do
expect that further development will be needed as individual client
connections are moved over and particular customer requirements arise.
However, the OpenNaaS platform appears to be a very strong foundation for
this service.
Figure 4 Routing configuration in the virtual CPE GUI
UC2 Multiple Networks Virtual CPE
The Danish partner – UNI-C, now DeIC – entered the project to make use of the
developed technology in production environments.
UNI-C is a state agency under the Danish Ministry of Children and Education [1].
Among its missions, UNI-C operates different networks. At the time of the
application, UNI-C was for example operating the Danish NREN (Forskningsnet),
the Healthcare Data Network (Sundhedsdatanet) [2], and a network for the schools
(Sektornet). These networks are separate, dedicated networks. This means that
in order to be connected to one of these networks, an institution needs to have
dedicated equipment (for example a router). Access to this equipment is usually
regulated by a set of rules in order to guarantee the security of the network. This
is particularly crucial in the case of the Healthcare Data Network, which
exchanges personal medical data considered sensitive.
The Danish use case took its starting point in the situation of some institutions,
which happen to be connected to several dedicated networks operated by UNI-C.
This was in particular the case of University Hospitals, which are typically
connected to the Healthcare Data Network and the NREN. UNI-C wanted to use
OpenNaaS to emulate the current situation in a virtualized way. This means using
the same hardware – typically a router – to connect to different networks
while respecting the requirements in terms of security, separation and
administration.
In this situation, virtualization technologies offer the advantage of diminishing
economic costs (as less hardware has to be procured and maintained). OpenNaaS
presents a set of advantages compared to the virtualization functionalities
usually offered by hardware vendors. The first of these advantages is that
OpenNaaS is a single platform capable of managing different pieces of hardware.
Although the list of supported hardware remains limited at this stage, this means
that a network operator doesn’t have to deal with the evolution of the hardware
or its diversity. In this perspective, the solution offered by OpenNaaS presents
economies of scale, as the user configuring the connectivity for an institution
doesn’t necessarily have to deal with the lower layer in terms of hardware. A
further advantage compared to the virtualization functionalities offered by
different vendors is the capacity of OpenNaaS to assign different roles and define
different users. In this perspective, it is possible to have a situation where
different users do not have access to the same resources. This was a requirement
in the Danish use case as access to the hardware managing the connection to the
Healthcare data network is typically very controlled compared to the access to
the equipment connecting to the NREN.
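As an illustration only, the Python sketch below models the kind of role/resource separation described here, with access to the Healthcare Data Network resource restricted to fewer roles than NREN access; the role names and the can_configure helper are hypothetical and do not reflect the OpenNaaS role model or API.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Role:
        name: str
        allowed_resources: frozenset

    # Hypothetical roles mirroring the separation described above: the Healthcare
    # Data Network resource is accessible to fewer users than the NREN resource.
    NREN_OPERATOR = Role("nren-operator", frozenset({"router:nren-vrf"}))
    HEALTH_OPERATOR = Role("health-operator",
                           frozenset({"router:nren-vrf", "router:health-vrf"}))

    def can_configure(role: Role, resource: str) -> bool:
        """Return True if the given role may configure the named virtual resource."""
        return resource in role.allowed_resources

    assert can_configure(HEALTH_OPERATOR, "router:health-vrf")
    assert not can_configure(NREN_OPERATOR, "router:health-vrf")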
During the project, two challenges had to be addressed by the Danish partner.
The first challenge was organizational. UNI-C has evolved significantly since the
beginning of the project and its missions have been redefined. UNI-C is no longer
the operator of the Healthcare Data Network and the Danish NREN. The
Healthcare Data Network is now operated by a private partner – Netic – and the
NREN is now operated by the Danish e-Infrastructure Cooperation – DeIC [3]. DeIC
is a virtual organization administratively dependent on the Technical University
of Denmark (DTU). DeIC has taken over the responsibilities of UNI-C in the
Mantychore project. These organizational changes had a significant impact on the
Danish use case: UNI-C was no longer the only decision maker on the way the
networks had to be operated; other partners had to be convinced in order to
deploy a new technology on production networks.
The second challenge was linked to the delays experienced by the software
development team during the first phase of the project. Within the context of
the organizational change described above, the Danish partner suffered from the fact
that it could not demonstrate the qualities of the software early in the
project. The Danish partner experienced resistance from the new network
operators to implementing a solution on their production networks without a long
evaluation phase.
Within this context, the Danish use case has been a bit more limited than
expected. OpenNaaS is used at this stage by a single institution – DTU Food, the
National Food Institute [4]. The National Food Institute has more than 400
employees. They conduct research and advise Danish society on food-related
matters. The use case consisted in using OpenNaaS to re-connect the Institute to
the Healthcare Data Network. The Institute – which is connected to the network of
the Technical University – was no longer connected to the Healthcare Data
Network as a consequence of organizational changes (not related to the changes
experienced by UNI-C): the Institute used to be part of a ministry before it
became part of the Technical University of Denmark.
The set-up is described below.
The use of OpenNaaS has been satisfactory. The point of view of the technicians
who set up the system has been reported elsewhere: OpenNaaS was easy to use
once access to the relevant documentation was granted [5]. The end-users at DTU
Food also considered OpenNaaS satisfactory in the sense that it wasn’t disruptive
at all. From their point of view, the set-up of the connections was transparent.
When asked for their opinion on this set-up, the researchers at DTU Food
actually commented on the services they were granted access to (the connection
agreement system at the heart of the Healthcare Data Network) rather than on
the technical set-up [6].
The specific set-up in the Danish use case has opened perspectives for DeIC.
Instead of considering using OpenNaaS only as a platform used for the
virtualization of hardware and delegation of rights, DeIC is now investigating the
use of OpenNaaS as a way to solve last mile issues on campuses. As in the set-up
used for DTU Food, some researchers need to have access to some services
outside the campus in a way that does not disrupt or threaten the other services
offered by the campus network. For example, this is the case for the bandwidth-on-demand
service, which typically ends at the edge of the campus. It is also the case
for some situations in which researchers would like to have a private network
with specific resources. DeIC is now working on these scenarios.
Notes
1 See http://uni-c.dk/Service/English for a presentation of UNI-C.
2 See http://medcom.dk/wm109991 for a presentation of the Healthcare Data Network.
3 www.deic.dk
4 See http://www.food.dtu.dk/english/Service/About-the-institute for a presentation of DTU Food.
5 Refer to Stefan’s report.
6 Flemming Bager, Head of Division at DTU Food, stated: ”when we tried to log in, it went totally smoothly and we experienced the webpage to be very simple and easy to navigate”. He refers to the Connection Agreement System, which enables users connected to the Healthcare Data Network to select the institutions they want to be connected to and request their agreement for the connection.
UC3 Dynamic Network in GRID Environment
This use case uses an OpenNaaS-created network to provide connectivity
between the different components of a StratusLab cloud installation. A cloud
installation, by its nature, comprises a number of distributed components. The
functionality of the overall system relies on network connectivity. Flexibility in
how this connectivity is achieved may contribute to how flexible the layout of a
cloud installation can be.
Report overview
This report contains the following sections:
1. Objectives of use case
2. StratusLab introduction
3. Testbed hardware and software used
4. Creation of the use case testbed
5. Running VMs on the hosts
6. Outcome of VM attempts
7. Conclusions
Objectives of use case
This use case aims to use OpenNaaS to connect a StratusLab remote host using an
OpenNaaS-created virtual network. Prior to the host being added, it will be
accessible using (for example) a public IP address. For this use case, it is required
that the remote host appear in a private network address space, accessible by
the frontend machine. Using an OpenNaaS network will allow the remote host to
appear (to the FE machine) in this type of address space. For this use case, the
remote host will have private IP addresses in the 10.53.x.x range.
StratusLab introduction
StratusLab is a cloud platform designed to work alongside grid and cluster
computing. It is the output of a project co-funded by the European Community's
Seventh Framework Programme (Capacities) under Grant Agreement INFSO-RI261552. StratusLab integrates OpenNebula (ONE), which manages the machine
virtualisation layer. This usecase aims to use OpenNaaS in an installation of
enables users connected to the Healthcare Data Network to select institutions
they want to be connected to and request their agreement for the connection.
11
StratusLab to create 'overlay' private network. This will connect additional hosts
to a StratusLab installation where the prerequisite network configuration is not
already in place (i.e. where the host are in a remote location).
Architecture
A StratusLab installation consists of a number of components:
1. The Frontend (FE) machine acts as a user interface for both users and cloud
administrators. Each Cloud user has an account and access to StratusLab
commands which create, stop and monitor instances of virtual machines.
2. One or more Host machines, each of which has network connectivity to the FE
machine. Hosts must be ‘bare-metal’ machines and must support hardware
virtualisation. In order to successfully run a number of VMs, hosts
usually have multiple CPUs and large amounts of memory (RAM).
3. A Persistent Disk (P-Disk) server, providing the storage space (effectively the
disk drives) for each VM. VMs access their allocated disk storage using iSCSI
over the network. NFS may also be used.
4. A Marketplace, where user-created images for VMs are stored. When
required, an image is downloaded to the P-Disk machine. StratusLab security
features ensure that images are digitally signed when uploaded and are not
altered after that point.
The FE machine, the P-Disk and Marketplace machines can be (and often are) on
separate systems. In this use case, the FE and P-Disk are co-located on the same
machine. The Marketplace service is on a separate system and is shared by other
StratusLab installations. Figure 1 shows the layout of the StratusLab installation
used here.
Sequence for VM Creation
Before discussing how the virtual network will be used in this StratusLab
installation, it is necessary to outline the sequence of events which take place
when a new VM is created.
When a user initiates creation of a VM, the following steps take place:
1. The required image is downloaded from the Marketplace to the P-Disk server.
2. A storage volume for the VM is created on the P-Disk server, to where the
downloaded image is copied.
3. The VM is then booted on one of the available host machines.
4. During that boot process, the volume created on the P-Disk server is mounted on
the host and acts as its disk storage.
5. The network connecting the host to the frontend machine is used to access
this storage.
Setup and Environment
All machines used run CentOS 6.2+ (Red Hat compatible) operating systems.
StratusLab is installed using RPM packages, which are available in the
project repository. In preparation for adding a new host to an installation, the
root user account on the FE machine needs to have passwordless SSH access to all
potential hosts (achieved by placement of public keys). Other preparatory work
includes installing virtualisation packages. It is also necessary to create a
network bridge on each host. These steps are carried out manually by a cloud
system administrator. Following this preparation, hosts are added (and
removed) manually by the administrator.
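As a sketch of the key-placement step described above (assuming the standard ssh-copy-id tool is available on the FE machine), the snippet below pushes the FE root user's public key to each prospective host; the host names are placeholders.

    import subprocess

    # Placeholder host names; replace with the real FE-reachable host addresses.
    HOSTS = ["host-tcd.example.org", "host-heanet.example.org"]

    for host in HOSTS:
        # Copies the FE root user's default public key into the host's
        # authorized_keys, enabling passwordless SSH from the frontend.
        subprocess.run(["ssh-copy-id", f"root@{host}"], check=True)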
Testbed hardware and software used
The testbed used consists of a StratusLab frontend machine and two hosts. One
host is located within the immediate environment of the FE (including the
P-Disk) machine and is also in the same subnet. These machines are located in the
premises of the School of Computer Science and Statistics in Trinity College
(TCD) Dublin. The Marketplace machine is also in this location. The second host
is located approximately 20 km away in a HEAnet datacentre. Further to these
components, a Juniper router is located in HEAnet offices in central Dublin.
The main hardware components used are listed as follows:
- Frontend machine: a virtual machine (Xen guest), running CentOS 6.4, public IP
address.
- Marketplace machine: a virtual machine (Xen guest), running SL release 6.0.
- StratusLab Host (TCD): Dell XPS 420, Xeon, dual core, 3 GHz, 4 GB RAM.
- StratusLab Host (HEAnet): Dell PowerEdge R420, dual core, 2.2 GHz, 8 GB RAM,
public IP address.
- Juniper router, model MX240 (located in HEAnet premises), running JUNOS
[11.2R1.10], public IP address.
The main software components used are listed as follows:
- OpenNaaS (release 0.12).
- GRE tunnel (in Linux).
- StratusLab version 2.3-0.20120706.
Creation of the use case testbed
An instance of OpenNaaS was installed on the StratusLab frontend (FE) machine.
The target router for this use case was a Juniper router (myre), located in HEAnet
premises. A public key from the FE machine was placed on the router (myre),
allowing OpenNaaS to access this router.
As an initial step, the sequence necessary to create a virtual network from a
machine within TCD to a remote machine was established. This was successfully
carried out using a VM provided by HEAnet. This VM was a temporary substitute
for the remote (bare-metal) machine which later became available.
The TCD VM and the HEAnet router and VM are in two different address spaces
(and different physical locations). Two firewalls also lie between the FE machine
and the HEAnet equipment. One of these is controlled by this research group, the
other is a border router which handles traffic for the entire institute. It is
controlled by TCD’s overall Information Services and Systems group (ISS).
Since it is desirable to use private addresses in the virtual network to be
created, it is necessary to connect the FE machine to the remote router via a
tunnel which passes through both of these firewalls.
For this usecase, a GRE tunnel (Generic Routing Encapsulation) was chosen. This
protocol was chosen because it is compatible with both Linux and JUNOS
operating systems. Such tunnels are created by executing a command on the
machines at either end of the intended tunnel. To facilitate this, it was
necessary to alter firewall rules on both firewalls. This was achieved with the
support of ISS.
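For reference, the Linux end of such a GRE tunnel can be brought up with standard iproute2 commands; the short wrapper below is a sketch with placeholder addresses (it must be run as root, and the JUNOS side, configured via OpenNaaS, is not shown).

    import subprocess

    # Placeholder addresses: the FE machine's public IP, the remote router's public
    # IP, and the private address the FE should have inside the virtual network.
    LOCAL_PUBLIC = "192.0.2.10"
    REMOTE_PUBLIC = "198.51.100.20"
    TUNNEL_ADDR = "10.53.13.1/30"

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # Create the GRE tunnel interface, bring it up, and give it its private address.
    run("ip", "tunnel", "add", "gre1", "mode", "gre",
        "remote", REMOTE_PUBLIC, "local", LOCAL_PUBLIC, "ttl", "255")
    run("ip", "link", "set", "gre1", "up")
    run("ip", "addr", "add", TUNNEL_ADDR, "dev", "gre1")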
Installation of StratusLab
StratusLab was installed on the frontend using the Linux YUM installer. After the
initial RPM installation, configuration files which contain parameters for the
Frontend and Persistent Disk operations are edited. When this is completed, a
final command to ‘install’ StratusLab is called. Preparation steps were also made
on the two ‘host’ machines. However, at this point the required network
connection to both of the intended hosts was not in place.
Creation of the virtual network
The required network was created using OpenNaaS commands and was based on
the ‘trial’ network mentioned earlier. The steps taken are summarised here:
1. Initially, the router (myre) was created (i.e. created in the context of OpenNaaS
and made available to OpenNaaS for further operations).
2. A number of sub-interfaces were then added.
3. A logical (virtual) router was then created within the (real) router, myre.
4. IP addresses were assigned to each of these sub-interfaces. Each of these acts as
an endpoint for a tunnel, or as a connection to another subnet.
5. It was also necessary to create a route on myre which directs all incoming traffic
to the public IP address of the router myre.
6. Next, using OpenNaaS, commands were invoked to set up ‘one half’ of each of the
GRE tunnels which connect the router to the machines. Following this, the
‘second half’ of the tunnel is created on the machines which are connected by the
virtual network created.
The resulting network is illustrated in Figure 2.
Adding the hosts to the StratusLab installation
Following the successful creation of the virtual network, the hosts may now be
added. In this use case, the local host uses a public IP address, while the host
located in HEAnet has a private IP address. Figure 3 shows a screen-grab from
the StratusLab host monitoring web interface, showing the two hosts active.
Figure 3, Hosts available to the StratusLab installation.
It can be seen in Figure 3 that entry 17 is the host located in TCD (using the cs.tcd.ie
domain name). Entry 18 is the remote host, which appears as having a private IP
address. Figure 4 shows how the virtual network fits with the overall layout of
the StratusLab installation.
Addition of further remote hosts
Adding a subsequent remote host would require that a further sub-interface be
added in the logical router within myre in HEAnet. This further sub-interface
would be assigned the IP address 10.53.14.2, and the remote host connected to it
would have the address 10.53.14.1 (or the next available range in that subnet).
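The addressing convention just described could be automated along the lines of the sketch below, which assumes one 10.53.x.0/24 block per remote host with .1 on the host side and .2 on the router sub-interface; the pool layout and the set of already-used blocks are assumptions for illustration.

    import ipaddress

    POOL = ipaddress.ip_network("10.53.0.0/16")
    USED_THIRD_OCTETS = {13, 14}  # blocks already assigned to existing tunnels (assumed)

    def next_host_block():
        """Return (host_address, router_address) for the next free 10.53.x.0/24 block."""
        for subnet in POOL.subnets(new_prefix=24):
            third_octet = int(str(subnet.network_address).split(".")[2])
            if third_octet not in USED_THIRD_OCTETS:
                hosts = list(subnet.hosts())
                return str(hosts[0]), str(hosts[1])  # .1 for the host, .2 for the router
        raise RuntimeError("Address pool exhausted")

    print(next_host_block())  # e.g. ('10.53.0.1', '10.53.0.2')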
Running VMs on the Hosts
Following the successful addition of both hosts to the StratusLab installation,
VMs were started in each of them. Due to the limited bandwidth of the virtual
network created, a small (27 MB) ttylinux image was chosen for the VMs. In
order to force one or other of the hosts (local or remote) to be used, each
host can be individually ‘enabled’ or ‘disabled’ using the StratusLab command
line interface. This ensured that the host chosen for a given VM by StratusLab
could be selected by the user. Figures 4 and 5 show the StratusLab
monitoring webpage while this is taking place.
Figure 4, A VM starting in the (local) TCD StratusLab host.
Figure 5, A VM starting in the remote StratusLab host.
Outcome of VM attempts
After adding both hosts, each was disabled in turn so that the remaining
enabled host would be used. A VM was successfully created in each host. For the
remote host, this requires an iSCSI login to the P-Disk server across the virtual network.
Figure 6 shows the StratusLab monitoring webpage with both VMs running.
Conclusions
OpenNaaS proved to be suitable for the task envisaged in this usecase. It
provided a mechanism to create connecting networks between distributed
computing resources. This network was also extensible, and at one point
included four different machines (the TCD frontend and host, and both the VM
and remote host machine in HEAnet’s premises).
OpenNaaS software installation was trivial, requiring only the expansion of a
TAR file. The ability to execute commands individually or bundled together in
groups within a script made such commands easy to reuse.
Overall assessment
The use case was successful insofar as the additional hosts were added to the
cloud installation, one of which used an OpenNaaS-created network. Once the
virtual network was created and a ‘pingable’ connection established, adding the
remote host to the cloud frontend was identical to adding the local host. Having
added both hosts, a VM was successfully created in each of them. In the case of
the remote host, network traffic for the VM deployment used the OpenNaaS
network. This demonstrates that an OpenNaaS virtual network can be used to
connect elements of a distributed cloud installation.
Usefulness of OpenNaaS in cloud infrastructure
The capacity to create networks in user-determined address spaces allows cloud
administrators to connect resources in a way which may suit their needs.
Where there are a number of cloud installations, OpenNaaS-created
networks may be used to move resources from one installation to another.
OpenNaaS would allow this to happen without changing any underlying network
settings because changes could be carried out in virtual routers and networks
only.
Tunnelling to external routers – the ‘last mile’ problem
This usecase used routers which were located outside of the TCD infrastructure.
However, using a machine within TCD which had a public IP address, and GRE
tunnelling, it was possible to exploit OpenNaaS using a remote router. This setup
offers one solution to what may be called ‘the last mile’ problem. While it may be
possible to create a network of several virtual routers spanning several
international boundaries, the tunnelling technique used in this use case connects
such a network to computers outside of the NREN environment.
UC4 Distributed Virtualized Hosting
Distributed Virtualized Hosting (Private Cloud) scenario
*Initial General Goal*
NORDUnet is focused on design, deployment, and operation of network services
and services that support resource sharing and federation across domains.
Beyond core networking, NORDUnet is involved in a range of projects in
bandwidth on demand, media, storage, middleware, identity management, etc.
The common frame of reference for the majority of these activities is the need to
support global research communities across institutions and countries, and
allow such communities to exploit e-Infrastructures.
Universities today often have their own private cloud, i.e., an internal hosting
provider. It is often desirable to be able to move selected hosts in a private cloud
to a commercial off-site hosting provider.
However, hosts are often parts of larger services with complex
interdependencies. Moving a host may involve assigning a new IP address which
in turn may imply that other parts of the service may need reconfiguration. In
order to avoid this it is desirable to allow the hosting provider access to the same
IP network as the customer.
*Change of plans*
Because NORDUnet has had a very successful and expansive peering
strategy over the last couple of years, the number of routers in the NORDUnet
network has increased significantly since the start of the Mantychore project.
During this time the NORDUnet network has also expanded to several new
locations in Europe and North America where NORDUnet has been able to
connect to other networks through peering connections.
The increased number of routers, the increased number of operations done
regularly by the NORDUnet network operation centre and the increasing number
of new peering connections to keep track of made it clear that operating these
aspects of the network manually was not going to scale for very long. At this time
the Mantychore software (OpenNaaS) was not in a production-ready state, and
NORDUnet started investigating other network management software
solutions.
*New solution*
After looking at different solutions on the market, NORDUnet came to the
conclusion that the software NCS (network configuration software) from Tail-f
was best suited to handle the current and future demands for automation and
configuration in the NORDUnet network.
NCS is primarily used as a frontend to the equipment in the NORDUnet network.
NCS is set up to manage and configure all equipment in the NORDUnet network.
On top of being able to configure all the network equipment from one central
location it is also possible to extend the NCS code and introduce different service
concepts.
*Current implementation*
Currently NCS is deployed in the NORDUnet network and is used to configure
services delivered on top of the network. Primarily new customers and peering
connections are implemented using NCS instead of operators manually
configuring these services in the routers. General service updates and automatic
configuration updates are also handled by the NCS software instead of operators
manually updating configuration in the routers based on changes in the
connected peering partners or customer networks.
*Current development*
NORDUnet is currently also developing, and will soon implement, the possibility
of manually or automatically provisioning VPN connections through the NORDUnet
network. The NCS module for this service implementation will also support the
NSI protocol. Being able to use the NSI protocol will enable other NSI-compatible
solutions to set up connections in a multi-domain network scenario.
*Possible OpenNaaS integration*
As NCS is implemented in the NORDUnet network and has standardised APIs,
there have been discussions about integrating the OpenNaaS software with the
NCS software. The current liaison between i2Cat and Tail-f is helpful in this
regard. Integrating and using OpenNaaS as a web frontend to NCS could make
operation easier for the network operators when configuring network services.
Currently, however, this functionality has not been important enough to
prioritise in the OpenNaaS development.
Further development of the OpenNaaS software could take this into account to
make the software more attractive to network service providers.
*Status of use case*
After the NCS software was implemented in the NORDUnet network it was no
longer as critical to implement OpenNaaS to achieve a more dynamic and
automated service creation in the network. NORDUnet has also not experienced
a high degree of prioritisation from customers to realize the Distributed
Virtualized Hosting scenario as initially indicated. These two developments
during the project led to the current decision not to implement the use case
according to the initial specifications.
NORDUnet will continue to keep track of current market demands, and customer
input helps to a high degree to decide in which directions the network and its
operations are developed. With the implementation of NCS and the availability of
OpenNaaS, NORDUnet considers that the operation of the network, and the
delivery of services over an increasingly complex network, can be achieved in an
easier and more cost-effective way.
UC5 Ultra High Definition Multimedia Environment
The UK Ultra-high Definition (UHD) consortium is a networked infrastructure for the
development and deployment of next generation networked multimedia applications
and services. It consists of five members, namely: University of Bristol, University of
Strathclyde, Digital Design Studio, University of Cardiff and Technium CAST.
The current setup is made up of two networked nodes – Bristol and Cardiff – which
are interconnected via the JANET UK network, and is investigating the development
and deployment of interactive UHD 3D media applications in the medical domain.
3D images of relevant structures such as the brain, heart, lungs and the skeletal
system can be generated in real time by applications developed by the Medical School
in Cardiff. The generated 3D views are dependent on the Region of Interest chosen by
the user. By combining these multimedia innovations with relevant networking
technologies, the consortium aims to deploy distributed applications of this nature.
These applications will be relevant to Education – for the training of students – and to
Healthcare services – for remote surgical operations and remote consultations.
The existing networked environment has been achieved via the JANET Lightpath
network, but there are plans in the future to deploy these applications and services
over the JANET IP network, and this will give Mantychore the opportunity to
contribute. Different end user device capabilities will result in varying network
requirements, and this, coupled with different attachment points (i.e. users connecting
via Bristol, Cardiff, Glasgow or Bangor), will create the need to implement an IP
marketplace which can dynamically abstract the underlying JANET IP infrastructure
and provide users with customised services to satisfy their requirements.
Fig. 1 Use case settings
Users: UHD Consortium Universities including University of Bristol and University
of Cardiff
Service Provider: University of Bristol
Infrastructure Owner: JANET and University of Bristol
Interaction with OpenNaaS
1. Implementation of a test-bed comprising 3 software Juniper routers running
JunOS 10.04
2. Create logical routers
3. Provide capability for user to request/define required interfaces of the logical
routers
4. Pre-configure and allocate IP addresses
   a. Client-facing and provider-facing interfaces are created
   b. Client-facing interface is a VLAN toward the layer 2 transmission system
   c. Provider-facing interface is a logical tunnel within the physical router,
      toward the host router, for onward connectivity
5. Provider-facing interfaces are configured with IP addresses
6. Delegate control of the logical routers to client
7. Client configures interface toward local network
8. Create VLAN sub-interfaces if necessary
9. Configure IPv4 addresses on each VLAN
Manual steps
1. Creation of software router within emulation environment with the required
functionality for Mantychore deployment.
2. Creation of the Physical topology.
3. Development of required interfaces (APIs) for deployment of Mantychore
over the software routers.
4. Creation of virtual network topology consisting of logical routers and links
General goal
Leveraging the flexible, dynamic and fully dedicated virtual IP networks provided by
OpenNaaS, this use case develops and deploys the interactive UHD media
applications via the JANET network. The goal is to enable a working pilot showcase
of the deployment of interactive UHD media applications.
Challenges and mitigation
Due to the present setup of the networked infrastructure via the JANET Lightpath
network, it is not possible at this stage to physically deploy the OpenNaaS tools and
services on JANET infrastructure. Hence, we propose a two-stage implementation of
this use-case scenario:
1. There are two steps for stage one:
1) Implement use-case scenario using software routers (aka Olive)
available in Bristol university laboratory connecting University of
Cardiff via JANET for HD video streaming
2) Implement use-case scenario over Mantychore test bed for UHD video
streaming
2. Persuade JANET to physically deploy OpenNaaS frameworks over
infrastructure; in particular, to support JANET UHD Showcase applications
Lessons learned
Network performance is a very important aspect of any infrastructure, be it
virtual or physical. This is particularly true for the multimedia use case where
bandwidth is a key requirement to meet. To obtain a better understanding of the
performance of the virtual network created by OpenNaaS, the following 3
metrics are measured:
- Delay
- Bandwidth
- Jitter
The delay was tested using the ping tool, by sending 30 ICMP echo requests to the
target host and recording the RTT for all 30 packets along with the average delay.
The bandwidth was tested using a software tool named iperf, with 10 measurements
over a 10-second period. The jitter was also measured while testing the UDP
bandwidth using the iperf tool. Table 1 below shows the averages over 20 runs for
all three metrics on both our physical network and virtual network, as shown in
Fig. 1.
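The delay measurement described above could be scripted roughly as in the sketch below, which assumes a Linux ping whose summary line has the usual "rtt min/avg/max/mdev" form; the target address is a placeholder, and the bandwidth and jitter figures were taken directly from iperf's own report rather than parsed here.

    import re
    import subprocess

    TARGET = "198.51.100.20"  # placeholder for the host at the far end of the virtual network

    def average_rtt(host, count=30):
        """Send `count` ICMP echo requests and return the average RTT in ms."""
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True, check=True).stdout
        # Linux ping prints: "rtt min/avg/max/mdev = 7.701/7.764/7.912/0.063 ms"
        match = re.search(r"= [\d.]+/([\d.]+)/", out)
        return float(match.group(1))

    print(f"average delay to {TARGET}: {average_rtt(TARGET):.3f} ms")
    # Bandwidth and jitter were measured separately, e.g. with
    #   iperf -c <host> -t 10 -i 1          (TCP bandwidth)
    #   iperf -c <host> -u -b 200M -t 10    (UDP bandwidth and jitter)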
                     Latency (ms)   Bandwidth-UDP (Mbps)   Bandwidth-TCP (Mbps)   Jitter (ms)
Virtual network      7.764          143                     139                    0.076
Physical network     7.754          214                     165                    0.062

Table 1: Performance of physical and virtual network
Table 1 shows that the bandwidth of both the virtual and physical networks does not
meet the requirement for transmitting UHD video (at least 250 Mbps). This is
due to the limited data rate supported by the software emulated routers used in
this experiment. Therefore, only HD video streaming is tested on the current
infrastructures.
Summary
Emergence of Ultra High Definition (UHD) is a key driver for future network
research and innovation. Traditional IP infrastructures cannot support large-scale
deployment of UHD applications. This use case successfully demonstrated that
OpenNaaS is a viable solution for enabling the deployment of video on demand
services on IP networks. In particular, OpenNaaS has been shown to be flexible and
dynamic, thus allowing the real-time provisioning of logical and fully isolated IP
networks with customised routing policy and functions over a single underlying
physical infrastructure.
Key Performance Indicators
..to fill..
Perspectives for the future
..to fill..
..keywords: promising future of OpenNaaS, GÉANT does consider a general
service offering on NaaS, ..
Conclusion
..to fill..
-end of document-