Convergence of Wireless Optical Network and IT

Resources in Support of Cloud Services

Small or medium scale focused research project (STREP)

Co-funded by the European Commission within the Seventh Framework Programme

Grant Agreement no. 318514

Strategic objective: The Network of the Future (ICT-2011.1.1)

Start date of project: November 1st, 2012 (36 months duration)

Deliverable 2.3

Overall System Architecture Definition and

Specifications

Version 1.0

Due date:

31/10/2013

Submission date:

11/04/2020

Deliverable leader:

AIT

Author list:

Anna Tzanakaki (AIT), Markos Anastasopoulos (AIT), Konstantinos Georgakilas

(AIT), Eduard Escalona (i2CAT), Jordi Ferrer Riera (i2CAT), Giada Landi (NXW),

Giacomo Bernini (NXW), Nicola Ciulli (NXW), Roberto Monno (NXW),

Alessandro Martucci (NXW), Bijan Rofoee (UNIVBRIS), Shuping Peng

(UNIVBRIS), George Zervas (UNIVBRIS), Reza Nejabati (UNIVBRIS), Dimitra

Simeonidou (UNIVBRIS), Kostas Katsalis (UTH), Thanasis Korakis (UTH),

Leandros Tassiulas (UTH), Dora Christofi (PTL), Georgios Dimosthenous (PTL)


Abstract

This deliverable provides an initial description of the CONTENT architecture considering the specifications imposed by the relevant business models and service requirements identified in

D2.1 and D2.2. This involves both a high-level functional description of the CONTENT architecture and a detailed description of the proposed layered structure. In addition, this deliverable includes the description of a modelling/simulation framework that is being developed to evaluate the proposed architecture and identify planning and operational methodologies to allow global optimization of the integrated converged infrastructure. Some initial modelling results are also presented. Finally, a plan for the CONTENT experimental evaluation is provided.

Keywords

Physical Infrastructure, Infrastructure Management, Network Control, Orchestrated End-to-End

Services, Cross-domain Virtualization, Converged Virtual Infrastructure, Wired/Wireless convergence, End-to-end service orchestration, Energy Efficiency, TSON, LTE, WiFi


Table of Contents

Abstract...................................................................................................................................... 2

Figure Summary ........................................................................................................................ 4

Table Summary .......................................................................................................................... 5

Executive Summary ................................................................................................................... 6

1. Introduction ......................................................................................................................... 7

2. Existing solutions supporting cloud and mobile cloud services ............................................ 8

2.1. Physical Infrastructure (UNIVBRIS, UTH) .................................................................... 8

2.2. Infrastructure management .........................................................................................11

2.3. Service provisioning ....................................................................................................13

3. CONTENT Architecture Requirements and business models (PTL, i2cat, NXW, UTH) ......16

3.1. CONTENT Architectural Requirements .......................................................................16

3.2. Actors and their roles in CONTENT architecture .........................................................18

3.3. CONTENT Business Model ........................................................................................19

4. The CONTENT Architecture...............................................................................................21

4.1. CONTENT vision and architectural approach ..............................................................22

4.2. Physical Infrastructure Layer .......................................................................................31

4.3. Infrastructure management .........................................................................................39

4.4. Virtual Infrastructure Control Layer .............................................................................45

4.5. Converged service orchestration .................................................................................56

5. CONTENT architecture evaluation .....................................................................................59

5.1. Modelling and simulations ...........................................................................................59

5.2. Plan for Experimental evaluation .................................................................................76

6. Conclusions .......................................................................................................................77

7. References ........................................................................................................................78

8. Acronyms ...........................................................................................................................84


Figure Summary

Figure 1: Content infrastructure model ....................................................................................................... 21

Figure 2: Virtualization over heterogeneous network infrastructures.......................................................... 21

Figure 3: The overall CONTENT layered architecture ................................................................................ 23

Figure 4: Cross-layer interactions and actors in the CONTENT architecture ............................................. 24

Figure 5: VI planning sequence diagram, including resource registration phase ....................................... 25

Figure 6: Control Layer deployment and configuration sub-phase sequence diagram .............................. 28

Figure 7: Cloud service operation sub-phase sequence diagram .............................................................. 30

Figure 8: TSON network interconnecting DC and wireless access networks ............................................. 32

Figure 9: Resource abstraction in TSON with extended flexibility .............................................................. 34

Figure 10: NITOS testbed architecture ....................................................................................................... 37

Figure 11: High level architecture of Infrastructure Management layer ...................................................... 40

Figure 12: Wireless Virtualization ............................................................................................................... 44

Figure 13: Converged virtualization approach (Source: [content-d4.1]) ..................................................... 45

Figure 14: High level architecture of Virtual Infrastructure control layer ..................................................... 47

Figure 15: Low-level control functions in TSON metro domain .................................................................. 49

Figure 16: Low-level control functions in the wireless domain ..................................................... 52

Figure 17: Enhanced control functions in TSON metro domain ................................................................. 54

Figure 18: Service Orchestration Layer high-level architecture .................................................................. 57

Figure 19: LTE Bearer Architecture, Source: 3GPP TS 36.300 ................................................................. 60

Figure 20: Multi-Queuing model for the converged Wireless-Optical Network and DC Infrastructure ....... 62

Figure 21: Multi-Layer Converged Wireless Optical and DC infrastructures .............................................. 63

Figure 22: Comparison between various traffic offloading schemes: a) the Cloudlet approach, b) the

CONTENT approach .......................................................................................................................... 67

Figure 23: Energy Consumption for various computation offloading strategies ......................................... 68

Figure 24: Comparison in terms of delay between the proposed architecture and the cloudlet (common delays in wireless access are omitted) ............................................................................................... 69

Figure 25: Impact of traffic load on power consumption for the proposed and the Cloudlet scheme ........ 70

Figure 26: Impact of mobility and traffic load on the total power consumption (m = 5) ................ 70

Figure 27: Impact of resilience of the requested resources [Anastasopoulos12] ....................................... 71

Figure 28: Wireless-Optical-DC Topology .................................................................................................. 72

Figure 29: Network resources savings (optical+wireless capacity units) across mobility factor ρ and demands volume ................................................................................................................................ 74


Figure 30: Excess DC resources (VMs) across mobility factor ρ and demands volume ............................ 75

Figure 31: Iterative development lifecycle ................................................................................................... 76

Table Summary

Table 1: VI planning phase step-wise description ...................................................................................... 27

Table 2: Control Layer deployment and configuration sub-phase step-wise description ........................... 29

Table 3: Cloud service operation sub-phase step-wise description............................................................ 31

Table 4: High-level network functions ......................................................................................................... 56

Table 5: EPS Bearer QoS profiles [ROKE] ................................................................................................. 61

Table 6: Standardized QCI attributes ........................................................................................... 61


Executive Summary

Intended Audience

This deliverable is primarily intended for internal use by the consortium partners and the

European Commission. However, it is a public document (PU) and therefore will be made publicly available through the CONTENT web site.

Scope

This deliverable report provides an initial description of the CONTENT architecture as it has been defined considering the specifications imposed by the relevant business models and service requirements. This involves both a high-level functional description of the CONTENT architecture and a detailed description of the proposed layered structure. The overall

CONTENT architecture has been produced taking as input the material reported in D4.1

“Definition of the virtualization strategy and design of domain specific resource/service virtualization in CONTENT”.

This deliverable also includes the description of a modelling/simulation framework that is being developed with the aim of evaluating the proposed architecture. This framework aims to identify planning and operational methodologies that will allow global optimization of the integrated converged infrastructure with increased functionality, flexibility and scalability, as well as reduced cost and energy consumption. Some initial modelling results are also presented. Finally, a plan for the CONTENT experimental evaluation is provided.

Document Structure

The document is structured as follows. Section 1 provides an introduction to the thematic area that CONTENT fits in, as well as the general concept that CONTENT proposes. Section 2 describes the relevant state of the art. Section 3 provides a summary of the CONTENT architectural requirements and business models as they have been identified and described in detail in D2.1 and D2.2 [D2.1], [D2.2]. Section 4 provides a functional description together with a detailed structural presentation of the CONTENT architecture. This includes the details of the individual layers involved as well as a description of the interaction between the different layers in the form of workflows. Section 5 includes a discussion of the modelling framework that is being developed with the aim of evaluating the CONTENT architecture and proposing optimal ways to plan and operate it, as well as a plan for the CONTENT experimental evaluation. Finally, Section 6 summarises the conclusions.


1. Introduction

As the availability of high-speed Internet access increases at a rapid pace and new demanding applications emerge, distributed computing systems are gaining popularity. Over the past decade, large-scale computer networks supporting both communication and computation have been extensively deployed in accordance with the cloud computing paradigm. Cloud computing facilitates access to computing resources on an on-demand basis, enabling customers to use remote computing resources that they do not have to own. This introduces a new business model and facilitates new opportunities for a variety of business sectors. At the same time it increases sustainability and efficiency in the utilization of the available resources, reducing the associated capital and operational expenditures as well as the overall energy consumption and CO2 footprint. Recently, the concept of mobile cloud computing, where computing power and data storage move away from mobile devices to remote computing resources [DINH], has also been gaining increased attention. It is predicted that cloud computing services are emerging as one of the fastest growing business opportunities for Internet service providers and telecom operators [CISCO-2013], [MUN]. In addition, mobile internet users are expected to outnumber desktop internet users after 2013, introducing a huge increase in mobile data, a big part of which will come from cloud computing applications [MUN].

To effectively enable this emerging business opportunity, there is a need for a converged infrastructure integrating wireless and wired high-capacity optical networks that interconnect IT resources, allowing seamless, orchestrated, on-demand service provisioning across the heterogeneous technology domains. Such a converged infrastructure will reduce capital and operational expenditures, increase efficiency and network performance, mitigate risks, support guaranteed Quality of Service (QoS) and meet the Quality of Experience (QoE) requirements of cloud and mobile cloud services.

To address this need, CONTENT is focusing on a next generation ubiquitous converged network infrastructure. The infrastructure model proposed is based on the Infrastructure as a

Service (IaaS) paradigm and aims at providing a technology platform interconnecting geographically distributed computational resources that can support a variety of cloud and mobile cloud services. The proposed architecture addresses the diverse bandwidth requirements of future cloud services by integrating advanced optical network technologies offering fine (sub-wavelength) switching granularity with a state-of-the-art wireless access network based on hybrid Long Term Evolution (LTE) and WiFi technology, supporting end user mobility. To enable sharing of the physical resources and support the IaaS paradigm as well as the diverse and deterministic QoS needs of future cloud and mobile cloud services, the concept of virtualization across the technology domains is adopted.

This deliverable report provides an initial description of the CONTENT architecture considering the specifications imposed by the identified relevant business models and service requirements.

This involves both a high-level functional description of the CONTENT architecture and a detailed description of the proposed layered structure. In addition, this deliverable includes the description of a modelling/simulation framework that is being developed to evaluate the proposed architecture and identify planning and operational methodologies to allow global


optimization of the integrated converged infrastructure. Some initial modelling results are also presented. Finally, a plan for the CONTENT experimental evaluation is provided.

The remainder of this document is structured as follows: Section 2 describes the relevant state of the art. Section 3 provides a summary of the CONTENT architectural requirements and business models as they have been identified and described in detail in D2.1 and D2.2 [D2.1],

[D2.2]. Section 4 provides a functional description together with a detailed structural presentation of the CONTENT architecture. This includes the details of the individual layers involved as well as a description of the interaction between the different layers. Section 5 includes a discussion of the modelling framework that is being developed with the aim of evaluating the CONTENT architecture and proposing optimal ways to plan and operate it, as well as a plan for the CONTENT experimental evaluation. Finally, Section 6 summarises the conclusions.

2. Existing solutions supporting cloud and mobile cloud services

Existing mobile cloud computing solutions allow mobile devices to access the required resources through a nearby resource-rich cloudlet, rather than relying on a distant “cloud” [SATAYAN]. In order to satisfy the low-latency requirements of several content-rich mobile cloud computing services, such as high-definition video streaming, online gaming and real-time language translation [MUN], one-hop, high-bandwidth wireless access to the cloudlet is required. In the case where a cloudlet is not available nearby, traffic is offloaded to a distant cloud such as Amazon’s Private Cloud, GoGrid [GOGRID] or FlexiScale [FLEXISCALE].

However, the lack of service differentiation mechanisms for mobile and fixed cloud traffic across the various network segments involved, the varying degrees of latency at each technology domain and the lack of global optimization tools in the infrastructure management and service provisioning make the current solutions inefficient.
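The offloading decision described above — use a nearby cloudlet if its latency meets the service requirement, otherwise fall back to a distant cloud — can be sketched as follows. This is our own illustration, not a mechanism from the deliverable; the site names and latency figures are hypothetical.

```python
# Illustrative latency-aware offloading decision for the cloudlet model.
# Site names, RTT values and the latency budget are hypothetical.

def choose_offload_target(sites, latency_budget_ms):
    """Pick the lowest-latency cloudlet that meets the budget; otherwise
    fall back to the distant cloud (assumed always reachable)."""
    feasible = [s for s in sites if s["rtt_ms"] <= latency_budget_ms]
    if feasible:
        return min(feasible, key=lambda s: s["rtt_ms"])["name"]
    return "distant-cloud"

sites = [
    {"name": "cloudlet-A", "rtt_ms": 8.0},   # one-hop WiFi cloudlet
    {"name": "cloudlet-B", "rtt_ms": 25.0},  # cloudlet reachable via LTE
]
print(choose_offload_target(sites, 10.0))   # cloudlet-A meets a 10 ms budget
print(choose_offload_target(sites, 5.0))    # none feasible -> distant cloud
```

A real system would of course also weigh load, energy and cost, which is precisely where the global optimization tools noted above as missing would come in.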

In addition, research activities such as the EU IP project Mobile Cloud Networking [MCN] concentrate on offering mobile cloud services by taking advantage of the convergence of mobile communications and cloud computing enabled by the Internet. MCN proposes the use of micro- and macro-data centres (DCs). According to MCN, macro-DCs are standard large-scale computing farms deployed and operated at strategically selected locations. Micro-DCs are medium- to small-scale deployments of server clusters across a certain geographic area, for instance covering a city or a rural area, deployed as part of a mobile network infrastructure.

According to MCN, the provider exploits the MCN architecture to compose and operate a virtual end-to-end infrastructure and platform layer on top of a set of fragmented physical infrastructure pieces provided by different mobile network and data centre owners/operators. It thus provides a differentiated end-to-end MCN service (mobile network + compute + storage) that is not limited to a certain geographic area.

2.1. Physical Infrastructure (UNIVBRIS, UTH)

2.1.1. Optical infrastructure for Cloud Computing

Recent advancements in optical networking technologies can provide flexible, efficient, ultra-high data rate and ultra-low latency communications for data centres and


cloud networks. Regarding optical transmission solutions offering high data rate and low latency, several field trial deployments of 400 Gb/s channels have been reported [Alcatel13], while research on 1 Tb/s is already in progress [Gerstel12], [IEEE802_3]. With regard to flexible optical networks, technologies such as elastic grid, multi-mode, multi-tone, multi-core, and Photonic Integrated Circuit (PIC) based systems are under extensive research and trials, opening up new dimensions for higher data rate transmission and petabit data rates by exploiting resources in different dimensions (time, space, spectrum, modulation format, etc.).

FlexiGrid and/or gridless technologies (providing flexible spectrum spacing without being bound to the ITU-T defined DWDM grid) facilitate super-channels [Jinno12], [Juniper12], which are key enablers for edge networks directing vast amounts of data between data centres. Optical multi-core networks are also attracting increasing interest because of the exponential capacity they introduce to optical networks [Amaya13]. Moreover, finer-granularity optical networking technologies are of high interest, as they enable providers and users to use network resources more efficiently. Optical OFDM [Jinno09], optical packet switching [Intune11], and optical burst switching [zervas11] are examples of such technologies. These advanced optical network technologies provide the flexibility and elasticity required by the uncertain, diverse and dynamic application demands in data centres, addressing the variability in resource and bandwidth requirements. They can therefore make optical networks better adapted to cloud environments.

Time Shared Optical Networking (TSON) [zervas11], a dynamic and bandwidth-flexible sub-lambda networking solution, has been introduced to enable efficient and flexible optical communications by time-multiplexing several sub-wavelength connections and fitting them onto a smaller number of wavelengths than would be used in WDM networks. TSON allocates time slices over wavelengths just sufficient to fulfil the bandwidth requirements of each request, and statistically multiplexes them so that each wavelength carries data for as many requests as possible. TSON leaves electronic processing and burst generation to the edges of the network and transfers the data of several users transparently, in the form of optical bursts, through the core between TSON edge routers; apart from using optical resources efficiently, this approach can also be highly power efficient.
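The statistical multiplexing idea above — packing sub-wavelength requests onto as few wavelengths as possible — can be sketched with a simple first-fit allocator. This is our own illustration of the packing principle, not TSON's actual allocation algorithm; the frame size and demands are hypothetical.

```python
# First-fit packing of sub-wavelength time-slot requests onto wavelengths.
# SLOTS_PER_WAVELENGTH is a hypothetical frame size, not a TSON parameter.

SLOTS_PER_WAVELENGTH = 10

def allocate(requests_slots):
    """Return a list of wavelengths, each a list of (request_id, slots)."""
    wavelengths = []
    for req_id, slots in requests_slots:
        for wl in wavelengths:
            used = sum(s for _, s in wl)
            if used + slots <= SLOTS_PER_WAVELENGTH:
                wl.append((req_id, slots))  # fits on an existing wavelength
                break
        else:
            wavelengths.append([(req_id, slots)])  # light a new wavelength
    return wavelengths

demands = [("r1", 4), ("r2", 3), ("r3", 5), ("r4", 2)]
plan = allocate(demands)
print(len(plan))  # 14 slot units fit on 2 wavelengths instead of 4
```

In a pure WDM network each of the four demands would occupy a full wavelength; time-sharing reduces this to two here, which is the efficiency gain the paragraph describes.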

In order to support the multi-tenancy of the cloud infrastructure, optical network virtualization becomes a key technology enabler. It is crucial to the network infrastructure as it enables network operators to generate multiple coexisting but isolated virtual optical networks (VONs) running over the same physical infrastructure, which operate concurrently and independently [Peng11]. Optical network virtualization generally adopts the concepts of abstraction, partitioning, and aggregation over node and link resources to realise a logical representation of networks over the physical resources. Through resource abstraction, operators are able to extract the capabilities of the underlying technologies and use them for planning the partitioning and aggregation. Partitioning and aggregation of the resources correspond to slicing the pool of network resources into entities that can be managed and controlled independently [Nejabati11], [Geysers-d22]. A detailed study of state-of-the-art optical networking technologies is provided in D4.1, Appendix A.


TSON, with its fine granularity and central resource allocation and control, greatly facilitates the composition and management of virtual optical networks and their integration with network slices across other technology domains. Interconnecting wireless networks with data centres, TSON can efficiently aggregate the traffic of multiple wireless nodes and transfer it to the data centres, and vice versa. The fine granularity supported in TSON enables close resource matching with wireless networks, hence creating highly customisable and resource-efficient end-to-end bandwidth-sliced networks.

The research community and industry have invested heavily in the virtualisation of existing technologies, and also in new technologies that can inherently provide virtualisation capabilities [Jinno13]. Virtualisation of optical networks is one of the main enablers for software-defined infrastructures and networks, in which, independently of the underlying technologies, operators can provide a vast array of innovative services and applications at a lower cost to the end users [Heavy12]. Nonetheless, optical network virtualization is still at an early stage.

Compared to optical network virtualization, IT/server virtualization has reached the commercial stage, e.g. VMware vSphere. Most importantly, in order to better serve the dynamic requirements arising from the IT/data centre side, coordinated virtualization of both the optical network and the IT resources in the data centres is desired. How to jointly allocate the two types of resources and achieve optimal end-to-end infrastructure services has been investigated in the literature, but the reported work concentrates mostly on optical infrastructures supporting wavelength switching granularity [Tzanakaki-O-13], [Tzanakaki14]. Solutions addressing optical network technologies with sub-wavelength granularity, such as the approach proposed by CONTENT, are still in their early stages [Tzanakaki-A-13].
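The joint network/IT allocation problem mentioned above can be illustrated with a toy greedy placement rule: for each service request, pick a candidate (data centre plus path) whose residual network slots and VMs both suffice, preferring the tightest fit. The candidates, capacities and cost rule are invented for illustration; the literature cited above uses far richer optimization models.

```python
# Toy joint allocation of network (sub-wavelength slots on a path) and
# IT (VMs in a data centre) resources. All names and numbers hypothetical.

def place(request, candidates):
    """request = (slots_needed, vms_needed); candidates = list of dicts
    with residual 'slots' (path) and 'vms' (DC). Returns the name of a
    feasible candidate with the least spare capacity, or None."""
    slots_needed, vms_needed = request
    feasible = [c for c in candidates
                if c["slots"] >= slots_needed and c["vms"] >= vms_needed]
    if not feasible:
        return None
    best = min(feasible,
               key=lambda c: (c["slots"] - slots_needed) + (c["vms"] - vms_needed))
    return best["name"]

candidates = [
    {"name": "dc-metro", "slots": 6, "vms": 4},    # close, small DC
    {"name": "dc-core",  "slots": 20, "vms": 50},  # remote, large DC
]
print(place((4, 3), candidates))   # tight fit -> the metro DC
print(place((10, 3), candidates))  # only the core DC has enough slots
```

The point of treating the two resource types jointly is visible even here: a network-only view would always pick the path with most slots, while the joint rule keeps the large DC free for requests that actually need it.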

2.1.2. Wireless technologies in support of cloud networking

High-speed wireless access connectivity is provided by three prominent technologies: cellular

LTE networks, WiMAX and WiFi. These technologies vary across a number of distinct dimensions [Lehr04], including the part of the spectrum they use, the antenna characteristics, the encoding at the physical layer, the sharing of the available spectrum by multiple users and, of course, the maximum bit rate and reach. Because of the boom in smartphone usage and social media services, cellular operators are now looking for better ways to offload this extreme data demand from their cellular networks. On one hand, femtocells seem promising because the spectrum can be reused more frequently over a smaller geographical region, as small as a house, with easy access to the network backbone. On the other hand,

WiFi networks are readily available in most homes and are easy to install and manage

[RFIC2013]. The CONTENT architecture relies on a converged 802.11 and 4G Long Term Evolution (LTE) access network to support cloud computing services.

Convergence of Wi-Fi with 3G/4G cellular networks answers questions such as what happens when users switch networks mid-session and which networks different applications should have access to. Wi-Fi can provide a capacity boost through a significant amount of radio spectrum that is separate from expensive cellular spectrum [con2011].


In general, virtualization and slicing of the wireless domain can take place at the physical layer, the data-link layer (with virtual MAC addressing schemes and open-source driver manipulation) or the network layer (VLAN, VPN, label switching). In [Bhanage10] the SplitAP architecture is proposed, allowing a single wireless access point to emulate multiple virtual access points. Clients from different networks associate with the corresponding virtual APs while sharing the same underlying hardware. The approach of creating multiple virtual wireless networks through one physical wireless LAN interface, so that each virtual machine has its own wireless network, is proposed in [Aljabari10]. In [Kokku10b] CellSlice is proposed, which focuses on deployments with shared-spectrum RAN sharing. The design of CellSlice is oblivious to the particular cellular technology and is equally applicable to LTE, LTE-Advanced and WiMAX. CellSlice adopts the design of NVS for the downlink but indirectly constrains the uplink scheduler's decisions using a simple feedback-based adaptation algorithm.
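A feedback-based uplink constraint of the kind attributed to CellSlice above can be sketched as follows. The controller cannot schedule the uplink directly, so it nudges a per-slice rate cap toward the slice's reserved share based on the measured share. The gain, shares and update rule here are our own simplification, not CellSlice's actual algorithm.

```python
# One-variable feedback adaptation of a per-slice uplink rate cap.
# reserved_share, gain and the measured samples are hypothetical.

def adapt_cap(cap, measured_share, reserved_share, gain=0.5):
    """One feedback step: shrink the cap when the slice exceeds its
    reservation, relax it when the slice is under-served."""
    error = reserved_share - measured_share  # negative when over-using
    return max(0.0, cap * (1.0 + gain * error))

cap = 1.0  # normalized rate cap, initially unconstrained
for measured in [0.6, 0.55, 0.5, 0.45]:  # slice reserved 40% but using more
    cap = adapt_cap(cap, measured, reserved_share=0.4)
print(round(cap, 3))  # the cap has been tightened over the iterations
```

Indirect control of this kind is what makes the approach technology-agnostic: only the measured per-slice throughput is needed, not access to the base station's internal scheduler.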

A review on the 802.11 and LTE technologies is available in D4.1 [CONTENT-D4.1] where the state-of-the-art research on wireless provider virtualization per technology is presented.

2.2. Infrastructure management

Heterogeneous infrastructure management has already been addressed in several European projects, as well as in several commercial systems. Traditionally, infrastructure management has been vertically separated; i.e. each technological segment had its own management system. Thus, the management of the different essential operation components (e.g. policies, processes, or equipment) for overall effectiveness was performed on a per-domain basis. There are clearly differentiated network management systems and cloud management systems.

Network management has been addressed following two approaches depending on the context and requirements of the network owners. On the one hand, centralized management assumes the existence of a single system that controls the whole network of elements, each of them running a local management agent. On the other hand, distributed management approaches introduce the concept of management hierarchies, where the central manager delegates part of the management load among different managers, each responsible for a segment of the network.

Most network infrastructure management systems are proprietary, since there are no common interfaces at the physical level. Each management solution depends on the hardware device vendor. However, the ISO Telecommunications Management Network defines the FCAPS model [Castelli02], which is the global model and framework for network management. FCAPS is composed of fault, configuration, accounting, performance monitoring/management, and security components. These are the management categories into which the ISO model divides network management tasks:

• Fault Management identifies and isolates network issues, proposes problem resolutions, and subsequently logs the issues and the associated resolutions.

• Configuration Management monitors network and system configuration information so that the impact on network operations can be tracked and managed.

• Accounting Management measures network utilization parameters so that individual or group users on a network can be regulated, billed, or charged.

• Performance Management measures and makes network performance data available so that performance can be maintained at acceptable thresholds.

• Security Management controls access to network resources as established by organizational security guidelines.
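The five FCAPS categories can be illustrated with a minimal event-routing table that maps incoming management events to the category responsible for them. The event names and the routing rule are our own invention for illustration.

```python
# Minimal sketch: routing management events to FCAPS categories.
# Event-type names are hypothetical examples, not from any standard.

FCAPS = {
    "fault":         ["link_down", "device_unreachable"],
    "configuration": ["config_change"],
    "accounting":    ["usage_report"],
    "performance":   ["threshold_crossed"],
    "security":      ["auth_failure"],
}

def categorise(event):
    """Return the FCAPS category responsible for a given event type."""
    for category, events in FCAPS.items():
        if event in events:
            return category
    return "unclassified"

print(categorise("link_down"))          # handled by fault management
print(categorise("threshold_crossed"))  # handled by performance management
```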

Apart from the general network management model, there are some standard network management protocols, the most relevant being SNMP and NETCONF. SNMP is standardized by the IETF in RFCs 1157 [RFC1157], 3416 [RFC3416], and 3414 [RFC3414]. NETCONF is also an IETF network management protocol, defined in RFC 4741 [RFC4741] and later revised in RFC 6241 [RFC6241]. Its specification mainly provides mechanisms to install, manipulate, and delete the configuration of network devices.

From the research perspective, there have also been solutions for network management focusing on specific devices. The GEYSERS project [GEYSERS] built the Logical Infrastructure Composition Layer (LICL) [LICL], a software middleware responsible for the planning and allocation of virtual infrastructures composed of virtualized network and IT resources. On the one hand, the LICL is responsible for resource abstraction, resource publishing and the creation of virtual resources; on the other hand, it also deals with VI creation and management. The abstraction layer of the LICL is the key component for unified network management, since it provides a common model for devices from multiple vendors. It contains a set of specific drivers for each vendor, which translate the operations to each vendor-dependent interface, and offers a common interface towards the upper elements of the GEYSERS architecture.

Nowadays, with the emergence of the Software-Defined Networking paradigm, which aims at fully decoupling the control plane from the data plane, the need for unified southbound interfaces, and thus for an abstraction layer to simplify network operation and management, is growing rapidly. The OpenNaaS [OpenNaaS] framework, as presented in the CONTENT deliverable D4.1 [content-d41], provides a common lightweight abstracted model at the infrastructure level, which then enables resource management in a vendor-independent manner. At the same time, the OpenDaylight [OpenDaylight] platform aims at providing a software abstraction layer over the infrastructure resources, containing unified models that can then be managed by the corresponding elements in the architecture.

In wireless networks, significant management challenges exist because of the system complexity and the inter-dependent factors that affect wireless network behaviour. These factors include traffic flows, network topologies, network protocols, hardware, software and, most importantly, the interactions among them [Li2008]. In addition, due to the high variability and dependency on environmental conditions, how to effectively measure wireless interference and incorporate it into network management remains an open problem, since, unlike in wireline networks, over-provisioning is often not a viable solution in wireless networks. This is due to the limited wireless spectrum and the effects of wireless interference.

A systematic management framework consists of measurement, modelling, and control, where flexible network models are used to estimate normal network behaviour and perform what-if analysis, and effective control strategies are used to perform actions like channel assignment, routing and power control. Various tools exist to monitor and control performance metrics like bit error rate and bandwidth, or to monitor the link status in the wireless domain. Examples of such tools are NetStumbler, iw (Linux), Wi-Spy (spectrum analyzer), Wireless Network Watcher, Wireshark (packet analyzer), wavemon, and Nagios (SNMP-based). We also reference the OMF control and management framework, widely used in wireless testbed environments, and the accompanying OML measurement framework. In practice, a combination of tools and frameworks is required to effectively manage and control a wireless network.

The management of the IT segment of the infrastructure is typically performed using modern platforms based on Web Services (e.g. REST APIs). The OGF Open Cloud Computing Interface (OCCI) [OCCI] comprises a set of open, community-led specifications delivered through the OGF. OCCI is a protocol and API definition for all kinds of cloud-related management tasks. It has evolved into a flexible API with a strong focus on integration, portability, interoperability and innovation, while still offering a high degree of extensibility. Most well-known CMSs implement an OCCI-compliant API (e.g. OpenStack, OpenNebula, or CloudStack).
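OCCI's text rendering conveys resource types through `Category` HTTP headers and instance attributes through `X-OCCI-Attribute` headers. As a rough illustration, a request to create a compute resource might carry headers like the ones built below; the scheme URI follows the OCCI infrastructure specification, but the attribute list and any endpoint handling are deliberately simplified:

```python
def occi_compute_headers(cores: int, memory_gb: float) -> dict:
    """Sketch of HTTP headers for an OCCI 'create compute' request
    using the text/occi rendering (simplified for illustration)."""
    return {
        # Kind of the resource being created, per the OCCI infrastructure scheme
        "Category": 'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"',
        # Instance attributes: number of cores and memory in gigabytes
        "X-OCCI-Attribute": f"occi.compute.cores={cores}, occi.compute.memory={memory_gb}",
        "Content-Type": "text/occi",
    }

hdrs = occi_compute_headers(cores=2, memory_gb=4.0)
```

In a real deployment these headers would accompany a POST to the compute collection URL of an OCCI-compliant CMS.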

2.3. Service provisioning

The provisioning and orchestration of IT resources (computing and storage) located in geographically distributed data centres, seamlessly integrated with inter-DC networking in support of multi-cloud services, have been addressed in several European projects. The research has investigated and proposed technical solutions for a variety of scenarios, focusing on several aspects of the problem, from multi-layer architectures enabling the inter-cooperation between cloud and network domains, up to the procedures, protocols and interfaces allowing integrated workflows to support the delivery and operation of joint cloud and network services.

The FP7 GEYSERS project [GEYSERS] has developed a framework for on-demand provisioning of inter-DC connectivity services, specialized for cloud requirements, over virtual optical infrastructures. The cloud service orchestration is managed by a service middleware that interacts with a GMPLS and PCE-based multi-domain control plane operating the virtual infrastructure, based on WSON resources, inter-connecting the data centres. Enhanced cloud-oriented services are supported at the control plane for the computation of connectivity quotations and the joint selection of IT end-points and network paths, through extensions of the PCE functionalities and protocols. A RESTful interface, called NIPS UNI (Network + IT Provisioning Service User-to-Network Interface), has been proposed to allow the interaction between the service middleware and the network control plane. This interface supports dynamic requests for network service setup and tear-down, bi-directional exchange of information on data centre capabilities on one side and network service quotations and prices on the other, as well as monitoring functions to communicate the status and performance of network and cloud services. Following similar inter-layer approaches, some IETF drafts [CSO-ID] have proposed cross-stratum solutions for the cooperation between application (service) and network layers in path computation for inter-DC network services, potentially combined with stateful PCE [STATE-PCE-ID] mechanisms.


The FP7 SAIL project [SAIL] has proposed solutions for on-demand management and control of computing, storage and connectivity resources in single and multi-provider scenarios, focusing on mechanisms to allow network resources to be provisioned and reconfigured in timeframes compatible with cloud service requirements. The Cloud Networking (CloNe) architecture is organized in four layers, with the lowest one responsible for resource virtualization, two layers (called intra-provider and inter-provider layers) handling the operation of the virtual infrastructures within a single provider domain and across multiple domains respectively, and finally a service layer managing the delivery of the service. SAIL has also proposed an interface, the Open Cloud Networking Interface (OCNI) [SAIL-D52], that extends the Open Cloud Computing Interface [OCCI] to introduce the specification of networking services into the APIs used for cloud service descriptions and requests.

The FP7 BonFIRE [BONFIRE] project provides a multi-site cloud facility for experimenters in the Internet of Services (IoS) community, delivering cloud resources in the IaaS model. BonFIRE experimenters can request dedicated Bandwidth-on-Demand (BoD) services to interconnect virtual machines placed in different sites of the BonFIRE test-bed. This function is enabled through an extension of the OCCI APIs, with the definition of a new resource (the site-link) describing the QoS required for a given inter-site network connection. The BonFIRE system is able to guarantee such a connection through federation with an external facility (AutoBAHN [AUTOBAHN]) that offers the provisioning of BoD services between two BonFIRE sites. It should be noted that BonFIRE experimenters have full knowledge of cloud resource availability in the different sites and manually select the sites in which to deploy their VMs and the associated inter-connections. The lack of automated procedures prevents any automated re-optimization or dynamic modification of the inter-site connectivity.

In terms of intra-DC resource provisioning, several DC/cloud management frameworks are currently available, like CloudStack [CLOUDSTACK] or OpenStack [OPENSTACK]. They usually implement dedicated components to provide Network as a Service (NaaS) functionalities. For example, in OpenStack the Neutron [NEUTRON] module is in charge of the management and provisioning of network resources, and interacts with the underlying infrastructure through technology-specific plugins that configure the devices. Several plugins are currently included in the OpenStack release, including plugins for SDN controllers like Floodlight [FLOODLIGHT], Ryu [RYU] or Trema [TREMA]. However, it should be noted that the networking Application Programming Interfaces (APIs) exposed by the cloud management frameworks mainly allow the configuration of the internal network of the data centres, while the integration of the inter-DC connectivity requires further extensions.
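For illustration, a network-creation call to Neutron's REST API carries a JSON body of the following shape; authentication tokens and the endpoint URL are omitted, and the network name below is invented:

```python
import json

def make_network_payload(name: str, shared: bool = False) -> str:
    """JSON body for a network-creation request to the OpenStack Neutron
    networking API (POST /v2.0/networks). Only a minimal attribute set is
    shown; Neutron accepts many more optional fields."""
    return json.dumps({
        "network": {
            "name": name,             # human-readable network name
            "admin_state_up": True,   # bring the network up on creation
            "shared": shared,         # visible to other tenants or not
        }
    })

body = make_network_payload("dc-internal")
```

The body would be POSTed with an `X-Auth-Token` header obtained from Keystone; the response echoes the created network object including its generated UUID.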

In terms of infrastructure service provisioning, following the NaaS paradigm, there is the OpenNaaS framework [OPENNAAS]. It is an open-source framework that provides tools to manage the different resources present in any network infrastructure, enabling the different stakeholders to contribute to and benefit from a common software-oriented stack for both applications and services. The architecture is structured in three horizontal layers. The upper layer, where the infrastructure intelligence resides, is responsible for integration with the northbound middleware and therefore provides on-demand service provisioning. The modularity and flexibility of the platform enables third-party developers to create their own applications within this upper level. Currently, OpenNaaS supports on-demand provisioning of both network and cloud services. Furthermore, the platform contains plug-ins towards both the GEYSERS system and the OpenStack cloud management framework.

Emerging cloud applications such as real-time data backup, remote desktop, server clustering, etc. require increasing amounts of traffic to be delivered between data centres. Ethernet remains the most widely used technology in the data centre space, but the end-to-end provisioning of Ethernet services between remote data centre nodes poses a big challenge. The STRAUSS project [STRAUSS] aims to define a highly efficient and global (multi-domain) optical infrastructure for Ethernet transport, covering heterogeneous transport (OPS, Elastic Optical Network) and network control plane technologies (GMPLS, OpenFlow), enabling an Ethernet ecosystem. Central to this capability is the SDN-based service and network orchestration layer. The proposed layer a) is aware of the existing background and practices and b) applies new Software Defined Networking principles to enable cost reduction, innovation and reduced time to market for new services, while covering multi-domain and multi-technology path/packet networks. This layer provides network-wide, centralized orchestration: a high-level, logically centralized entity that sits on top of and across the different network domains and is able to drive the provisioning (and recovery) of connectivity across heterogeneous networks, dynamically and in real time.

The LIGHTNESS project [LIGHTNESS] focuses on the design, implementation and experimental evaluation of high-performance data centre interconnects through the introduction of innovative photonic switching and transmission inside data centres. Harnessing the power of optics will enable data centres to effectively cope with the unprecedented demand growth to be faced in the near future, driven by the increasing popularity of computing and storage server-side applications. The LIGHTNESS project will work towards the demonstration of a high-performance all-optical hybrid data plane for data centre networks, combining OCS and OPS equipment to implement transport services tailored to specific applications' throughput and latency requirements, with Top-of-the-Rack (TOR) switches seamlessly connecting the servers in each rack to the hybrid OCS/OPS inter-cluster network. In LIGHTNESS, the OCS/OPS inter-cluster network will be empowered with a control plane entity able to dynamically provision high-bandwidth connectivity services in the LIGHTNESS intra-DC scenario, involving OPS, OCS and hybrid OPS/OCS TOR switches. A flexible cloud-to-network interface for the provisioning of dynamic and on-demand connectivity services in the data centre network, as well as south-bound interfaces letting the LIGHTNESS controllers monitor, provision and configure the OPS, OCS and hybrid TOR switches, will be designed, prototyped and validated.

In the CONTENT vision, the cloud resources offered by individual data centres must be orchestrated together and integrated with the inter-DC connectivity in order to deliver consistent, QoS-guaranteed end-to-end cloud services to fixed and mobile customers. One of the main requirements for effective cloud service orchestration is the definition of information models and languages able to describe complex cloud application and service environments, with the specification of their service components, the relationships and dependencies among them, and the rules regulating the evolution of the service at runtime. The Topology and Orchestration Specification for Cloud Applications (TOSCA [TOSCA]), an OASIS standard, provides meta-models for defining IT services. In particular, it specifies both a Topology Template to define the structure of cloud services and Plans to define the process models to create, terminate and manage services during their lifetime. The first version of the TOSCA specification was released in July 2013, and a public demonstration of multi-vendor interoperability is planned for October 2013, during the 2013 International Cloud Symposium. An alternative orchestration language is AWS CloudFormation [CLOUD-FORMATION], currently deployed by Amazon and supported by several orchestration systems (e.g. Heat [HEAT] for OpenStack and Stackmate [STACKMATE] for CloudStack). The CloudFormation model defines a template representing the set of cloud resources to be orchestrated and is based on concepts like resources, mappings, parameters, properties, and outputs. The language is easily extensible through the definition of custom resources.
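A minimal CloudFormation-style template, illustrating the resources, parameters and outputs concepts mentioned above, might look like the following sketch; the stack content (names, defaults) is invented for illustration, while the resource type `AWS::EC2::Instance` and the `Ref` intrinsic are part of the real template language:

```python
import json

# Illustrative template: one parameterized EC2 instance plus an output
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        # Caller-overridable value, referenced below via "Ref"
        "InstanceTypeParam": {"Type": "String", "Default": "m1.small"},
    },
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"InstanceType": {"Ref": "InstanceTypeParam"}},
        }
    },
    "Outputs": {
        # Exposes the created instance's logical reference after stack creation
        "ServerId": {"Value": {"Ref": "WebServer"}},
    },
}

doc = json.dumps(template, indent=2)
```

Orchestration engines such as Heat accept templates of this general shape, resolving `Ref` links between parameters, resources and outputs at stack-creation time.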

3. CONTENT Architecture Requirements and Business Models

(PTL, i2cat, NXW, UTH)

3.1. CONTENT Architectural Requirements

According to [CONTENT-D2.1], the CONTENT requirements have been classified into four categories:

• High Level Business Requirements

• Service Requirements

• Integrated Services Requirements

• Physical Infrastructure Requirements

3.1.1. High Level Business Requirements

All the high level business requirements addressed in D2.1 [CONTENT-D2.1] have been considered necessary. The business requirements state that the CONTENT framework should support cloud services, provide automation support in processes which involve different stakeholders, be cost and energy efficient, and be easy to deploy over existing wireless and optical network technologies. Further to that, the CONTENT solution should provide incentives to the stakeholders to continue deploying the solution, it should assure availability and reliability, and it ought to assure a return on investment (ROI) to each stakeholder. Finally, an SLA should describe the business relationships and agreements between the CONTENT stakeholders.

3.1.2. Service Requirements

The Service Requirements, described in detail in D2.1, involve the requirements of the CONTENT Service Level Agreement (SLA). Since the CONTENT solution involves multiple stakeholders, the CONTENT SLA should be multilevel. The SLA should provide a description of each stakeholder and define the services, the service duration and the service requirements that will be provided to each stakeholder. The SLA should describe the actions to be taken in case of contract violation and describe the business relationships and agreements between CONTENT and the stakeholders.

The SLA should define QoS parameters such as bandwidth, delay and jitter; it should guarantee the delivery of real-time services to the network providers, and provide a minimum/maximum QoS level that can be offered to the CONTENT Provider.
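Such QoS bounds can be captured as a machine-checkable profile. The sketch below, whose parameter names and thresholds are illustrative rather than taken from the CONTENT SLA, shows a compliance check of measured values against SLA limits:

```python
from dataclasses import dataclass

@dataclass
class QoSProfile:
    """Illustrative SLA profile: minimum bandwidth plus delay/jitter ceilings."""
    min_bandwidth_mbps: float
    max_delay_ms: float
    max_jitter_ms: float

def meets_sla(profile: QoSProfile, bandwidth_mbps: float,
              delay_ms: float, jitter_ms: float) -> bool:
    """True if the measured values satisfy every bound of the profile."""
    return (bandwidth_mbps >= profile.min_bandwidth_mbps
            and delay_ms <= profile.max_delay_ms
            and jitter_ms <= profile.max_jitter_ms)

# Hypothetical "gold" service tier
gold = QoSProfile(min_bandwidth_mbps=10.0, max_delay_ms=50.0, max_jitter_ms=5.0)
```

A monitoring component could evaluate `meets_sla` periodically against measurements and raise the SLA-violation notifications the requirements call for.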

3.1.3. Integrated Network Service Requirements

According to the integrated network service requirements, the CONTENT cloud services should be accessible through a mobile device supporting Wi-Fi and/or LTE technology. The CONTENT system should support end-users' intra-technology and inter-technology handovers, provide end-to-end network connectivity and guarantee the efficient operation of the overall infrastructure across the different network domains (wireless access and TSON-based metro). Furthermore, the CONTENT system should support the provisioning of per-user network services compliant with the profiles defined in the SLAs, provide mechanisms for network service monitoring in order to verify the SLAs, and provide procedures for end-to-end service resilience. End-user authentication and service access authorization should be supported, and end-users should be able to access the full combination of cloud and network services using the same set of credentials. The CONTENT system should be able to provide the mobile user with end-to-end network services characterized by a minimum required bandwidth, a maximum allowed delay, jitter and packet loss rate, and consistent levels of QoS guarantees in case of intra-technology and inter-technology handovers. Finally, the CONTENT system should be able to provide the mobile user with end-to-end network services characterized by different levels of QoS guarantees, dynamically adapted to the real-time characteristics/nature of the cloud application traffic.

3.1.4. Physical Infrastructure Requirements

The Physical Infrastructure requirements defined in D2.1 were categorized into the wireless domain, the optical domain and the integrated infrastructure.

Wireless Domain

Within the wireless domain, the CONTENT solution should provide the necessary architecture and mechanisms to build the converged LTE/WiFi wireless network. The CONTENT system should be able to adapt to the users' traffic, so that radio resources are efficiently utilized. The CONTENT physical infrastructure should propose mechanisms to address the limitations of current approaches to spectrum access under the MOVNO concept [D2.2].

The requirements should define the CONTENT slicing mechanism of the wireless resources and provide an efficient slicing approach with bandwidth-based and resource-based reservations. The system should provide the necessary virtual wireless-optical signaling mechanisms and also avoid packet loss and ensure caching efficiency in the sliced wireless network.
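A bandwidth-based reservation check of the kind described above can be sketched as a simple admission test; this is a simplification, since real wireless slicing must also account for interference and radio resource blocks:

```python
def admit_slice(requested_mbps: float, reserved_mbps: list,
                capacity_mbps: float) -> bool:
    """Bandwidth-based slice admission (illustrative): accept a new
    reservation only if the total of existing reservations plus the new
    request fits within the link capacity."""
    return sum(reserved_mbps) + requested_mbps <= capacity_mbps

# Hypothetical wireless link with 100 Mbps capacity and two existing slices
existing = [30.0, 20.0]
accepted = admit_slice(40.0, existing, capacity_mbps=100.0)   # 90 <= 100
rejected = admit_slice(60.0, existing, capacity_mbps=100.0)   # 110 > 100
```

A resource-based variant would count reserved radio resource units (e.g. LTE resource blocks) instead of aggregate bandwidth, with the same admission logic.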


The mobility requirements include the investigation of vertical handoff procedures and the proposition of new mechanisms according to the CONTENT concept. A virtual wireless network self-organization mechanism may be proposed, distinguishing between high- and low-mobility coverage areas whilst proposing new QoS management schemes. The CONTENT solution should be energy efficient and provide the necessary tools for control and monitoring.

Optical Domain

In terms of the optical domain, the composition of isolated logical networks over the physical network and the dynamic (re)allocation of resources should be maintained. The Time Shared Optical Network (TSON) solution should be used as the optical network infrastructure. The control, monitoring, power consumption and energy saving of the optical domain should be considered. Connectivity services such as P2MP, MP2MP and MP2P, as well as QoS provisioning, should be taken into account.

Integrated Infrastructure

In terms of the integrated infrastructure requirements, the CONTENT system should be built on a heterogeneous infrastructure (wireless access networks based on Wi-Fi and LTE technology, optical metro networks based on TSON technology, and data centres). In addition, the CONTENT system should support distributed DCs, isolation of virtual infrastructures and authorization control for accessing them. Finally, the upgrading and downgrading of already provided virtual infrastructures should be allowed, as well as the dynamic optimization of the allocation of the virtual infrastructures over the physical resources.

Network Operations and Management

Within the Network Operations and Management requirements, CONTENT should provide Operation and Administration (OAM) support across the CONTENT optical metro/wireless access network environment. The OAM should provide in-service reliability to the network service provider, provide notification messages such as deviations from the SLA and, finally, provide reliable means to measure a particular on-path network's out-of-service duration.

3.2. Actors and their roles in CONTENT architecture

Three domains have been identified within the scope of the CONTENT ecosystem: a) the wireless domain (both WiFi and LTE), b) the optical metro domain and c) the IT domain. Taking into account these three domains, we consider the following actors participating in the CONTENT architecture:

• Physical Infrastructure Provider (PIP): The administrative owner of the physical infrastructure, responsible for creating virtual instances of resources on top of it. The PIP is further divided into:


o Optical Infrastructure Provider (OIP): creates virtual instances of resources on top of its optical network infrastructure.
o Wireless Infrastructure Provider (WIP): creates virtual instances of resources on top of its wireless network infrastructure.
o Datacenter Infrastructure Provider (DIP): creates virtual instances of resources on top of its datacenter infrastructure.

• Virtual Operator (VO): Uses virtualized resources from the Physical Infrastructure Providers on an on-demand basis. It has business agreements in place to access the virtualized resources from one or several Physical Infrastructure Providers. A VO is responsible for the control and management of its end-to-end Virtual Infrastructure.

• Service Provider (SP): Responsible for offering value-added services to the end-user and monitoring the service provisioned to the end-user.

In [CONTENT-D2.1] and [CONTENT-D2.2], special emphasis was given to a new stakeholder: the Mobile-Optical Virtual Network Operator (MOVNO), which adopts converged virtualization of Wi-Fi, LTE, optical metro and IT resources in order to provide not only voice and data, but also IT services to its subscribers. The MOVNO provides converged services across different technological domains in order to achieve seamless and efficient integration of wireless and optical network technologies, supporting at the same time convergence with the IT infrastructure. In comparison with the MVNO, the MOVNO owns and operates virtual resources in the optical metro in order to bridge the gap between the wireless access and the computational infrastructure.

3.3. CONTENT Business Model

[CONTENT-D2.1] specified a new stakeholder covering the new business opportunities offered by CONTENT: the MOVNO. The CONTENT business model for the MOVNO was described in D2.2.

The MOVNO adopts converged virtualization of Wi-Fi, LTE, optical metro and IT resources for providing voice, data and IT services to its customers. MOVNOs can use the infrastructure of other providers, minimizing their own technology systems dedicated to billing and customer care, content delivery management and business support systems. End-users can experience a better quality of service in terms of improved reliability, availability and serviceability, whereas physical infrastructure providers can gain significant benefits through improved resource utilization and energy efficiency, faster service provisioning times, and greater elasticity, scalability and reliability of the overall solution. In this context, the adoption of cross-domain and cross-technology virtualization facilitates the migration towards a fully converged infrastructure.

The PIP will be able to provide virtual resources to the VO and will compose virtual infrastructures on top of its physical resources. The VO will in turn enable the SP to offer new services to its customers.

The CONTENT framework will create value for the MOVNO, since the new stakeholder will have access to optical virtual resources and datacenter virtual resources in accordance with the IaaS paradigm.


The MOVNO will have the opportunity to enter new markets and launch new services. The CONTENT architecture, which enables virtualization at the infrastructure layer, will reduce the CAPEX of the MOVNO; hence the MOVNO will be able to increase spending towards enhancing its services.

The PIP will be able to enhance its infrastructure through the revenue generated by the provisioning of infrastructure slices to the MOVNO. The PIP will also be able to provide optimized solutions depending on the MOVNO's needs. On the other hand, the SP will be able to offer new services to its customers, such as mobile gaming, online storage and backup services.

Pay as you Go Business Plan

The PIP should determine the amount of resources it wishes to lease to the MOVNO. Depending on the PIP (WIP, DIP, OIP), it may charge the MOVNO a one-time fixed price for deployment and then adopt a pay-as-you-go plan depending on usage. Further to this, additional fees may be charged in the future for resource organization, service upgrades and new infrastructure features.

Both the PIP (WIP, DIP, OIP) and the MOVNO will reduce their capital and operating expenses and will be able to concentrate on their domain of interest. The MOVNO will benefit from an increased number of subscribers, since the PIP will be able to provide wider coverage and reach more customers through the QoS offering. The SP may identify what value its customers are willing to pay for, depending on the service applications it offers.

The pay-as-you-go formula, determined by the PIP for the MOVNO and by the MOVNO for the SP, will be of great benefit when charging, covering not only metered use but also unrestricted use.

The PIP will establish a pay-as-you-go contract with the VO in order for the VO to spread its reach, whilst the VO will provide the SP with the ability to increase its business opportunities through contracts established with new customers.

• The MOVNO will pay the PIP per usage, e.g. per access or per user, avoiding a flat rate per period.

• The SP will pay the MOVNO depending on the network usage.

Service Initialization: Before a partnership is established between the PIP and the MOVNO, they should agree on a contract describing the terms of usage, such as QoS and price.

Service Usage: The PIP should monitor the usage of its resources by the MOVNO and gather usage information in order to charge the MOVNO accordingly. The PIP should be able to determine whether the resources are under- or over-provisioned and adjust the provisioning by negotiating with the MOVNO.

Service Termination: If the MOVNO no longer requires the service of the PIP, or their agreement has expired, the service is terminated.
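In its simplest form, the pay-as-you-go charging described above reduces to a one-time deployment fee plus metered usage. The following sketch illustrates the computation; the fee values and usage units are invented for illustration:

```python
def pay_as_you_go_charge(setup_fee: float, usage_units: float,
                         unit_price: float) -> float:
    """Illustrative pay-as-you-go bill: a one-time deployment fee plus
    per-usage charges (e.g. per access or per user), instead of a flat
    periodic rate."""
    return setup_fee + usage_units * unit_price

# Hypothetical figures: 1000-unit deployment fee, 2500 accesses at 0.02 each
bill = pay_as_you_go_charge(setup_fee=1000.0, usage_units=2500, unit_price=0.02)
```

In practice the PIP would apply this formula to the usage records gathered during the Service Usage phase before invoicing the MOVNO.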


4. The CONTENT Architecture

The CONTENT infrastructure model (Figure 1) aims at providing a technology platform interconnecting geographically distributed computational resources that can support a variety of cloud and mobile cloud services. The proposed architecture comprises an advanced heterogeneous multi-technology network infrastructure integrating optical metro and wireless access network domains interconnecting DCs, and adopts the concept of physical resource virtualization across the technology domains involved, as shown in Figure 2. In contrast to existing solutions that use small DCs in the wireless access and large DCs in the core to support mobile and fixed cloud traffic, respectively, the proposed solution relies on a common DC infrastructure fully converged with the broadband wireless access and the metro optical network.

1. The IP packets from the LTE BS are forwarded to the TSON edge nodes, encapsulated in Ethernet packets.
2. The TSON edge node processes the Ethernet traffic and, after converting it to TSON frames, forwards it to the destined TSON edge node.
3. TSON extracts the Ethernet packets from the TSON frames and directs them to the data centre.

Figure 1: CONTENT infrastructure model (wireless access with mobile and fixed users, LTE base stations, metro optical network with TSON edge nodes, and data centres)
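The aggregation of Ethernet traffic into TSON frames at the edge node (step 2 of the workflow) can be sketched as a simple packing loop; the slot size and framing below are illustrative, since actual TSON framing is defined by the hardware implementation:

```python
def tson_encapsulate(eth_frames: list, slot_bytes: int) -> list:
    """Illustrative TSON edge-node aggregation: pack Ethernet frames into
    fixed-size time-slot payloads (sub-wavelength granularity), returning
    one list of frames per slot. This only sketches the aggregation step;
    optical framing, guard bands and scheduling are omitted."""
    slots, current, used = [], [], 0
    for frame in eth_frames:
        # Open a new slot when the next frame would overflow the current one
        if used + len(frame) > slot_bytes and current:
            slots.append(current)
            current, used = [], 0
        current.append(frame)
        used += len(frame)
    if current:
        slots.append(current)
    return slots

# Hypothetical frames of 800, 800 and 400 bytes into 1500-byte slots
slots = tson_encapsulate([b"\x00" * 800, b"\x00" * 800, b"\x00" * 400],
                         slot_bytes=1500)
```

At the destination edge node, step 3 would simply be the inverse: flattening the slot payloads back into individual Ethernet frames and forwarding them towards the data centre.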

Figure 2: Virtualization over heterogeneous network infrastructures (a virtual infrastructure spanning the wireless access, metro optical network and data centre domains)


4.1. CONTENT vision and architectural approach

As already discussed in the introduction section, the infrastructure model proposed by CONTENT is based on the IaaS paradigm. To support this paradigm, physical resource virtualization plays a key role in the CONTENT approach and is enabled by the cross-domain infrastructure management layer of the CONTENT architectural structure. Connectivity services are provided over the virtual infrastructure slices, created by the infrastructure management layer, through the virtual infrastructure control layer. Integrated end-to-end network, cloud and mobile cloud services are orchestrated and provisioned through the service orchestration layer. The details of the CONTENT layered architecture are discussed below.

4.1.1. CONTENT architecture layers

CONTENT proposes a layered architecture with the aim of facilitating the main principles of its novel proposition, i.e. cross-technology virtualization in support of optimised, seamless and coordinated cloud and mobile cloud service provisioning across heterogeneous network domains. The overall CONTENT architectural structure is illustrated in Figure 3 and includes the following layers:

• Heterogeneous Physical Infrastructure Layer: includes a hybrid wireless access network (LTE/WiFi) domain and an optical metro network domain (TSON) interconnecting geographically distributed data centres, supporting frame-based sub-wavelength switching granularity.

• Infrastructure Management Layer: responsible for the management of the network infrastructure and the creation of virtual network infrastructures over the underlying physical resources. This involves functions including resource representation, abstraction, management and virtualization across the heterogeneous network domains. An important feature of the supported functionalities is the orchestrated abstraction of resources across domains, involving information exchange and coordination across domains.

• Control Layer: responsible for provisioning IT and (mobile) connectivity services in the cloud and network domains respectively. The focus of the project is on the network side, where the control layer establishes seamless connectivity across heterogeneous technology domains (wireless access and optical metro) through a coordinated, end-to-end approach to support optimized performance, QoS guarantees, resource efficiency and sustainability.

• Service Orchestration Layer: responsible for the efficient coordination of the cloud and network resources, in order to enable the end-to-end composition and delivery of integrated cloud, mobile cloud and network services in mobile environments supporting the required QoE.


Figure 3: The overall CONTENT layered architecture (end-to-end cloud+net service orchestration; enhanced network functions such as routing, mobility and TE; virtual wireless and optical control planes over virtualized resources; resource management, resource abstraction and protocol manager with WiFi/LTE and TSON drivers; cloud manager system; spanning the wireless access, optical metro and data centre domains)

4.1.2. CONTENT workflow: planning and operation

The CONTENT layered architecture depicted in Figure 3 is conceived to provide cloud and mobile cloud services on top of virtual infrastructures that span multi-technology and multi-domain physical networks. The cooperation and interaction among the different architecture layers are the key enablers for the provisioning of the CONTENT enhanced cloud services; Figure 4 highlights the cross-layer interfaces and correlates each architecture layer with the corresponding CONTENT actor. The CONTENT architecture and ecosystem are flexible enough to allow a single actor to play more than one business role: this is the case of the MOVNO (shown in Figure 4), which operates the VI offered by the PIP and also provides enhanced cloud services to the end-users, therefore collapsing the VO and SP roles into a single entity.

The provisioning of enhanced cloud and mobile cloud services is performed in CONTENT through two distinct phases that require interactions among actors and layers: VI planning and VI operation. The former aims to create a new VI: a set of multi-technology virtual resources (i.e. including both wireless and optical resources) is composed and provisioned on top of the physical infrastructure by the PIP upon a request coming from the MOVNO. The VI operation, on the other hand, refers to the provisioning of dynamic end-to-end connectivity services and the instantiation of on-demand cloud services to be offered to mobile users. This means that the VI operation phase includes both VI control and converged service orchestration functions.

The following two sub-sections describe the VI planning and operation phases, also highlighting the inter-layer cooperation in the CONTENT architecture. For each phase, a sequence diagram of the interactions among actors and layers is provided, along with a step-wise description of the main actions. It should be noted that VI planning and VI operation are subsequent phases: the successful execution of the former is a strict pre-requisite for the execution of the latter.

[Figure 4 relates each layer to its actor: the Mobile Cloud User reaches the End-to-end Cloud+Net Service Orchestration through the Cloud Service Provisioning Interface; the MOVNO operates the Service Orchestration (Service Orchestration management interface) and the Network Control Layer with its Enhanced Network Functions and Virtual Wireless/Optical CPs (CP management and VI Network Control interfaces); the PIP operates the Infrastructure Management Layer (IML physical infrastructure management, IML service management, VI operational and PI operational interfaces) and the Cloud Manager System with its Planning and Operation components (CMS configuration, DC Control and DC operational interfaces), on top of the Wireless Network, TSON and Data Centre domains.]

Figure 4: Cross-layer interactions and actors in the CONTENT architecture


VI Planning phase

The VI planning phase within the CONTENT environment is defined as the stage where the virtual infrastructures are requested, planned and instantiated according to the MOVNO requirements. This phase comprises the complete set of pre-operation actions. On the one hand, it includes the procedures through which the heterogeneous physical infrastructure is registered into the IML. On the other hand, it also involves the planning and dimensioning of the different virtual infrastructures as a function of the requests coming from the different MOVNOs. Finally, the virtual infrastructure instantiation is the process that prepares all the components within the VI in order to make it operable by the corresponding control plane. Figure 5 depicts the sequence diagram for the planning phase.

Figure 5: VI planning sequence diagram, including resource registration phase

The following table contains a step-wise description of the sequence diagram.


Step – Actors / Layers / Interface: Description

Step 1a – MOVNO Operator / NCL / internal procedure: Business planning. The MOVNO estimates its business requirements in order to derive a specific virtual infrastructure request with a set of interconnected nodes.

Step 1b – PIP / IML / IML physical infrastructure management interface: The PIP administrator registers the different physical resources into the IML. The IML needs to be aware of the resources it is controlling; once they are registered, the IML gathers any required information about the different devices.

Step 2 – MOVNO Operator, PIP / NCL, IML / IML Service Management Interface: The MOVNO Operator sends a request including the VI description and the timeline for the desired VI. The VI request contains a complete description of the different resources and how they are interconnected.

Step 3 – PIP / IML / internal procedure: Holistic VI planning. The PIP starts the planning process. It is performed through a holistic approach in which all the infrastructure details (across the three segments) are known in advance. The algorithm used to allocate the virtual resources contained in the request over the different physical resources may optimize different metrics. Once planning is complete, the resources are reserved for the given VI, but the VI is not yet operative.

Step 4 – PIP, MOVNO Operator / IML, NCL / IML Service Management Interface: The PIP notifies the MOVNO Operator that the VI has been planned and that all the resources are reserved.

Step 5 – MOVNO Operator, PIP / NCL, IML / IML Service Management Interface: The MOVNO sends a VI instantiation request in order to start operating the previously planned virtual infrastructure.

Step 6 – PIP / IML / internal procedure: The PIP starts the instantiation of all the virtual resources contained within the VI. All the software components are instantiated and the VI is then ready to be operated by the MOVNO.

Step 7 – PIP, MOVNO Operator / IML, NCL / IML Service Management Interface: The PIP notifies the MOVNO Operator that the VI has been successfully instantiated and is ready to be operated. The MOVNO Operator receives the access details for the VI, in order to enable the NCL deployment and configuration phase.

Table 1: VI planning phase step-wise description
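To make step 2 concrete, the sketch below models a VI request as it might be submitted over the IML Service Management Interface: a timeline plus interconnected virtual nodes. All field names (`movno_id`, `virtual_nodes`, `virtual_links`, etc.) are hypothetical illustrations, not the actual CONTENT schema.

```python
# Hypothetical sketch of a MOVNO virtual-infrastructure (VI) request.
# Field names are illustrative only; the real request format is defined
# by the CONTENT IML Service Management Interface.

vi_request = {
    "movno_id": "movno-42",
    "timeline": {"start": "2014-01-01T00:00Z", "duration_days": 90},
    "virtual_nodes": [
        {"id": "vn1", "type": "wireless", "technology": "LTE"},
        {"id": "vn2", "type": "optical", "technology": "TSON"},
        {"id": "vn3", "type": "dc", "vms": 4},
    ],
    "virtual_links": [
        {"ends": ("vn1", "vn2"), "bandwidth_mbps": 100},
        {"ends": ("vn2", "vn3"), "bandwidth_mbps": 1000},
    ],
}

def validate_vi_request(req):
    """Check that every virtual link references declared virtual nodes."""
    node_ids = {n["id"] for n in req["virtual_nodes"]}
    return all(set(l["ends"]) <= node_ids for l in req["virtual_links"])

print(validate_vi_request(vi_request))  # True for the request above
```

A check like this would plausibly run in the IML before the holistic planning of step 3 begins, since planning presumes a well-formed topology.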

VI Operation phase

The VI operation includes two distinct sub-phases: (a) the Control Layer deployment and configuration and (b) the cloud service operation. The former represents a pure management action performed by the MOVNO on the VI rented from the PIP; it aims at deploying and configuring an instance of the network control layer on top of the virtual network infrastructure and in some cases may include further network service pre-provisioning actions needed to accommodate pre-planned cloud-based end-to-end traffic. Moreover, to enable the integrated control and orchestration of cloud and connectivity services, specific configurations must be enforced on the Service Orchestration Layer and the Cloud Management System (CMS) responsible for the management and control of the DC resources. Figure 6 depicts the sequence diagram of this initial deployment and configuration sub-phase, mainly showing the interactions among layers and actors in the CONTENT architecture. In addition, Table 2 provides a step-wise description of the whole deployment and configuration procedure.


[Figure 6 shows the MOVNO Operator, the Network Control Layer, the Cloud Management System, the Infrastructure Management Layer and the Physical Infrastructure exchanging the following messages: (1) Network Control instantiation and (2) Network Control configuration, after which the Network Control is deployed and running; (3) CMS configuration; (4) Service Orchestration configuration; (5) network service pre-provisioning, triggering (6) VI resources provisioning and (7) PI resources provisioning, after which the pre-planned network services are provisioned.]

Figure 6: Control Layer deployment and configuration sub-phase sequence diagram

Step – Actors / Layers / Interface: Description

Step 1 – MOVNO Operator / Network Control Layer / CP management interface: The Network Control Layer is instantiated and deployed by the MOVNO Operator on top of the VI exposed by the Infrastructure Management Layer, e.g. in the form of virtual machines running the network control plane software.

Step 2 – MOVNO Operator / Network Control Layer / CP management interface: The Network Control Layer is configured by the MOVNO Operator to control the VI resources rented from the PIP and to interact with the Service Orchestration Layer for cloud and network service provisioning.

Step 3 – MOVNO Operator / Cloud Management System / CMS configuration interface: The MOVNO Operator configures the CMS to let it operate on top of the physical resources inside the Data Centre (i.e. network, storage, processing) and interact with the Service Orchestration Layer.

Step 4 – MOVNO Operator / Service Orchestration Layer / Service Orchestration management interface: The Service Orchestration Layer is configured by the MOVNO Operator to let it interact with the Network Control Layer, the CMS, and the mobile end-user.

Step 5 – MOVNO Operator / Network Control Layer / CP management interface: If needed, the MOVNO Operator performs a static (i.e. management-like) pre-provisioning of network connectivity services across the wireless access and optical metro domains to satisfy pre-planned and expected cloud-based traffic.

Step 6 – MOVNO, PIP / Network Control Layer, Infrastructure Management Layer / VI operational interface: The Network Control plane controlling the heterogeneous wireless access and optical metro VI processes the pre-provisioning request from the MOVNO Operator and requests VI resource provisioning from the Infrastructure Management.

Step 7 – PIP / Infrastructure Management Layer, Physical Infrastructure Layer / PI operational interface: The Infrastructure Management translates the VI provisioning request from the Network Control into specific actions to be performed on the wireless access and optical metro physical resources to provision the requested service.

Table 2: Control Layer deployment and configuration sub-phase step-wise description

On the other hand, the cloud service operation is the core part of the VI operation phase, and includes all the actions needed to provide on-demand cloud services to the mobile cloud users, from the service requests up to the seamless reservation and provisioning of virtual and physical network and IT resources. This phase involves all the layers in the architecture and all the actors (even though only indirectly in the PIP case) in the CONTENT ecosystem. Figure 7 presents the cloud service operation sequence diagram, which is further described in a step-wise fashion in Table 3.


[Figure 7 shows the Mobile Cloud User, the Service Orchestration Layer (MOVNO), the Network Control Layer, the Cloud Management System, the Infrastructure Management Layer and the Physical Infrastructure (PIP) exchanging: (1) a cloud + network service request; (2) orchestration of end-to-end cloud and network services; (3) converged connectivity service provisioning, triggering (4) VI resources provisioning and (5) PI resources provisioning; in parallel, cloud service provisioning and (6) DC network and IT resource provisioning toward the Data Centre, after which seamless cloud and network services are established.]

Figure 7: Cloud service operation sub-phase sequence diagram

Step – Actors / Layers / Interface: Description

Step 1 – Mobile Cloud User, MOVNO / Service Orchestration Layer / Cloud Service Provisioning Interface: The Mobile Cloud User requests from the MOVNO (i.e. the Service Orchestration) the establishment of a cloud service. The request includes the specification of both cloud and network service requirements.

Step 2 – MOVNO / Service Orchestration Layer / internal: The Service Orchestration performs all its internal actions to orchestrate and compose the seamless cloud and end-to-end network services, based on IT and network capability and availability information.

Step 3 – MOVNO / Service Orchestration Layer, Network Control Layer / VI Network Control interface: The Service Orchestration requests the provisioning of a seamless network connectivity service that spans the wireless access and optical metro domains; specific QoS requirements to meet the Mobile Cloud User's needs are included.

Step 4 – MOVNO, PIP / Network Control Layer, Infrastructure Management Layer / VI operational interface: The Network Control plane controlling the heterogeneous multi-technology VI processes the connectivity request from the Service Orchestration, computes and combines the needed per-domain network services and finally requests VI resource provisioning from the Infrastructure Management.

Step 5 – PIP / Infrastructure Management Layer, Physical Infrastructure Layer / PI operational interface: The Infrastructure Management translates the VI resource provisioning request from the Network Control into specific actions to be performed in the involved technology domains, in particular configuring the physical resources to accommodate the requested service.

Step 6 – MOVNO / Service Orchestration Layer, Cloud Management System / DC Control interface: The Service Orchestration invokes the CMS to provision the cloud part of the service requested by the Mobile Cloud User inside the Data Centre. The requirements for network, storage and computing resources to be provisioned are included.

Step 7 – MOVNO, PIP / Cloud Management System, Physical Infrastructure Layer / DC operational interface: The CMS performs its internal virtualization, composition and orchestration actions for the heterogeneous resources inside the Data Centre, configuring IT and network resources according to the Mobile Cloud User's requirements.

Table 3: Cloud service operation sub-phase step-wise description

4.2. Physical Infrastructure Layer

As mentioned earlier, the heterogeneous physical infrastructure comprises a hybrid wireless access network (LTE/WiFi) domain and an optical metro network domain interconnecting geographically distributed data centres. The optical metro network is based on the Time Shared Optical Network (TSON) technology, which supports frame-based sub-wavelength switching granularity and was originally developed in the framework of the EU project MAINS [MAINS]. TSON is designed and implemented as a novel time-multiplexing metro network solution, offering dynamic connectivity with fine bandwidth granularity in a contention-less manner. The TSON deployment will address the access/DC connectivity by providing flexible rates and a virtualisation-friendly transport technology that can be converged with wireless and IT resources in order to maximise performance and provide an end-to-end virtual infrastructure.

Figure 8 shows the TSON infrastructure interconnecting the wireless access, which contains the wireless nodes and the mobile packet core, with distributed data centres as the cloud enablers. The TSON hardware comprises FPGA platforms (to receive the ingress traffic and convert it into optical bursts) and 10 ns optical switches (to route the bursts all-optically within the TSON core to their final destination). The transmission and routing per node takes place using the allocation information, which is displayed as matrices on top of each link.


[Figure 8 shows two Data Centres and two Wireless Networks attached to a ring of TSON FPGA nodes and fast optical switches; binary allocation matrices annotated on each link carry the per-slot allocation information used for transmission and routing.]

Figure 8: TSON network interconnecting DC and wireless access networks

4.2.1. TSON architecture

The TSON network data plane consists of FPGA nodes for high-speed processing at 10 Gb/s per-wavelength data rates. The Ethernet traffic coming from the client sides in the wireless network and data centres constitutes the ingress traffic of the TSON network. The traffic is processed and converted into optical bursts, which are sent to fast switches and from there routed to the destination TSON node all-optically. The operational architecture of the TSON nodes involves three layers [Mains-d23], detailed below:

1. Routing and resource allocation: TSON uses central routing and resource allocation, comprising a Path Computation Element (PCE) for finding end-to-end paths together with a Sub-Lambda Assignment Engine (SLAE) for time-slice allocation. These modules need to be integrated into whichever control plane framework is in use, and to be invoked whenever a new request arrives at the network. They use a global database of all resources and their availability, and update it for each allocation. Within CONTENT, as several virtual networks run concurrently, each of them needs its own independent control and hence its own instance of the routing and resource allocation tool to manage per-virtual-network resources. The information of the whole TSON network should then be held in another database with visibility of the whole infrastructure's resources.

2. TSON Layer 2 functions: The direct control of the data plane of TSON nodes is realised through the use of high performance FPGA (Field Programmable Gate Array).

3. TSON Layer 1 functions: Traffic is transported and switched over to the allocated time slots. TSON hardware is composed of FPGAs plus fast switches.
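The PCE/SLAE interplay in layer 1 above can be sketched in a few lines: once the PCE has computed a path, the SLAE must find time slots that are free on every link of that path and mark them as used in the resource database. This first-fit sketch is an illustration under assumed data structures (per-link 0/1 availability lists, as in the allocation matrices of Figure 8), not the CONTENT implementation.

```python
# Minimal first-fit sketch of a Sub-Lambda Assignment Engine (SLAE):
# given the 0/1 slot-availability maps of every link on a PCE-computed
# path (1 = free), pick enough slots that are free on ALL links.
# Illustrative assumption only, not the actual TSON SLAE algorithm.

def allocate_slots(path_links, slots_needed):
    """path_links: list of per-link availability lists (1 = free slot).
    Returns the chosen slot indices, or None if the request cannot fit."""
    n_slots = len(path_links[0])
    # A slot is usable only if it is free on every link of the path.
    common = [i for i in range(n_slots)
              if all(link[i] == 1 for link in path_links)]
    if len(common) < slots_needed:
        return None
    chosen = common[:slots_needed]
    for link in path_links:          # update the global resource database
        for i in chosen:
            link[i] = 0
    return chosen

links = [[1, 1, 1, 0, 1, 1],  # availability of link A along the path
         [1, 0, 1, 1, 1, 1]]  # availability of link B along the path
print(allocate_slots(links, 2))  # [0, 2]: first two slots free on both links
```

Note how the per-virtual-network SLAE instances described above would each operate on their own slice of such availability maps, while the infrastructure-wide database keeps the union view.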


TSON provides a set of considerable advantages compared to alternative technologies, which are listed below:

• TSON takes advantage of a flexible and efficient resource allocation mechanism, offering a contention-free approach; arbitrary routing and allocation algorithms can operate on the same network resources.

• TSON has a synchronous TDM mechanism and uses frames and time slicing to provide bandwidth services. The frame and time-slice sizes can be tuned to modify the bandwidth granularity, which can vary between 30 Mb/s per wavelength and 10 GE: changing the resource attributes of the communication frames and the number of time slots per frame sets the granularity to the desired level. This flexibility allows TSON to interface with networks of different data rates.

• TSON adopts a novel XML-REST interface as a general and common structure for the networking information exchanged between TSON nodes and the controller. XML-REST based interfaces offer a highly cost-effective and flexible open solution for machine-to-machine inter-working.

• TSON takes advantage of a modular and extensible architecture that makes it easy to adopt other sub-wavelength switching techniques. For example, TSON network modules (resource allocation engine, XML interface, etc.) can be easily modified to support sub-wavelength switching in the spectral domain as a possible future option.

TSON network resources

The TSON network, as explained in D4.1, offers a hierarchy of three levels of resource granularity: (1) connections, (2) frames, and (3) time slots, as illustrated in Figure 9. By connection, we refer to a communication session between two end points in the TSON domain. TSON achieves statistical multiplexing of data units by allocating time slices on each communication frame. The frames are synchronised between all the participating TSON nodes, and the nodes use the time slices allocated per connection for transmission and switching of data bursts. The number of time slices defines the bandwidth assigned to each session, and a connection, as a set of frames, defines the duration of a communication session.

Prior to its application in the context of CONTENT, TSON supported data rates of 100 Mbps up to 20 Gbps using two wavelengths per link. For CONTENT, however, several extensions are made to the TSON structure (which operates with a TDM mechanism) to make it more flexible in providing connectivity services with variable bandwidth and latency granularities. The frame lengths, burst lengths and the overhead needed for each burst can be defined prior to operation (Figure 9). This creates the possibility to program the TSON functional behaviour for each networking condition. For instance, the lower the number of bursts in a frame, the shorter the frame duration, which in turn means less latency experienced by the packets in the aggregation buffers when sent out as bursts of data. Also, the balance between the data size and overhead size per slot can be defined, enabling the network to adapt to link and network conditions. For instance, if the quality of the links or the length of the paths does not require much overhead to guarantee error-free communication, the throughput of the network can be increased. These extensions will enable TSON to support lower data rates per burst (down to 30 Mbps) with variable QoS metrics depending on the setup.
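The granularity and latency trade-off described above follows from simple arithmetic: a slot's bandwidth is the wavelength rate divided by the slots per frame, and the worst-case aggregation delay is roughly one frame duration. The sketch below illustrates this under assumed numbers (a 10 Gb/s wavelength, overhead ignored); the helper names are ours, not TSON's.

```python
# Back-of-the-envelope TSON granularity/latency sketch. Values are
# illustrative assumptions consistent with the text: a 10 Gb/s wavelength
# and a tunable number of time slots per frame; per-burst overhead ignored.

WAVELENGTH_RATE_GBPS = 10.0

def slot_granularity_mbps(slots_per_frame):
    """Bandwidth of a single time slot on one wavelength."""
    return WAVELENGTH_RATE_GBPS * 1000 / slots_per_frame

def frame_latency_ms(frame_len_us):
    """Worst-case aggregation delay: a packet may wait one full frame."""
    return frame_len_us / 1000

# With ~333 slots per frame one slot is ~30 Mb/s, matching the minimum
# granularity quoted in the text; fewer slots mean coarser granularity.
print(round(slot_granularity_mbps(333), 1))  # 30.0 (Mb/s per slot)
print(slot_granularity_mbps(1))              # 10000.0, i.e. the full 10G
```

This also makes the latency remark concrete: halving the frame length halves `frame_latency_ms` but, for a fixed slot duration, also halves the number of slots available per frame.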

TSON also uses a more flexible matching and forwarding mechanism: it can match and store traffic based on single or multiple header fields. This allows the network to isolate traffic when integrating with other domains, for instance when a VLAN is allocated to a stream end to end, or to create separate network slices in the TSON domain by using a selection of the header fields.

[Figure 9 illustrates the resource hierarchy: a connection (lasting seconds, minutes or hours) consists of frames of variable length, each containing a variable number of time slots, with a tunable burst time-slice length and variable per-burst overhead.]

Figure 9: Resource abstraction in TSON with extended flexibility

Forwarding mechanism

The TSON forwarding and matching mechanism has been updated from destination MAC address matching to matching on any set of bytes of the Ethernet header (16 bytes), with the matching information provided by the controller software. This added capability greatly increases the flexibility of TSON in controlling the ingress traffic, which can be set from the controller to use conventional header fields such as destination MAC or VLAN ID, or any combination of header bytes. This also mitigates the scalability issue inherent in the 802.1 and 802.1Q protocols, which suffer from limited header permutations.
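The generalised matching described above amounts to a value/mask comparison over the first 16 header bytes. The sketch below shows the idea with a rule equivalent to destination-MAC matching; the rule format (`value`, `mask` byte strings) is a hypothetical illustration, not the controller's actual wire format.

```python
# Sketch of generalised header matching: instead of matching only the
# destination MAC, any of the first 16 Ethernet header bytes can be
# selected via a controller-supplied mask. Rule format is hypothetical.

def matches(header, value, mask):
    """True if `header` equals `value` on every byte where `mask` is set."""
    return all((h & m) == (v & m) for h, v, m in zip(header, value, mask))

# Rule equivalent to "match destination MAC" (first 6 header bytes only):
dst_mac_mask = bytes([0xff] * 6 + [0x00] * 10)
rule_value = bytes.fromhex("0a1b2c3d4e5f") + bytes(10)

# 16 header bytes: dst MAC, src MAC, then a VLAN tag (0x8100, VID 100).
frame_hdr = (bytes.fromhex("0a1b2c3d4e5f")
             + bytes.fromhex("112233445566")
             + bytes.fromhex("81000064"))
print(matches(frame_hdr, rule_value, dst_mac_mask))  # True
```

Matching on the VLAN ID instead would simply mean setting the mask bits over the tag bytes, which is how a single mechanism covers both conventional fields and arbitrary byte combinations.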


Wireless – Optical, Optical – Wireless

The communication between the optical and wireless technology domains will use Ethernet interfaces. The wireless domain will provide 1 GE links, which are aggregated into 10 GE and fed into the TSON system. The two domains will use VLAN technology for end-to-end data path integration for converged network slices and to define isolation between different coexisting virtual networks. The VLAN-tagged traffic will then be sent directly into the TSON network.

Optical – DC, DC – Optical

DC nodes, as TSON clients, will also communicate with the TSON nodes through Ethernet interfaces. Different VMs or VM groups can be identified and tagged using VLAN tagging, and TSON will be able to transfer them between the DC and wireless domains.

4.2.2. Wireless Network Architecture

The CONTENT architecture relies on a converged 802.11 and 4G Long Term Evolution (LTE) access network used to support cloud computing services.

In the CONTENT architecture the wireless network consists of:

• the wireless Access network and
• the wireless Backhaul network

The wireless Access Network consists of 802.11 and LTE networks, while the Backhaul network is the packet core network used to pass the traffic to the Gateway that interacts with the TSON optical network Gateway.

We summarize the 802.11 network architecture basic components [stand12]:

• Station: connects to the wireless medium. Supported services are authentication, privacy, and delivery of the data.

• Basic Service Set (BSS): a set of stations that communicate with one another.

• Extended Service Set (ESS): a set of infrastructure BSSs, where the APs communicate among themselves to forward traffic from one BSS to another and to facilitate the movement of mobile stations between BSSs.

• Distribution System (DS): the mechanism by which one AP communicates with another to exchange frames for stations in their BSSs, forward frames to follow mobile stations from one BSS to another, and exchange frames with the wired network.

• Station Services: functions similar to those expected of a wired network.

• Distribution Services: services that allow mobile stations to roam freely within an ESS and allow an IEEE 802.11 WLAN to connect with the wired LAN infrastructure.


Within the IEEE 802.11 Working Group, a large number of IEEE standards and amendments exist. The standard specifies DSSS, FHSS and OFDM as physical-layer methods for spreading signals that have been modulated onto a base carrier (the original standard also defined an infrared PHY). The CONTENT solution will support the basic 802.11 standards (802.11a/b/g) for the PHY, while there will be an effort to integrate and support improvements to the standard, such as the 802.11e amendment, which defines mechanisms for QoS support.

In contrast to the circuit-switched model of previous cellular systems, Long Term Evolution (LTE) has been designed to support only packet-switched services. It aims to provide seamless Internet Protocol (IP) connectivity between user equipment (UE) and the packet data network (PDN), without any disruption to the end users' applications during mobility [LTE].

System Architecture Evolution (SAE) is the core network architecture of the LTE wireless communication standard. It is the evolution of the GPRS core network, and together LTE and SAE comprise the Evolved Packet System (EPS). The EPS uses the concept of EPS bearers to route IP traffic from a gateway in the PDN to the UE. The core network (called the EPC) is responsible for the overall control of the UE and the establishment of the bearers.

The main logical nodes of the EPC are:

• PDN Gateway (P-GW): responsible for IP address allocation for the UE, as well as QoS enforcement and flow-based charging.

• Serving Gateway (S-GW): all user IP packets are transferred through the Serving Gateway, which serves as the local mobility anchor for the data bearers when the UE moves between eNodeBs.

• Mobility Management Entity (MME): the control node that processes the signalling between the UE and the core network.

An overview of the LTE architecture can be found in [LTE] while more detailed descriptions can be found in [Ghosh] and [Sesia].

All the design, implementation, evaluation and validation of the wireless part of the CONTENT solution will be based on the NITOS wireless testbed architecture and operation. The architecture of the NITOS testbed is depicted in Figure 10. Although the NITOS testbed focuses on experimentation, the architecture it uses (separated Access and Backhaul networks) is also what is found in practical deployments and what wireless providers usually support.

In addition, all the control and management software it uses is based on open source software. This way, any enhancements made and interfaces built in the context of the CONTENT project can be readily exploited by other testbeds that use the same management and control tools and reservation mechanisms. Nevertheless, as we analyze in the following section, the proposed architecture is agnostic to the specific software technology used by the NITOS testbed and can be adopted by wireless network providers that use different control and management frameworks, as long as the necessary interfaces are built.


Figure 10: NITOS testbed architecture

The basic components of the NITOS wireless network are:

• WiFi/LTE Access network
• OpenFlow-based wireless Backhaul network
• Control network
• NITOS Bridge

The NITOS Bridge is the point where VLAN network connections through the GEANT network terminate. The wireless Backhaul network is the packet network responsible for passing traffic between the WiFi/LTE Access Network and the Wireless Gateway (in the case of NITOS, the Gateway resides at the University of Thessaly (UTH) side and is used for the interconnection to the optical Gateway). A detailed description of the NITOS testbed is given in D4.1.

Note that testbed management and control is performed with the OMF framework [OMF], and virtual network isolation is currently deployed in the NITOS testbed in space, time and frequency; NITOS "virtual wireless networks" are composed of the tuple "nodes – frequency channels – period of time", and this reservation is guaranteed for every network. In addition, there is OMF access to the OpenFlow-based Backhaul network. This way, a wireless network "slice" can be controlled and operated over the whole path from the Access Points and the LTE network to the wireless provider Gateway.
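The slice model just described, a tuple of nodes, frequency channels and a time period whose reservation must be guaranteed, implies a simple conflict rule: two slices interfere only if they overlap in all three dimensions at once. The sketch below illustrates that rule; the dictionary layout is our own assumption, not the OMF/NITOS reservation format.

```python
# Sketch of the NITOS slice model described above: a "virtual wireless
# network" is a (nodes, frequency channels, time period) tuple, and two
# reservations conflict only if they overlap in all three dimensions.
# Illustrative data layout; not the actual OMF reservation schema.

def slices_conflict(a, b):
    """Conflict = shared node AND shared channel AND overlapping time."""
    share_nodes = bool(a["nodes"] & b["nodes"])
    share_channels = bool(a["channels"] & b["channels"])
    time_overlap = a["start"] < b["end"] and b["start"] < a["end"]
    return share_nodes and share_channels and time_overlap

s1 = {"nodes": {1, 2}, "channels": {36}, "start": 0, "end": 60}
s2 = {"nodes": {2, 3}, "channels": {40}, "start": 30, "end": 90}
print(slices_conflict(s1, s2))  # False: they share a node but not a channel
```

This is why isolation "in space, time and frequency" lets concurrent slices share physical nodes, as long as they differ in at least one of the three dimensions.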

Programmable dataplane technologies, such as the Click modular router [click], will be supported by various elements of the architecture (Access Points and the NITOS bridge), while other technologies such as openvswitch [vswitch] or vrouter [vrouter] will also be evaluated in the context of the project.

Wireless – Optical Interfaces

The physical interconnection between the wireless testbed (NITOS in Volos, Greece) and the optical testbed (TSON in Bristol, UK) is made through the GEANT network [GEANT]. VLAN-based connections are currently used, while the robustness of the GEANT network guarantees the delivery of the required capacity and services. The CONTENT project will exploit the advances and updates on the GEANT network and the services offered, such as the Bandwidth on Demand (BoD) service. The involved NRENs (GRNET in Greece and JANET in the UK) are already carrying out pilots for the introduction of the BoD service, with which multi-domain provisioning is offered to users that can make reservations in advance, or provision point-to-point circuits in real time with the capacity needed.

The interconnection of the two technology domains includes interfacing the testbeds both at the physical layer and at the logical and software layer. For the physical interconnections, the two technology domains need to support common protocols to enable seamless traffic interchange between them. Ethernet 802.1Q VLAN tagging and techniques like QinQ tagging and EVPN have been considered to interface the two networks: they enable inter-domain traffic transport, are compatible with other Ethernet-based technologies (thus enhancing the scalability of the network), and facilitate the establishment of end-to-end virtual networks as services. For connecting the two testbeds (TSON and NITOS), the data rate characteristics of both testbeds are considered. TSON, as the metro backhaul solution, provides from 100 Mbps up to 20 Gbps of connectivity for a variable number of LTE/WiFi cells, depending on the rates required at each access point and the level of aggregation performed. The rate mismatch between the two domains can cause inefficiency in using the available bandwidth; however, TSON, as a sub-wavelength switching technology with fine data rate granularities and dynamic setup and tear-down of connections, can adapt to the requirements in the access quite swiftly.

The control of the physical setup and interconnection of the two networks needs to be addressed in the control layer. This way, the allocation/optimization mechanisms will have a global understanding of the resources, can access them (e.g. nodes, ports, spectral and time units) and can thus create optimal end-to-end slices of the network. In this regard, the TSON system exposes the information on its edge nodes to the controller through a web service interface, while NITOS does so via both a web-based interface and SFA/mySlice [sfa]. This enables the controller to take advantage of the programmability and flexibility of the TSON and NITOS technologies to establish optimal paths stretching from users in the wireless access areas to the distributed data centres through the metro/core areas of the network.


Optical – DC, DC – Optical dataplane interfaces

DC nodes, as TSON clients, will also communicate with the TSON nodes through Ethernet interfaces. TSON can operate on 10G interfaces using SFP+ 1550 and 1310 nm modules, either directly connecting the data centre network to TSON, or using intermediate switches for further grooming and aggregation before reaching TSON. Traffic from different VMs or VM groups can be identified and tagged using VLAN tagging, and TSON will be able to transfer it between the DC and wireless domains over converged wireless-optical-datacentre slices seamlessly. Also, TSON nodes can be dynamically reprogrammed to use different header information for matching and forwarding of the ingress traffic, which suits variable source/destination traffic matrices well.

4.3. Infrastructure management

The previous section has presented and described the physical infrastructure substrate considered within the CONTENT project (see Figure 1). The CONTENT test-bed is designed and implemented to experimentally validate mechanisms and procedures for the joint orchestration of cloud and network resources. In terms of wireless technologies, CONTENT focuses on a hybrid solution based on WiFi and LTE. In terms of wired technologies, CONTENT focuses on TSON metro networks, that is, Wavelength Division Multiplexed (WDM) metro networks with frame-based sub-wavelength switching granularity incorporating nodes that support backhauling from the wireless access segment.

In order to manage the different infrastructure domains, the CONTENT architecture deploys the Infrastructure Management Layer (IML). It is the element of the architecture that covers all the functions required to provide the infrastructure as a service towards the different service providers. In the context of the CONTENT project, the MOVNO (defined in deliverable D2.1 [content-d2.1]) is the beneficiary of the different infrastructure services offered by the infrastructure management layer.

Figure 11 depicts the high-level architecture of the IML. It is mainly devoted to the converged management (e.g. monitoring, abstraction, or lifecycle management) of resources from different technology domains and, at the same time, it is the element of the architecture responsible for the creation of isolated virtual infrastructures composed of resources coming from each network domain. Furthermore, the infrastructure management layer contains the Cloud Management System (CMS), which is the element used to manage the IT segment. The CMS vertically spans the infrastructure management layer and the network control layer.


Figure 11: High level architecture of Infrastructure Management layer

The bottom part of the IML contains the components (i.e. Drivers) that are responsible for retrieving information and communicating with each one of the domains. Once the information has been acquired, the resources are abstracted and virtualized. Moreover, associated with the physical resource management, this layer contains different transversal functionalities required to ensure that the basic requirements identified in deliverable D2.1 [content-d2.1] are covered. Thus, security, persistence, and resource lifecycle management are also covered within this layer.

Virtualization is the key functionality of the infrastructure management layer. This layer of the architecture provides a cross-domain and cross-technology virtualization solution, which allows the creation and operation of infrastructure slices including subsets of the network connected to the different computational resources providing cloud services.

The virtualization strategy adopted per domain, as well as the cross-domain virtualization strategy, is defined in deliverable D4.1 [CONTENT-D4.1]. In practice, we have defined two different approaches towards virtualization, namely resource-based and service-based virtualization. The functional architecture depicted in Figure 11 supports either implementation.

The internal Virtualization component is the element responsible for the creation and management of the different virtual resources used to compose the required isolated slices. The virtual resources created will hold a set of capabilities (e.g. protocols supported, or actions that can be performed over the resource), which will be accessible from the corresponding network control layer. These capabilities will depend on the virtualization approach selected for each domain.


The following sub-sections contain a summary of the virtualization scheme selected for each technological domain and its corresponding implications, as well as a summary of the converged wireless-wired virtualization approach.

The following table contains a detailed overview of the functionality associated with each functional block contained in Figure 11.

Building Block: Specific domain (WiFi/LTE, OpenFlow, TSON) Driver
Description: These blocks are responsible for the communication with the different domains. These drivers, in fact, implement the different protocols and tools required for communication. Depending on the characteristics of the domains, and the available management interfaces in the hardware devices, the communication will be on a per-domain or per-device basis. The main function of each domain driver is to act as a translator towards the domain (i.e. the hardware devices populating each domain) in order to enable both synchronous (e.g. configuration commands, or information retrieval queries) and asynchronous (e.g. events happening in the domain) bidirectional communication. The driver communicates directly with the resource abstraction block through unified interfaces.

Building Block: Protocol Manager
Description: It is the block responsible for managing the different domain drivers. It provides different functions to manage the drivers (e.g. load driver, start, stop) as well as some generic interfaces to be implemented by each specific domain driver.

Building Block: Resource Abstraction
Description: This functional block is responsible for collecting all the information from each resource populating the different technology segments (through the specific drivers), and then building abstracted models with all the significant information required to perform virtualization. The information model used in this block will be defined in detail in the specific implementation work-package.

Building Block: Resource Manager
Description: This component is the module intended to perform the different management operations over the abstracted resources. From a functional point of view, the resource manager utilizes the different transversal functionalities of the IML layer (lifecycle, persistence, or security) to perform the different management duties within the infrastructure management layer.

Building Block: Virtualization
Description: This building block implements the whole virtualization system, which was introduced in deliverable D4.1 [content-d4.1]. Basically, the virtualization module retrieves information from the abstracted resources and creates virtual slices in order to satisfy the different requests from the operators. This module is responsible for the creation and management of the different virtual resources that compose the virtual infrastructures provided to the operators. The virtualization module also utilizes the transversal functionalities (lifecycle, persistence, or security) to perform the different management operations over the virtual resources and over the virtual infrastructure objects. Further details on the virtualization capabilities are provided later in section 4.3.3 Converged Network Virtualization, since this functional block is the one responsible for realizing the converged virtualization in the CONTENT architecture.

Building Block: Monitoring & Analytics
Description: This block is responsible for collecting all the monitored data through the specific drivers. It will contain per-domain and cross-domain monitoring components, so all the information can be retrieved either on a per-domain basis or on a holistic cross-domain basis. Analytics functionalities within the IML layer are provided by this component. It is not mandatory that the IML layer performs all the actions: some monitoring and analytics information may be forwarded towards the control layer, where the corresponding actions will occur.
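The driver/manager split described in the table can be sketched as a unified driver interface. All class and method names below are illustrative assumptions, not the project's actual API:

```python
from abc import ABC, abstractmethod
from typing import Callable, Dict


class DomainDriver(ABC):
    """Illustrative unified interface implemented by each per-domain driver."""

    @abstractmethod
    def configure(self, resource_id: str, params: dict) -> bool:
        """Synchronous configuration command towards the domain."""

    @abstractmethod
    def query(self, resource_id: str) -> dict:
        """Synchronous information retrieval from the domain."""

    def subscribe(self, callback: Callable[[dict], None]) -> None:
        """Register a callback for asynchronous domain events (e.g. failures),
        pushed up towards the resource abstraction block."""
        self._callback = callback


class ProtocolManager:
    """Loads and hands out the different domain drivers (load, start, stop)."""

    def __init__(self):
        self._drivers: Dict[str, DomainDriver] = {}

    def load_driver(self, domain: str, driver: DomainDriver) -> None:
        self._drivers[domain] = driver

    def driver_for(self, domain: str) -> DomainDriver:
        return self._drivers[domain]
```

A concrete TSON or WiFi/LTE driver would subclass `DomainDriver`, translating these generic calls into the domain-specific protocol, while the resource abstraction block only ever sees the unified interface.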

4.3.1. Optical Network Virtualization.

As already discussed, the optical metro segment technology considered is TSON. TSON is designed and implemented as a novel time-multiplexing metro network solution, offering dynamic connectivity with fine sub-wavelength granularity of bandwidth in a contention-less manner. TSON takes advantage of a flexible and efficient resource allocation mechanism by employing a central resource allocation engine of Route, Wavelength, and Time Assignment (RWTA), offering a contention-free approach. It also provides a novel XML-based interface as a general and common structure for networking information to be exchanged between TSON nodes and the associated network controllers. Further details on the TSON architecture, workflows, and resource allocation algorithms can be found in deliverable D4.1 [content-d4.1] and in [Zervas11], [mains-d2.3].

TSON supports virtualization of the resources using its time-based sub-wavelength switching architecture [Zervas11]. TSON exploits a hierarchical resource definition based on connections, frames, and time slices. In the resource model of TSON, each link is represented by a matrix, with one row per wavelength and each cell in a row representing a time unit of each frame of communication. Thus, TSON enables virtual resources to be created by assigning time slices on the wavelengths as a function of each request. For setting up virtual links, each network slice can be allocated bandwidth as small as 1 time slot per link (100 Mb/s: 100 time slots over 10 Gbps). In case a slice requires aggregated bandwidth, multiple time slot units over one or multiple wavelengths can be allocated. TSON comprises two types of nodes, that is, the edge and the core nodes. The edge nodes use a number of 10GE ports to aggregate the users' traffic and convert it into TSON frames. To virtualize the edge node, we can assign these ports to different VONs and achieve a coarse level of virtualization. As the TSON edge nodes develop, the isolation of users' traffic can also be achieved by using the destination MAC address, or even any set of bytes of the Ethernet header, which can be executed according to the information given by the controller software. For virtualizing the core nodes, which exploit fast PLZT-based switches controlled by FPGA, the ports and the time slices of a port can be allocated to different VONs.
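The per-link resource model described above (one row per wavelength, one cell per time slot) can be sketched as follows. The matrix sizes and the first-fit assignment policy are illustrative assumptions; the actual RWTA engine applies its own allocation logic:

```python
class TSONLink:
    """Per-link resource matrix: one row per wavelength, one cell per time
    slot. Each cell holds the id of the VON it is assigned to, or None."""

    def __init__(self, wavelengths: int, slots_per_frame: int):
        self.matrix = [[None] * slots_per_frame for _ in range(wavelengths)]

    def allocate(self, von_id: str, n_slots: int):
        """First-fit assignment of n_slots free (wavelength, slot) cells to a
        VON; slots may span multiple wavelengths for aggregated bandwidth."""
        cells = []
        for w, row in enumerate(self.matrix):
            for t, owner in enumerate(row):
                if owner is None:
                    cells.append((w, t))
                    if len(cells) == n_slots:
                        # Commit the allocation only once enough cells exist.
                        for cw, ct in cells:
                            self.matrix[cw][ct] = von_id
                        return cells
        raise RuntimeError("not enough free time slots on this link")
```

With 100 slots per frame over a 10 Gb/s wavelength, allocating 150 slots to one VON corresponds to a 15 Gb/s virtual link spread over two wavelengths, matching the aggregated-bandwidth case described above.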

The unique features of TSON pose additional challenges for virtualization. The bursty nature of the signals makes them more vulnerable to the noise of optical components, such as Erbium Doped Fibre Amplifiers (EDFA) and transceivers, and also to other impairments, which limit the reach and OSNR. Besides the requirements on wavelength/spectrum continuity, TSON also requires time slice continuity. During the process of TSON virtualization, available time slots should also be abstracted and exposed as virtual resources. Global synchronization among the network nodes is needed due to the lack of buffers in TSON. Therefore, in each virtual slice the synchronization should also be taken into account and guaranteed.

In order to define the virtualization approach to follow for the TSON metro segment, we analysed the whole set of requirements for both resource-based and service-based approaches. The results, summarized in [CONTENT-D4.1], showed that resource-based virtualization allows full support of the entire set of requirements. This is mainly due to the fact that the virtual resource capabilities exposed towards the network control layer are exactly the same as the capabilities presented by the physical resource itself, just exposed through a unified and vendor-independent interface. Therefore, the operators are able to maintain the finest level of control on top of the virtual infrastructure. On the other hand, the service-based virtualization approach, even if fully supporting most of the requirements, provides a lower degree of control to the virtual operator, resulting in partial support for most of the requirements related to the efficiency of resource utilization, e.g. in terms of dynamic resource re-allocation, converged infrastructure optimization and efficient provisioning of end-to-end services over the integrated infrastructure.

4.3.2. Wireless Network Virtualization.

Wireless access networks considered within the project scenario comprise both WiFi and LTE technologies. Again, a complete description of each technology and an analysis of the state-of-the-art virtualization techniques has been provided in deliverable D4.1 [content-d4.1]. In summary, virtualization of the wireless access network can take place at the physical layer, the data-link layer (with virtual MAC addressing schemes and open source driver manipulation), or at the network layer. In fact, the capabilities at all the OSI layers of the different wireless technologies considered provide the potential to adopt a very flexible and efficient virtualization solution in this network segment.

Following the same approach as that for the optical metro network, we analysed in D4.1 both the resource-based and the service-based approaches in order to figure out which one covers the highest number of requirements. Although initially both models enable support for all the identified requirements, a service-based virtualization implementation needs to consider service granularity, since it may be the case that the resources cannot be shared among different virtual infrastructures due to proprietary MAC protocol implementations.

Following the architecture paradigm met in practice by all the wireless access providers, we separate the access network resources from the wireless backhaul network resources.

Figure 12: Wireless Virtualization

In Figure 12, we can see that L2 and L3 network virtualization can take place using, for example, technologies like OpenFlow, while for access point resource virtualization, technologies like the one proposed in [Bhanage10] can be used: a single wireless access point is used to emulate multiple virtual access points. Clients from different networks associate with the corresponding virtual APs through the use of the same underlying hardware. Isolation across groups of wireless users is provided through airtime control, based on the information provided by a controller running at the AP that is used to compute slice airtime usage per virtual AP.

Furthermore, network virtualization and isolation can be done by using technologies like VLAN tagging on the AP. The traffic of each virtual network is tagged in the AP with a specific VLAN tag.
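The two isolation mechanisms just described (per-virtual-AP airtime accounting and VLAN tagging at the AP) can be sketched together as follows. The deficit-style scheduling policy and all names are illustrative assumptions, not the mechanism of [Bhanage10]:

```python
class VirtualAP:
    """One slice on the physical AP: its VLAN tag and configured airtime share."""

    def __init__(self, ssid: str, vlan_tag: int, airtime_share: float):
        self.ssid = ssid
        self.vlan_tag = vlan_tag            # traffic of this slice is VLAN-tagged at the AP
        self.airtime_share = airtime_share  # fraction of airtime granted to the slice
        self.airtime_used = 0.0


class AirtimeController:
    """Runs at the physical AP: tracks per-slice airtime usage and decides
    which virtual AP may use the medium next."""

    def __init__(self, vaps):
        assert abs(sum(v.airtime_share for v in vaps) - 1.0) < 1e-9
        self.vaps = vaps
        self.total_airtime = 0.0

    def record_transmission(self, ssid: str, duration: float) -> None:
        for v in self.vaps:
            if v.ssid == ssid:
                v.airtime_used += duration
        self.total_airtime += duration

    def next_slice(self) -> VirtualAP:
        """Grant the medium to the slice furthest below its configured share."""
        return min(self.vaps,
                   key=lambda v: v.airtime_used - v.airtime_share * self.total_airtime)
```

A slice that has consumed more than its share falls behind in the scheduling order, which is how airtime-based isolation keeps one virtual network from starving the others.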

Because of the varying channel conditions and the unpredictable behaviour of the wireless medium, in practice it is very difficult to provide true performance guarantees even without applying virtualization techniques. The fact that VLAN technology is supported in both the wireless backhaul and the access networks gives us the ability as designers, besides providing network isolation, to build QoS mechanisms at various points of the architecture and the potential to sufficiently achieve performance guarantees for each virtual network in terms of delay, throughput, etc.

4.3.3. Converged Network Virtualization

The CONTENT cross-domain virtualization mainly aims at provisioning virtual infrastructures composed of heterogeneous resources. In [CONTENT-D4.1] we proposed the high-level converged virtualization approach depicted in Figure 13.

Figure 13: Converged virtualization approach (Source: [content-d4.1])

From the architectural point of view, the virtualization solution is completely integrated into the technological drivers and the virtualization building block, which performs the virtualization over the different domains. The cross-domain virtualization building block is responsible for acquiring information from each domain and performing the holistic mapping in order to create the different virtual resources and allocate them over the physical infrastructures. As explained before, the IT resources are not completely managed by the virtualization system; instead, we rely on the capabilities enabled by the CMS. Once the virtual resources are created, and the capabilities are attached to them, the network control layer is responsible for the operation and monitoring of the resources.

Initial results on the implications of using such an approach to perform the virtual infrastructure planning (or embedding) process within the CONTENT environment are provided in the next sections.

4.4. Virtual Infrastructure Control Layer

The previous section has analysed the CONTENT solutions for cross-technology virtualization, i.e. the creation and delivery of integrated Virtual Infrastructures composed of infrastructure slices including IT as well as network domains, spanning from the wireless access to the edge of the metro network. This section focuses on the operation of these converged Virtual Infrastructures, proposing an architecture that allows a Mobile Optical Virtual Network Operator (MOVNO) to control and manage the entire set of heterogeneous resources through a unified platform.

The final objective of the Virtual Infrastructure network control and management procedures is the efficient, dynamic and automated provisioning of end-to-end connectivity services able to support the delivery of cloud services to mobile users with the required QoS guarantees. From the point of view of a pure MOVNO (see section 3.2), it is fundamental to operate the rented Virtual Infrastructure optimizing its overall utilization across the different segments (i.e. from the wireless to the metro domain) and, at the same time, reducing the amount and the complexity of the manual configurations. On the other hand, according to the CONTENT vision we need to consider the additional point of view of a (cloud) Service Provider (SP), who is interested in delivering added-value cloud services to its mobile customers. In other terms, a CONTENT-enabled cloud SP can propose new market offers where cloud services are delivered to the customers integrating dedicated wireless network connectivity, with the QoS characteristics required to guarantee a high level of Quality of Experience (QoE) perceived by the cloud users.

Starting from these considerations, the CONTENT framework proposes two progressive and cooperating architectural layers (see Figure 3) operating on top of the converged Virtual Infrastructure exposed by the Infrastructure Management Layer presented in the previous section.

The lower layer, called Virtual Infrastructure Network Control Layer and mainly associated with the MOVNO actions, is dedicated to the network control and management of the Virtual Infrastructure. It provides the functions required for the delivery of end-to-end connectivity spanning from the wireless to the metro network and the efficient utilization of the global, multi-technology virtual network. This layer is described in detail in this section.

On top of the Virtual Infrastructure Network Control Layer, the Service Orchestration Layer is in charge of composing and delivering cloud services, properly integrated with dedicated wireless connectivity services, to the mobile end-users. The Service Orchestration Layer, described in detail in section 4.5, is mainly associated with the cloud SP actions and, consequently, is also responsible for the efficient coordination of the cloud resources available in the Virtual Infrastructure.


Figure 14: High level architecture of Virtual Infrastructure control layer

One of the fundamental characteristics of the CONTENT architecture is the cooperation between the Virtual Infrastructure Network Control Layer and the Service Orchestration Layer. This cooperation is the key point to guarantee:

• the consistent and converged management of the entire Virtual Infrastructure, from the network to the cloud domains, and

• the fulfilment of the dynamic requirements of cloud services, considered as complex but unified entities characterized by multiple types of resources (storage, computing, network) with their own interdependencies.

It should be noted that the layering approach adopted in the CONTENT architecture does not prevent any possible composition of business roles in single actors. For example, the MOVNO and the cloud SP operating the Virtual Infrastructure Network Control Layer and the Service Orchestration Layer respectively can be either different actors or merged in a single actor, as also explained in section 4.1.2. In the former case, the interface between the Virtual Infrastructure Network Control Layer and the Service Orchestration Layer will connect two different administrative entities, and the level of information exchanged among the layers may be more restricted and abstracted in order to preserve confidentiality and avoid the disclosure of internal details. However, the cloud-network cooperation concepts remain valid in both scenarios.

The Virtual Infrastructure Network Control Layer is based on the SDN paradigm exploiting the concept of network programmability. It is internally organized in a hierarchical structure of network functions, where the converged control of the Virtual Infrastructure relies on “enhanced” network applications built over elementary services of the optical and wireless domain.


Following this approach, at the bottom level, directly on top of the virtualization layer, basic control plane functions provide simple services like resource configuration and monitoring, intra-domain connection setup, etc. These functions are provided through an SDN controller, potentially distributed in several interacting controllers, which manages the entire infrastructure but operates with separated per-domain scopes through specialized modules able to deal with the specific constraints of the underlying technologies. Sections 4.4.1 and 4.4.2 describe possible approaches to provide these basic control mechanisms for optical and wireless connectivity respectively. It should be noted that in the particular case of CONTENT this architectural concept is applied to virtual infrastructure domains characterized by specific technologies (e.g. TSON for the metro network). However, the overall approach can be easily extended to multi-layer infrastructures based on different types of technologies (e.g. packet switching), through the use of SDN controllers able to manage the given domains.

In principle, the SDN controller can be distributed over multiple entities responsible for specific domains or even just part of them. This option allows the architecture to improve its scalability so that it can be easily applied in larger environments. It should be noted that the distribution of the controller functions across multiple entities requires protocols regulating the interactions among the different SDN controllers; the BGP protocol may be adopted for this purpose. For simplicity, in the following sections we assume a common SDN controller operating on top of the entire Virtual Infrastructure.

The services at the bottom level expose interfaces that allow their behaviour to be customized and adapted, so that they can be easily used as elementary bricks to compose and develop more complex network functions on top. Moreover, the low-level functions can also be disabled and fully replaced by corresponding, more powerful, high-level functions. The CONTENT architecture can accommodate different types of enhanced network applications, which may cooperate together for a more efficient management of the heterogeneous Virtual Infrastructure.

A first category is dedicated to the provisioning of end-to-end, multi-layer and cloud-oriented connectivity services. These functions may operate on a high-level vision of the Virtual Infrastructure topology, abstracting the details of each technology domain, and they usually expose rich APIs towards the Service Orchestration Layer to enable the joint composition of network and cloud services. Another category of network applications is dedicated to the internal management and automated re-optimization of the virtual network infrastructure. They do not usually expose any operational interface towards the upper layers and may be specialized to work either over single network segments or with a cross-domain scope. The CONTENT enhanced network functions and their role in the provisioning and operation of end-to-end network services are further described in section 4.4.3.

4.4.1. Optical metro connectivity

As introduced in the previous paragraphs, the Virtual Infrastructure Network Control Layer is based on a double level of network functions, where the lower level provides basic connectivity services in the mobile access and metro domains, while the upper level exploits these services to offer end-to-end network connectivity and optimization of the entire, multi-layer infrastructure. This section describes the low-level control functions related to the metro domain, showing their positioning in the SDN-based architecture of the whole Virtual Infrastructure Network Control Layer.

In the CONTENT architecture, the basic per-domain network functions are provided by the internal procedures of an SDN controller, properly extended to deal with the specific characteristics of the reference domains. Figure 15 shows the high-level architecture of the SDN controller, focusing on the functions required to operate the virtual metro network based on TSON technology.

The internal SDN controller functions work on an abstract version of the TSON devices, exported through a Resource Control Layer that enables the configuration and monitoring of the TSON virtual resources (single devices or sub-domains of the metro network). The abstract TSON description is expressed through a unified information model, common to all the SDN controller network functions. The translation between the possibly heterogeneous information models adopted at the SDN controller Southbound interface and the single information model within the SDN controller is a task of the Resource Control Layer.

Figure 15: Low-level control functions in TSON metro domain

The internal SDN controller functions provide basic network services, specialized for the TSON domain, and they expose a set of primitives at the SDN controller Northbound interface in order to allow the enhanced programmability of the network.

The SDN controller must provide at least the following services:

• TSON topology service – maintains an updated view of the entire metro network, adopting the common information model of the SDN controller to describe the TSON topology and its resource availability. The information model must be designed to capture all the TSON characteristics and constraints relevant for the correct working of the other network functions (e.g. time slot availability, etc.), as well as metrics and policies configured by the operator. The TSON topology manager builds its vision of the metro network based on the information received from the virtual infrastructure through the Resource Control Layer (e.g. new nodes, link failures, etc.) and keeps it updated based on further data retrieved from other network functions (e.g. established connections from the Connection Manager or computed paths from the Path Computation function). The level of detail maintained by the TSON topology manager can be limited to the availability and capability of the network resources, or can be extended to partial or full knowledge of the established connections.

• TSON statistics service – collects and maintains monitoring and analytics information about the underlying virtual resources and the network services. The main source of information related to the TSON resource statistics and service performances is the Resource Control Layer, but further data may also be acquired from other network functions (e.g. time required for path computation and network configuration for a given connection, number of failed service requests, etc.).

• Connection service – responsible for setup, tear-down, modification and maintenance of intra-domain connection services. In a TSON network, connection provisioning is usually performed in pro-active mode, as a consequence of an explicit request from the upper layers. The connection manager, based on the path computation results received from the Path Computation function, configures the cross-connections on the TSON virtual nodes using the methods provided by the Resource Control Layer. Depending on the characteristics of the network services, the connection manager may also provide recovery functions, based on protection or restoration procedures.

• Path computation service (base) – responsible for path computation procedures; operates on the network topology provided by the TSON topology service. The internal path computation function operates in stateless mode and implements basic algorithms (e.g. Dijkstra) that apply common objective functions, like shortest path or minimum TE cost, and are able to implement elementary constraints like the exclusion of some nodes/links or a given requirement in terms of bandwidth. On the other hand, more advanced features, like stateful mechanisms, concurrent path computations, algorithms for complex metrics, objective functions, constraints and wavelength and TSON time slot assignment, are implemented in the upper-layer enhanced functions. It should be noted that if an external PCE is available for multiple control instances (e.g. GMPLS and OF domains), this basic path computation service may be reduced to a mediation function for external queries. Also, the SDN controller core logic could directly interact with the external PCE, e.g. using standard protocols like PCEP.
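The base path computation service described above, Dijkstra with elementary constraints such as node exclusion and a bandwidth requirement, can be sketched as follows. The topology representation is an illustrative assumption, not the SDN controller's actual information model:

```python
import heapq


def shortest_path(topology, src, dst, bandwidth=0, excluded=frozenset()):
    """Dijkstra over a topology dict {node: {neighbour: (cost, free_bw)}},
    pruning links with insufficient free bandwidth and excluded nodes.
    Returns (path, cost), or (None, inf) if the destination is unreachable."""
    dist = {src: 0}
    prev = {}
    queue = [(0, src)]
    visited = set()
    while queue:
        d, node = heapq.heappop(queue)
        if node == dst:
            # Reconstruct the path by walking the predecessor map backwards.
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return list(reversed(path)), d
        if node in visited:
            continue
        visited.add(node)
        for nbr, (cost, free_bw) in topology.get(node, {}).items():
            # Elementary constraints: excluded nodes and bandwidth pruning.
            if nbr in excluded or free_bw < bandwidth or nbr in visited:
                continue
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(queue, (nd, nbr))
    return None, float("inf")
```

The enhanced upper-layer functions would replace this with stateful, concurrent computations that also assign wavelengths and TSON time slots along the chosen path.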

In general, the basic network functions need to interact and cooperate to provide their services. For example, the Connection service needs to use the Path computation service in order to define the virtual nodes and the ports for the configuration of the cross-connections during the setup of an intra-domain connection. Similarly, the path computation algorithms operate on the network information exposed by the TSON topology service to calculate the virtual paths. On the other hand, internal changes or notifications received at a single network function (e.g. a link failure notified to the TSON topology service) may trigger further actions in other network functions (e.g. restoration procedures at the Connection service). The Core services provide a set of common platform and transversal "utility" services in support of the interaction between the SDN controller internal functions.

It should be noted that the SDN controller design does not impose the usage of a specific protocol to control the underlying virtual infrastructure at the Southbound interface. The Resource Control Layer allows interaction with TSON virtual devices exposing different types of interfaces through the implementation of specific drivers. An example is the OpenFlow [OPENFLOW] protocol, which must be properly extended to model the TSON resources in terms of wavelengths and time slots. Other possible protocols that may be used for the configuration of the virtual resources could be XMPP [XMPP] or NETCONF [NetCONF]. On the other hand, following the resource abstraction approach, the SDN controller can also operate on top of an entire domain. An example is a legacy TSON domain operated through a distributed GMPLS and PCE control plane (e.g. the MAINS Control Plane presented in deliverable D4.1 [CONTENT-D41] as a possible candidate for the metro network control). In this case the SDN controller will rely on the network services natively offered by the GMPLS control plane, accessed through a Management to Network Interface (MNI) or a User to Network Interface (UNI) with a specific driver. Moreover, the abstraction functionalities will allow the SDN controller to operate the entire legacy domain as a single logical device characterized by a given connectivity matrix. This option may be further exploited in a mixed scenario where the MOVNO owns a part of the physical metro network infrastructure, originally operated with a legacy control plane, and wants to extend this infrastructure by renting an additional virtual portion. In this case, it is fundamental for the MOVNO to control the entire integrated infrastructure, composed of virtual resources supporting the OpenFlow protocol on one side and a physical legacy domain on the other, through a single unified platform.

4.4.2. Wireless connectivity

This section describes the low level control functions related to the wireless domain, showing their positioning in the SDN-based architecture of the whole Virtual Infrastructure Network Control Layer.

Figure 16 shows the high-level architecture of the SDN controller, focusing on the functions required to operate the virtual LTE/WiFi wireless network, including both the radio access and the wireless backhaul segments. The internal SDN controller functions work on an abstract version of the wireless network devices, exported through a Resource Control Layer that enables control, configuration and monitoring of the virtual wireless resources (nodes, frequencies, switches).


Figure 16: Low-level control functions in the wireless domain

Following the same design paradigm adopted for the metro domain, the abstract resource description is expressed through a unified information model, while the translation between the possibly heterogeneous information models adopted at the SDN controller Southbound interface and the single information model within the SDN controller is a task of the Resource Control Layer.

The internal SDN controller functions provide basic network services, specialized for the wireless domain, and they expose a set of primitives at the SDN controller Northbound interface.

In the wireless domain, the SDN controller must provide at least the following services:

• Wireless topology service

Similarly to the TSON topology service, this service maintains an updated view of the entire wireless network, adopting the common information model of the SDN controller to describe the wireless topology and its resource availability.

The wireless topology manager builds its vision of the wireless network based on the information received from the virtual infrastructure through a Resource Control Layer that is synchronized with information received from the IML (which in the case of the NITOS testbed retrieves it through SFA / mySlice), and keeps that view updated based on further data retrieved from other network functions or from the IML itself through asynchronous notifications.


The level of detail maintained by the wireless topology manager can be limited to the availability and capability of the network resources, the wireless spectrum utilization and the wireless backhaul network capabilities, or it can be extended to partial or full knowledge of the internals of all the virtual wireless networks.

• Wireless statistics/measurement service

This service collects and maintains monitoring information on the underlying virtual resources and the wireless network services. Runtime information and measurements will be notified to the SDN controller from the IML, which will originally collect them using the OML/OMF framework and the OMF Measurement Plane [OMF10], while additional tools will be used (or created) to collect relevant information on the status of the virtualized wireless access and backhaul networks. The collection of monitoring information for the wireless statistics service will require the extension, where necessary, of the OMF Measurement Plane, which includes the OMF tools to instrument an experiment and the corresponding OMF entities to collect and store measured data.

• Wireless Connection service

This service is responsible for the setup, tear-down, modification and maintenance of intra-domain connection services. In the wireless domain all the procedures of the connection service are handled by the SFA/OMF framework. At the lower layers, the SFA framework provides resource reservation, while the Control Plane in OMF is dedicated to execution control (e.g. defining traffic patterns, thresholds or automated actions), and the Management Plane is responsible for managing the infrastructure in real time (provisioning and configuration of the reserved resources). For example, recovery functions can be defined through the OMF domain-specific language OEDL.

The IML will interact with the SFA and OMF, wrapping their functionalities and exposing their abstraction towards the SDN controller. Following this approach, the wireless connection service will be able to operate, indirectly, on the OMF commands and, when needed, to reconfigure the OF-based wireless backhaul network using the unified services provided by the IML.
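The wrapping role of the IML described above can be sketched as follows. This is a hypothetical illustration: the IML client and its method names (`reserve_slice`, `configure_nodes`, `configure_backhaul_flow`) are assumptions standing in for the real SFA/OMF-backed interface.

```python
class WirelessConnectionService:
    """Illustrative wireless connection service: translates an abstract
    connection request into the reservation/execution split described in
    the text, operating indirectly on SFA/OMF through the IML. All IML
    method names here are hypothetical."""
    def __init__(self, iml_client):
        self.iml = iml_client
    def setup(self, request):
        # 1. resource reservation (SFA layer, reached via the IML)
        slice_id = self.iml.reserve_slice(request['nodes'])
        # 2. provisioning/configuration (OMF Management Plane, via the IML)
        self.iml.configure_nodes(slice_id, request['config'])
        # 3. optional reconfiguration of the OF-based wireless backhaul
        for flow in request.get('backhaul_flows', []):
            self.iml.configure_backhaul_flow(slice_id, flow)
        return slice_id
```

The ordering (reserve, then configure, then backhaul reconfiguration) follows the SFA/OMF split described above.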

These basic network functions need to interact through well-defined message exchanges and cooperate to provide their services and promote agility. In principle the core services and their associated interfaces must remain stable, while still allowing reconfiguration to meet changing needs and provide the necessary functionality. For example, the Connection service must interact with both Topology services in order to define the virtual resources during the setup of an intra-domain connection. Also, internal changes or notifications received at a single network function (e.g. a link failure notification for the layer 2 domain, or a vertical/horizontal handover notification) may trigger further actions in other network functions.


We note that at this stage the SDN controller design is protocol agnostic and does not impose the usage of a specific protocol to control the underlying virtual infrastructure at the southbound interface. The focus stays on the functionality offered by the Resource Control Layer and the interaction with the virtual wireless network, exposing different types of interfaces through the implementation of specific drivers. In the picture we have assumed an OpenFlow driver to control the resources belonging to the wireless backhaul domain and a generic wireless driver for the radio services. This driver may directly use the interface exposed by the IML (e.g. a web-service based interface) without requiring any further mediator.

4.4.3. Cross-domain network service provisioning

The convergence between the wireless and the TSON domains is enabled through enhanced network functions operating on top of the basic services offered by the centralized SDN controller. While the low level control functions have mainly a per-domain scope within the mobile access and the metro networks respectively, the upper layer functions provide all the procedures for end-to-end service provisioning in a multi-layer scenario allowing the MOVNO to operate the virtual network infrastructure as a whole.

Figure 17: Enhanced control functions in TSON metro domain

The CONTENT enhanced network functions can be categorized in two main groups:

• Functions that provide mechanisms to create, manipulate and monitor cross-domain and multi-layer connectivity services. These functions can be invoked from the upper Service Orchestration Layer during the deployment, provisioning, runtime and termination of cloud services and can be properly customized for specific cloud-oriented requirements. They are a key enabler for the efficient cooperation between the Virtual Infrastructure Network Control Layer and the Service Orchestration Layer, exposing an advanced set of network services (e.g. network quotations, monitoring information on end-to-end flow performance, application-based network coordination), beyond the basic ones offered through the Northbound interface of the SDN controller.


• Functions related to the internal management of the virtual network infrastructure, more oriented to the optimization of the overall network resources independently of the specific cloud services running on top of them, but considering the global traffic and the whole network performance or utilization. These functions may operate in support of the previous ones (e.g. path computation functions for TSON connections and end-to-end service provisioning), or work as stand-alone services dealing with specific aspects of network management. Some of these functions are still characterized by a per-domain scope (e.g. the management of multi-technology handovers in the mobile access network, or the dynamic re-adaptation of an overlay virtual topology in the TSON network to accommodate traffic flows with lower granularities). On the other hand, some functions are clearly oriented to the convergence between the two domains, like the mapping of the mobile flows into TSON connections.

Figure 17 shows several enhanced network functions. Not all of them are mandatory in the CONTENT architecture (in the picture the optional applications are represented in grey boxes), since the main workflows for the provisioning of integrated cloud and network services in mobile environments remain valid also when some functions are disabled (mainly the services dedicated to the automated re-optimization of the network). However, the concurrent and cooperative actions of the entire application set make it possible to obtain better performance in the utilization of the whole infrastructure. Moreover, it should be noted that the proposed list is not exhaustive, since the MOVNO could decide to deploy new functions to improve a specific aspect of the network management or to integrate new services.

The following table (Table 4) provides an overview of the network functions working over the SDN controller, together with the operative APIs they expose towards the Service Orchestration Layer.

• End-to-end provisioning
Description: Service for the on-demand setup, tear-down and modification of end-to-end, multi-layer and cross-domain connectivity services with a certain level of QoS guarantees, including service recovery. These services can be enhanced with cloud-oriented features, e.g. scheduling, network quotations, cross-layer recovery escalation.
Operative APIs (towards the Service Orchestration Layer): Requests for network service setup, tear-down and modification with specification of QoS parameters; requests for network service quotations.

• End-to-end monitoring
Description: Service for collecting monitoring information about the status and the performance of the established end-to-end network services.
Operative APIs: Polling queries or subscriptions for monitoring data.

• Application-aware network controller
Description: Service for the coordination of the network according to specific cloud service requirements and dynamic network conditions and performance. It can provide enhanced services, usually in cooperation with the other functions, that jointly process cloud and network parameters. Examples are the delegation of the data centre choice during cloud service deployment, or triggers for potential cloud service modifications due to changes in network conditions.
Operative APIs: Bi-directional requests and notifications for cross-layer information and services, e.g. requests for data centre selection, notifications of network failures, congestion or handovers of mobile users, etc.

• Application-oriented network queries service
Description: Service to provide network-related information (e.g. topology or capacity of links, potentially of virtual overlay networks) that can be useful for the cloud+net service orchestration decisions to be taken at the upper layer. This information can be exposed to external entities through an extended version of the ALTO protocol.
Operative APIs: Queries for network-related information.

• Mobility manager
Description: Service for the management of handovers among different technologies (e.g. Wi-Fi and LTE) and reconfiguration of the mobile access and backhaul network.
Operative APIs: N.A.

• End-to-end path computation
Description: Service for the computation of end-to-end, multi-layer and cross-domain paths, optionally in stateful mode and with global concurrent optimization functions. In the CONTENT scenario, path computation may also be enhanced with cloud-aware algorithms (e.g. for global network and IT optimization).
Operative APIs: N.A.

• Wireless/TSON orchestration
Description: Service for the efficient grooming and automated re-grooming of mobile traffic flows in TSON lightpaths.
Operative APIs: N.A.

• TSON Virtual Topology Manager
Description: Service for the automated and flexible management of virtual overlay topologies in the TSON domain to efficiently accommodate dynamic client-layer traffic flows in the multi-layer environment.
Operative APIs: N.A.

• TSON Path Computation Element (PCE) + Sub-wavelength Label Allocation Engine
Description: Service for path computation and resource assignment in TSON domains; it operates on the network topology provided by the TSON topology basic service. The TSON path computation function calculates not only nodes and links, but adopts an internal Sub-wavelength Label Allocation Engine (SLAE) dedicated to TSON slot assignment. In fact, given the characteristics of the TSON technology, wavelength and time slots must be selected on each link in order to guarantee their continuity along the entire path. The TSON slot allocation function takes decisions about wavelength and time slot assignments based on an internal view of available and busy slots.
Operative APIs: N.A.

Table 4: High-level network functions
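The end-to-end provisioning API listed above could receive requests shaped roughly as below. This is a hypothetical payload sketch: the field names are illustrative assumptions, not a normative CONTENT interface definition.

```python
# Hypothetical request towards the end-to-end provisioning function;
# all field names are illustrative, not a defined CONTENT API.
e2e_request = {
    "service_id": "svc-001",
    "endpoints": {"src": "ue-attachment-point-1", "dst": "dc-gateway-2"},
    "qos": {"bandwidth_mbps": 100, "max_delay_ms": 50, "jitter_ms": 5},
    "recovery": "cross-layer-escalation",
    "schedule": {"start": "now", "duration_s": 3600},
}

def validate_request(req):
    """Minimal sanity check an orchestrator could run before submitting
    a provisioning request over the Northbound interface."""
    required = {"service_id", "endpoints", "qos"}
    missing = required - req.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return True
```

A quotation request would carry the same endpoints and QoS block without triggering allocation.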

4.5. Converged service orchestration

The Service Orchestration Layer is responsible for the instantiation of end-to-end cloud services for mobile users and their management at run-time. The interaction with the Virtual Infrastructure Network Control Layer makes it possible to integrate into the cloud services both the connectivity for the access of mobile users with the desired QoS and, on the other hand, the interconnection among the computing and storage resources belonging to the same service instance and placed in different data centers.

The high-level internal architecture of the Service Orchestration Layer is shown in Figure 18, which highlights the two main functional components: the Cloud Service Lifecycle Manager and the Inter-Data-Centre Resource Manager.


The Cloud Service Lifecycle Manager is responsible for the coordination of the cloud service during its different phases, from creation to termination, and it is the entry point for the processing of new service requests. On the other hand, the management of the resources (jointly cloud and network) associated to the services is handled by the Inter-Data-Centre Resource Manager. The two functional entities cooperate along the overall service stages, in order to guarantee the continuous matching between service requirements and allocated resources, in compliance with the SLAs established for the given customer and independently of potential degradation of the underlying infrastructure performance.

Figure 18: Service Orchestration Layer high-level architecture

4.5.1. Deployment and provisioning of cloud and network services

A traditional cloud service specification includes details of the required IT resources and their inter-dependencies (e.g. amount of storage or CPUs, type and version of operating system or software applications). In CONTENT, it is enhanced with network parameters that describe the desired connectivity between the different components of the cloud service and between them and the user itself. This description may include QoS parameters like bandwidth, delay and jitter, as well as requirements in terms of service availability (with an impact on the recovery strategy to be adopted at the network level). It may also define different levels of network QoS and cloud resources depending on the type of wireless connectivity available to the user during the service duration. More generally, the cloud service description may specify automated elasticity rules that identify the modifications to be applied to the cloud service topology under certain conditions (e.g. to react to changes in the traffic load).
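An enhanced service description of this kind could be sketched as below. The schema is purely illustrative (field names such as `access_profiles` and `elasticity` are assumptions), intended only to show how IT, connectivity and access-dependent QoS parameters sit together in one specification.

```python
# Illustrative CONTENT-style cloud service description; the field names
# are assumptions, not a normative schema from the deliverable.
service_spec = {
    "vms": [{"name": "app", "cpus": 4, "storage_gb": 100, "os": "linux"}],
    "connectivity": {
        "user_to_dc": {"bandwidth_mbps": 20, "max_delay_ms": 80, "jitter_ms": 10},
        "inter_dc": {"bandwidth_mbps": 200, "availability": 0.999},
    },
    # different QoS levels depending on the wireless access in use
    "access_profiles": {"lte": {"bandwidth_mbps": 20}, "wifi": {"bandwidth_mbps": 8}},
    # automated elasticity rules evaluated at runtime
    "elasticity": [{"when": "load > 0.8", "action": "add_vm", "count": 1}],
}

def qos_for_access(spec, access):
    """Select the network profile matching the user's current wireless
    access, falling back to the LTE profile when the access is unknown."""
    return spec["access_profiles"].get(access, spec["access_profiles"]["lte"])
```

Selecting the profile per access technology reflects the access-dependent QoS levels described above.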

The Cloud Service Lifecycle Manager elaborates the requests received from the cloud customers and identifies the sequence and the characteristics of the atomic resources composing the end-to-end service, both in terms of computing/storage and connectivity services. During the deployment and provisioning phase it interacts with the Inter-Data-Centre Resource Manager to request the allocation of the resources in the different domains. Therefore, the Inter-Data-Centre Resource Manager can be considered as the decision point for the decomposition and mapping of the different IT resources among the available data centers. These decisions are taken in cooperation with the Virtual Infrastructure Network Control Layer in order to achieve a better utilization of the whole infrastructure, from the network to the cloud domains, while respecting the QoS requirements of the service.

The CONTENT architecture supports a variety of approaches for taking efficient decisions about the placement of VMs and storage. A first option can be the combined usage of high-level cloud and network information maintained at the Inter-Data-Centre Resource Manager. On the cloud side, this information describes the capabilities and availability of the registered data centers, obtained through queries to their Cloud Management Systems. On the network side, it could provide an abstracted view of the network topology, e.g. in terms of capacity and price of available or potential connectivity services among a selected set of end-points (usually the data centers). The information can be acquired through queries to a dedicated service (e.g. an ALTO server) or by sending requests to the underlying end-to-end network service provisioning function for quotations of connections between explicit pairs of end-points. Another possible approach is the usage of the anycast services that may be provided by an application-aware network controller. In this case the Inter-Data-Centre Resource Manager can pre-select a set of suitable data centers and delegate the final choice to this controller; further application-related characteristics may be specified in the request, in order to allow for more efficient decisions in the cross-layer perspective.
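The first option above, combining cloud availability with network quotations, can be sketched as follows. The input shapes and the scoring weights are illustrative assumptions, not the CONTENT decision algorithm.

```python
def select_data_center(dcs, quotations, required_cpus):
    """Choose a data centre by combining cloud-side availability (free CPUs)
    with the network-side quotation (price of connectivity to that DC).
    Input shapes and the weighting are illustrative assumptions.
    dcs:        {dc_name: {'free_cpus': int}}
    quotations: {dc_name: {'price': float, 'capacity_mbps': float}}
    """
    candidates = []
    for name, dc in dcs.items():
        if dc['free_cpus'] < required_cpus or name not in quotations:
            continue  # pre-selection: keep only feasible data centres
        # lower network price and more CPU headroom rank higher (lower score)
        score = quotations[name]['price'] - 0.01 * (dc['free_cpus'] - required_cpus)
        candidates.append((score, name))
    return min(candidates)[1] if candidates else None
```

In the anycast variant described above, this final scoring step would instead be delegated to the application-aware network controller.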

After the decision phase, the Inter-Data-Centre Resource Manager requests the allocation and activation of the resources. The inter-DC and user-to-DC connectivity services are requested through the interface exposed by the end-to-end network provisioning function of the Virtual Infrastructure Network Control Layer, while the creation/activation of VM and storage resources is requested through the API exposed by the Cloud Management System responsible for the involved data center(s). A similar procedure is followed during the termination phase, to release the whole set of resources.

4.5.2. Dynamic service operation over virtual infrastructures

Cloud services are dynamic entities that evolve during their runtime, with changes in the characteristics of the reserved IT resources and in the traffic load generated by the applications running on the distributed VMs. Moreover, the mobile nature of the CONTENT cloud users may lead to further changes in the characteristics of the network access available to the end-users and in the profile of the user-to-DC traffic. In this scenario, it is fundamental to operate the virtual infrastructure with flexibility, so that the whole composition of resources allocated for a given service instance can be automatically re-adapted within short time intervals.

The Cloud Service Lifecycle Manager is the entity responsible for taking decisions about the dynamic modifications of a service instance at runtime. In particular, it could request the upgrade or downgrade of computing/storage/network resources according to:

• an explicit user request for a service modification, or

• elasticity rules defined in the initial service specification, or

• a particular behavior of the network (e.g. a failure or a performance degradation) or of the end user (e.g. an inter-technology handover) that suggests modifying the cloud service profile to obtain better QoE performance in the new conditions.

On the other hand, the Inter-Data-Centre Resource Manager may decide to modify the location of the cloud resources (e.g. VM migrations), without changing the nature of the cloud service itself. This may be due to purely cloud-oriented considerations, like load-balancing between different data centers, or be a consequence of triggers generated by network events, like the degradation of the connectivity between the user and a specific data center.

It is clear that, as in the deployment/provisioning phase, the efficiency of the decisions on service modification can benefit from the cooperation between the Service Orchestration Layer and the Virtual Infrastructure Network Control Layer. Therefore, the exchange of monitoring information related to network and service events is strongly recommended during the entire operation of a cloud service. This type of data can be provided asynchronously by the end-to-end monitoring function, especially in terms of failures or degradation of the established network services, or may be collected with queries to the dedicated service. Moreover, the application-aware network controller may support coordinated cross-layer actions where automated triggers are sent towards the upper orchestration layer to notify specific network events which are expected to have a direct impact on the cloud service behavior. The enforcement of the decided changes is again performed by the Inter-Data-Centre Resource Manager, under the coordination of the Cloud Service Lifecycle Manager, and using the interfaces exposed by the End-to-end Network Service Provisioning function for the network side and by the Cloud Management System of the data centers for the cloud side.

5. CONTENT architecture evaluation

5.1. Modelling and simulations

5.1.1. Modeling of the Physical Infrastructure

As already discussed, the proposed infrastructure model aims at providing a technology platform interconnecting geographically distributed computational resources that can support a variety of cloud and mobile cloud services. The proposed architecture comprises an advanced heterogeneous multi-technology network infrastructure integrating optical metro and wireless access network domains interconnecting data centres, and adopts the concept of physical resource virtualization across the technology domains involved (Figure 2). The proposed solution relies on a common DC infrastructure fully converged with the broadband wireless access and the optical metro network. The technology domains comprising the converged network infrastructure are described below, with emphasis on their QoS control mechanisms.

The Wireless Access Network

The wireless domain of the physical infrastructure (PI) assumes a heterogeneous topology comprising a cellular LTE system for the wireless access part and a backhauling solution to interconnect the wireless access network with the edge nodes of the optical metro network. A key characteristic of LTE networks is that they can support a wide range of services and performance requirements (e.g. real and non-real time streaming, conversational and interactive services with low or high delay, as well as background traffic) in a wide range of environments such as indoor, urban and rural. As discussed in Section 4.2.2, in order to support the various types of services with well-defined end-to-end QoS characteristics, 3GPP has introduced the virtual concept of the bearer, which enables routing of the IP traffic from a gateway of the PDN to the UE [ALCATEL]. In this section we discuss this mechanism in more detail, as end-to-end QoS is a key challenge that the project is addressing and will be carefully considered when evaluating its performance. More specifically, the QoS mechanism of LTE is illustrated in Figure 19, where it is seen that three different types of bearers are combined¹ to form a virtual path, called the EPS bearer, connecting the UE with the P-GW. The EPS bearer is characterized by specific QoS attributes. These attributes are summarized in Table 5, while the various standardized QoS Class Indicators (QCI) are provided in Table 6.

Figure 19: LTE Bearer Architecture (Source: 3GPP TS 36.300). The end-to-end EPS bearer combines the Radio Bearer (UE to eNodeB), the S1 Bearer (eNodeB to S-GW) and the S5/S8 Bearer (S-GW to P-GW), complemented by the External Bearer towards the TSON domain; the Radio and S1 bearers together form the E-RAB bearer.

Once information regarding the QoS profiles for the various types of services becomes available to the network operator, the Connection Admission Control procedure is performed. This procedure ensures that there are sufficient resources in the radio network to satisfy the end users' service requirements. At this point it should be mentioned that, in order to support a large variety of services, the concept of priority queuing has been proposed as an effective solution to meet the service-related latency limitations [ADVA]. This procedure is illustrated in the left part of Figure 20. Specifically, in the first step, the packet classifier enables traffic segregation based on the available service classes, whereas in the second step, packets are buffered at the corresponding priority queues. An additional issue to be taken into account is that the QoS profile of each EPS bearer is not visible to the transport network [Tellab]. Therefore, as discussed in [Ekström], in order to guarantee the required service level in the metro/core network, the QCI values are mapped to the headers of the packets transmitted from the eNodeB to the TSON edge nodes.

¹ The radio bearer between the UE and the eNodeB, the S1 bearer between the eNodeB and the Serving Gateway (S-GW) and the S5/S8 bearer between the S-GW and the Packet Data Network Gateway (P-GW).
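The QCI-to-header mapping at the eNodeB can be sketched as a simple marking table. The specific DSCP code points below are illustrative assumptions (operators define their own mapping); only the mechanism of copying the bearer's QCI into a transport-visible field follows the text.

```python
# Illustrative QCI -> DSCP marking table; the concrete code points are an
# assumption, since no single standardized QCI-to-DSCP mapping exists.
QCI_TO_DSCP = {
    1: 46,  # conversational voice -> EF
    2: 34,  # conversational video -> AF41
    3: 32,  # real-time gaming     -> CS4
    4: 26,  # non-conversational video -> AF31
    5: 40,  # IMS signalling       -> CS5
    6: 18, 7: 14, 8: 10,
    9: 0,   # default / best effort
}

def mark_packet(packet, qci):
    """Re-mark a packet leaving the eNodeB towards the TSON edge so that
    the transport network can see the bearer's service class."""
    marked = dict(packet)
    marked['dscp'] = QCI_TO_DSCP.get(qci, 0)
    return marked
```

The TSON edge node can then sort frames into priority queues using this field alone, without any visibility of the EPS bearer itself.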


Parameter: QoS Class Indicator (QCI)
Description: Scalar which indicates a specific priority, maximum delay and packet error rate. The index also indicates whether the bearer has a Guaranteed Bit Rate (GBR) or not (non-GBR). The actual bit rate is signalled separately.

Parameter: Allocation and Retention Policy (ARP)
Description: Used in prioritisation and pre-emption decisions with respect to bearers.

Parameter: Guaranteed Bit Rate (GBR)
Description: Bit rate that can be expected to be provided by a bearer. Not applicable for non-GBR bearers.

Parameter: Maximum Bit Rate (MBR)
Description: In 3GPP Release 8, MBR = GBR.

Table 5: EPS Bearer QoS profiles [ROKE]

QCI | Resource Type | Priority | Packet Delay | Packet Loss Rate | Example Service
1 | GBR | 2 | 100 ms | 10^-2 | Conversational Voice
2 | GBR | 4 | 150 ms | 10^-3 | Conversational Video
3 | GBR | 3 | 50 ms | 10^-3 | Real Time Gaming
4 | GBR | 5 | 300 ms | 10^-5 | Non-Conversational Video
5 | Non-GBR | 1 | 100 ms | 10^-3 | IMS signalling
6 | Non-GBR | 6 | 300 ms | 10^-5 | Video
7 | Non-GBR | 7 | 100 ms | 10^-5 | Voice, video
8 | Non-GBR | 8 | 300 ms | 10^-3 | Video
9 | Non-GBR | 9 | 300 ms | 10^-5 | Video

Table 6: Standardized QCI attributes

The Optical Metro Network Solution

As already discussed, the metro network segment adopts the TSON solution. TSON, offering sub-wavelength granularity, is able to support short-lived connections and facilitates fast service delivery (as low as 300 μs), low end-to-end delay and multiple levels of guaranteed QoS. In the proposed scenario, where TSON is integrated with the wireless access LTE network supporting mobile users, mobility is accommodated by TSON through the reallocation of services to different DCs depending on the location of the end users, applying the concept of virtual machine (VM) migration. Furthermore, fixed and mobile cloud traffic differentiation is achieved through prioritization/sorting of the Ethernet frames into different priority queues with configurable QoS description, as illustrated in the "TSON node" depicted in Figure 17. TSON edge nodes provide the interfaces between the wireless and the optical domains as well as between the optical and DC domains. The ingress TSON edge nodes are responsible for traffic aggregation and mapping, while the egress edge nodes have the reverse functionality.

Converged Infrastructure

A critical function in the proposed converged infrastructure is that of the interfaces between the different technology domains, which have to take care of the mapping and aggregation/de-aggregation of the traffic from one domain to the other. Figure 20 illustrates a general representation of the converged infrastructure indicating the interfaces between the different technology domains, i.e. wireless to optical network domain and optical network to DC domains, as functional blocks.

Figure 20: Multi-Queuing model for the converged Wireless-Optical Network and DC Infrastructure. Ethernet frames from fixed traffic and from the LTE eNodeBs (LTE #1 to LTE #N) are buffered in receive FIFOs at the TSON node, aggregated into transmit FIFOs per wavelength (λ1 to λ4), de-aggregated at the egress, and finally scheduled to the DC FIFOs (FIFO #1 to FIFO #M).

More specifically, the edge TSON nodes receive Ethernet frames carrying traffic generated by fixed and mobile users and arrange them into different buffers that are part of the node, based on the QCI (or any other service class indicator metric) provided for the service. These Ethernet frames are aggregated into TSON frames, which are then assigned to a suitable time-slot and wavelength for further transmission in the network on a First In First Out (FIFO) basis [ZERVAS]-[YAN12]. When these frames reach the interface between the optical and the DC domains, the reverse function takes place to allow scheduling of demands to the suitable computing resources.
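The ingress classification and aggregation behaviour described above can be sketched as follows. Queue count, frame size and the QCI-to-queue mapping rule are illustrative assumptions, not TSON node parameters.

```python
from collections import deque

class TSONEdgeAggregator:
    """Sketch of an ingress TSON edge node: classify Ethernet frames into
    priority queues by QCI and aggregate them into fixed-size TSON frames
    served in FIFO order. All sizes and the mapping rule are illustrative."""
    def __init__(self, num_queues=4, tson_frame_size=4):
        self.queues = [deque() for _ in range(num_queues)]
        self.tson_frame_size = tson_frame_size
    def classify(self, eth_frame):
        # map the service class indicator onto one of the priority queues
        # (lower QCI -> higher-priority queue in this toy rule)
        queue_idx = min(eth_frame['qci'] // 3, len(self.queues) - 1)
        self.queues[queue_idx].append(eth_frame)
    def build_tson_frame(self):
        # drain the queues in priority order until the TSON frame is full;
        # within each queue, frames leave in FIFO order
        frame = []
        for q in self.queues:
            while q and len(frame) < self.tson_frame_size:
                frame.append(q.popleft())
        return frame
```

The built TSON frame would then be handed to the time-slot and wavelength assignment stage before transmission.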


Figure 21: Multi-Layer Converged Wireless Optical and DC infrastructures, showing virtual infrastructures VI 1 to VI n formed over the queuing model of the PI (wireless access, metro optical network and data centers).

5.1.2. Virtual Infrastructure modeling

A key enabler of the proposed solution is the concept of cross-domain and cross-technology virtualization for the creation of infrastructure slices. This involves the abstraction of the physical resources into logical resources that can then be assigned as independent entities to different VIs. The virtualization process involves planning of the VIs, i.e. identification of the optimal VIs that can support the required services in terms of both topology and resources, and mapping of the virtual resources to the physical resources. Figure 21 illustrates the general approach proposed with regard to virtualization, in accordance with the overall architectural structure, including a virtual layer formed over the physical layer.

5.1.2.1. Problem Formulation

In this Section we propose to plan VIs considering jointly the presence of all network technology domains and the IT resources incorporated in the PI, with the aim of offering VIs optimized in terms of specific objectives [Tzanakaki-A-2013].

In highly dynamic heterogeneous environments the problem of optimal VI planning is complex, since information regarding the position and the application requirements of the mobile devices, the available resources in the DCs and the optical network, as well as the performance of the wireless network domain, is uncertain. In order to assess the performance and requirements of this type of VIs we have developed an optimization scheme suitable for VI planning, taking into account both the dynamicity and mobility of end users over an integrated IT and heterogeneous network infrastructure suitable for Cloud and mobile Cloud services. In addition, the end-to-end delay in service delivery is quantified considering the varying degrees of latency introduced by the different technology domains.


Traffic demands corresponding to traditional Cloud applications are generated at randomly selected TSON nodes in the wired domain and need to be served by a set of IT servers.

Mobile traffic, on the other hand, is generated in the wireless access domain and in some cases needs to traverse a hybrid multi-hop wireless access/backhaul solution before it reaches the IT resources through the optical metro network. The granularity of optical network demands is a portion of a wavelength (e.g. λ/100), while the IT locations at which the services will be handled are not specified and are of no importance to the services themselves. Therefore, identification of the suitable IT resources that will support the services is part of the optimization output. In the general case, the VI planning problem should be solved taking into account a set of constraints that guarantee the efficient and stable operation of the resulting infrastructures.

A main assumption is that every demand has to be processed at a single IT server. This allocation policy reduces the complexity of implementation. To formulate this requirement, the binary variable a_ds is introduced to indicate whether demand d is assigned to server s or not. This assumption is expressed through the following equation:

∑_{s∈S} a_ds = 1,  d ∈ D    (1)

For each demand d, its demand volume h_d is realized by means of a number of paths in the VI. Assuming that p (p ∈ P_ds) indicates a candidate path in the VI supporting demand d at server s, and x_dp the non-negative number of lightpaths allocated to path p, the following demand constraints should be satisfied:

∑_s ∑_p a_ds x_dp = h_d,  d ∈ D    (2)

Summing up the paths through each link e (e ∈ E) of the VI, we can determine the required capacity y_e of link e:

∑_d ∑_p δ_edp x_dp ≤ y_e,  e ∈ E    (3)

where δ_edp is a binary coefficient equal to 1 if link e of the VI belongs to path p realizing demand d at server s, and 0 otherwise. Using the same rationale, the capacity of each link e in the VI is allocated by identifying the required paths in the PI. The resulting PI paths determine the load of each link g (g ∈ G) of the PI. Since the PI consists of a heterogeneous network integrating optical metro and wireless access domains, it is assumed that each link g has a different modular capacity M_κ (κ = 1, 2). For example, the wireless backhaul links are treated as a collection of wireless microwave links of 100 Mbps [Fehske], while the TSON solution may offer a minimum bandwidth granularity of 100 Mbps. Finally, the number of modules of type κ on link g is denoted by u_gκ.
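To make constraints (1)–(3) concrete, the following pure-Python sketch (toy data only; every identifier is illustrative, not part of the CONTENT tooling) enumerates the single-server assignments allowed by (1), routes each demand volume per (2), and derives the smallest link capacities satisfying (3):

```python
from itertools import product

# Toy instance (hypothetical): demand volumes h_d and, for each
# (demand, server) pair, a single candidate VI path given as a set of links.
demands = {"d1": 3, "d2": 2}
servers = ["s1", "s2"]
path = {("d1", "s1"): {"e1"}, ("d1", "s2"): {"e1", "e2"},
        ("d2", "s1"): {"e2"}, ("d2", "s2"): {"e3"}}

def link_capacities(assignment):
    """Route the full volume h_d on the candidate path of the chosen
    server (constraints (1)-(2)) and accumulate the per-link loads,
    i.e. the smallest y_e satisfying constraint (3)."""
    y = {}
    for d, s in assignment.items():
        for e in path[(d, s)]:
            y[e] = y.get(e, 0) + demands[d]
    return y

# Enumerate every assignment obeying (1): exactly one server per demand.
best = None
for combo in product(servers, repeat=len(demands)):
    assignment = dict(zip(demands, combo))
    y = link_capacities(assignment)
    if best is None or sum(y.values()) < sum(best[1].values()):
        best = (assignment, y)

print(best)  # -> ({'d1': 's1', 'd2': 's1'}, {'e1': 3, 'e2': 2})
```

An actual planner would hand these constraints to a MILP solver rather than enumerate; the brute force only illustrates the structure of the feasible set.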

A major issue to be taken into account is the distinction between traffic that arises from fixed and from mobile devices. The accurate estimation of the resources that should be reserved in the VI to ensure seamless end-to-end service provisioning depends on the mobility model that is adopted, the size of the LTE cells, the type of handover (the X2-handover procedure for inter-eNodeB mobility, or the S1-handover when the eNodeBs are not interconnected through X2 interfaces), as well as the traffic model used. In the ideal case, a seamless handoff for a mobile device can be 100% guaranteed if the equivalent amount of resources is reserved at all its neighbouring cells. However, a more efficient approach is to relate the resources reserved in the neighbouring LTE cells to the handoff probability ϑ. For the static scenario where handovers are not considered, let q (q ∈ Q_e) be the PI's candidate path list realizing virtual link capacity y_e.

The following VI capacity constraint should be satisfied:

∑_q z_eq ≤ y_e,  e ∈ E    (4)

where the summation is taken over all paths q on the routing list Q_e of link e, and z_eq is the capacity of path q realizing virtual link e. However, in the case of mobility, an additional amount of resources should be leased across the various technology domains to ensure seamless handoffs. To achieve this, the virtual capacity y_e should also be realized by a potential set of paths q′ ∈ Q_e with capacity z_eq′ that will be used after the handover with probability ϑ_eq′. Hence, introducing the link-path incidence coefficients γ_geq for the PI, equal to 1 if link g of the PI belongs to path q realizing link e and 0 otherwise, the general formula specifying the PI capacity constraint can be stated as:

∑_e ∑_q γ_geq z_eq + ∑_e ∑_{q′} ϑ_eq′ γ_geq′ z_eq′ ≤ ∑_κ M_κ u_gκ,  g ∈ G    (5)
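The left-hand side of (5), i.e. the load that working paths and handoff-probability-weighted standby paths place on a PI link, can be sketched as follows (a minimal illustration with hypothetical data structures, not project code):

```python
def pi_link_load(g, working, standby, gamma):
    """Left-hand side of constraint (5) for one PI link g: the load of the
    working paths plus the standby paths weighted by their handoff
    probability theta.  Assumed structures:
      working: (e, q)  -> z_eq
      standby: (e, q') -> (theta_eq', z_eq')
      gamma:   set of (g, e, q) link-path incidences."""
    load = sum(z for (e, q), z in working.items() if (g, e, q) in gamma)
    load += sum(th * z for (e, q), (th, z) in standby.items() if (g, e, q) in gamma)
    return load

# A link carrying one 4-unit working path plus a 4-unit standby path that
# is used after a handoff with probability 0.3:
gamma = {("g1", "e1", "q1"), ("g1", "e1", "q2")}
print(pi_link_load("g1", {("e1", "q1"): 4.0},
                   {("e1", "q2"): (0.3, 4.0)}, gamma))  # -> 5.2
```

The modular capacity check of (5) then amounts to comparing this load against ∑_κ M_κ u_gκ for the link.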

At the same time, the planned VI must have adequate IT server resources, such as CPU, memory and disk storage, to support all requested services.

A final consideration is that, depending on the type of service to be provided, the end-to-end delay across all technology domains should be below a predefined threshold. Details regarding the derivation of this constraint are provided in the following subsection. The objective of our formulation is to minimize the total cost, during the planned time frame, of the resulting network configuration, which consists of the following components: a) φ_gκ, the cost for operating capacity u_gκ of PI link g of module type κ, and b) σ_sr, the total cost of the capacity of resource r of IT server s for processing the demand volume h_d:

Min F = ∑_g ∑_κ φ_gκ u_gκ + ∑_s ∑_r σ_sr(h_d)    (6)

The costs considered in our modelling are related to the energy consumption of the infrastructure, which can be directly associated with the operational expenditure (OpEx) of the planned VI. For the data centres, the power consumption model presented in [SUN] has been adopted, where for a service rate of x Mbps the corresponding power consumption is defined as p_s + ℓ_s x, where subscript s indicates the server, p_s is the baseline consumption in the idle state and ℓ_s is the slope of the load-dependent consumption. In the optical network domain, the cost for each optical link comprises the energy consumed by each lightpath due to transmission and reception of the optical signal, optical amplification at each fiber span, and switching. The switching power consumption of the TSON solution is based on actual lab measurements.

Furthermore, for the wireless backhaul, the power consumption model presented in [FEHSKE] has been adopted, where wireless backhaul links are treated as a collection of wireless microwave links of 100 Mbps capacity and a power dissipation of 50 W each. Thus, for a given average backhaul requirement per base station of c_bh Mbps, the backhaul power consumption is 0.5 W/Mbps. Finally, in the wireless access domain, the power consumption model of the LTE-enabled base station presented in [AUER] has been adopted.
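As a hedged numerical illustration of these linear power models (the coefficients below are placeholders, not the measured CONTENT values):

```python
def total_power_w(server_rates_mbps, backhaul_mbps, p_idle=200.0, slope=0.05):
    """Illustrative OpEx-related power estimate.  Servers follow the
    [SUN]-style model p_s + l_s * x; the backhaul follows the
    [FEHSKE]-style figure of 0.5 W per Mbps (100 Mbps microwave links
    dissipating 50 W each).  p_idle and slope are assumed values."""
    server_w = sum(p_idle + slope * x for x in server_rates_mbps)
    return server_w + 0.5 * backhaul_mbps

# Two servers at 100 and 200 Mbps plus a 100 Mbps backhaul requirement:
print(total_power_w([100.0, 200.0], backhaul_mbps=100.0))  # -> 465.0
```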

5.1.2.2. Modelling Latency

So far, the MILP problem formulation guarantees that the capacities of the virtual and physical links will be adequate to support the transmission of the cloud computing services over the network segment. However, the total delay introduced by the heterogeneous technology domains should be kept below a specific acceptable threshold, namely L_th. The exact values of L_th for the various types of service are given in Table 5. To achieve this, a closed-form approximation for the end-to-end delay is first extracted by applying Jackson's theorem [Gross] to the multi-queuing model of the converged infrastructure presented in Figure 20. All queuing systems have been modelled as M/M/c queues with infinite buffer depth. Therefore, assuming that the conditions of the BCMP theorem are satisfied, the mean end-to-end cloud delay can be described through:

E_W[u_g1] + E_TSON[u_g2] + E_DC[σ_sr] ≤ L_th    (7)

In (7), E_W, E_TSON and E_DC are the expected delays in the wireless access, the TSON network and the DC infrastructures, respectively, assuming that u_g1, u_g2 and σ_sr physical resources have been allocated to the VI.
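The per-domain expectations in (7) rest on M/M/c results; below is a compact sketch of the textbook Erlang-C computation of the mean M/M/c sojourn time, one plausible building block for E_W, E_TSON and E_DC (illustrative, not the project's actual closed-form derivation):

```python
from math import factorial

def mmc_mean_sojourn(lam, mu, c):
    """Mean time in system for an M/M/c queue with infinite buffer,
    via the Erlang-C formula: waiting probability times the mean
    conditional wait, plus the mean service time 1/mu."""
    a = lam / mu          # offered load in Erlangs
    rho = a / c           # utilization; must be < 1 for stability
    assert rho < 1.0, "unstable queue"
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                + a**c / (factorial(c) * (1.0 - rho)))
    wait_prob = a**c / (factorial(c) * (1.0 - rho)) * p0  # Erlang C
    return wait_prob / (c * mu - lam) + 1.0 / mu

# With c = 1 this collapses to the M/M/1 result 1 / (mu - lam):
print(mmc_mean_sojourn(1.0, 2.0, 1))  # -> 1.0
```

Adding a second server at the same arrival and service rates lowers the mean sojourn, as expected.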

5.1.2.3. Model Extension: Planning with resilience considerations

In this subsection, a modeling approach using SLP suitable for the optimal planning of resilient VIs formed over the optical network infrastructure is proposed and implemented, based on the analysis presented in [Anastasopoulos12]. The resilience scheme considered is 1:1 protection for the optical network resources. Specifically, in case of a single optical link failure, demands are routed to their destination through a secondary (protection) path.

Similarly to the basic model, the VI planning problem with resilience considerations is formulated over a network composed of one resource layer containing the physical infrastructure, and produces as output the virtual infrastructure layer. Again, in the PI a set of randomly selected source-destination pairs is considered, with each source generating demands that need to be routed to the destination through a set of virtual links. In the proposed planning algorithm, a possible failure of an optical link is treated by forwarding demands to their destination via alternative paths. Therefore, in order to protect the planned network from a possible failure of a physical layer link g, a link re-establishment mechanism is introduced that routes demands to their destination via alternative paths. Now let u′_gt be the protection capacity that should be reserved in PI link g at time t. In order to identify the optimal VIs with resilience considerations, the objective function given in (8) should be minimized:

Min F = ∑_g ∑_κ φ_gκ (u_gκ + u′_gκ) + ∑_s ∑_r σ_sr(h_d)    (8)

5.1.3. Preliminary Results

The performance of the proposed VI planning scheme across the multiple domains involved is studied using the architecture illustrated in Figure 1. For the PI, a macro-cellular network with a regular hexagonal cell layout has been considered, similar to that presented in [AUER], consisting of 12 sites, each with 3 sectors and 10 MHz bandwidth, operating at 2.1 GHz. The inter-site distance (ISD) has been set to 500 m to capture the scenario of a dense urban network deployment. Furthermore, 2×2 MIMO transmission has been considered, while the users are uniformly distributed over the serviced area. Each site can process up to 115 Mbps and its power consumption ranges from 885 to 1087 W under idle and full load, respectively [AUER].

For the computing resources, three Basic Sun Oracle Database Machine Systems have been considered, where each server can process up to 28.8 Gbps of uncompressed flash. The physical TSON topology assumed is illustrated in the lower layer of Figure 21, where the dimensions of the optical rings are below 5 km and the supported data rate is 8.68 Gbps. The power consumption of the TSON equipment has been measured to be 50 W for the EDFAs and 100 mW for the PLZT chip. Finally, the measured power consumption for the drivers is 5 W.

(figure: a) mobile users in the LTE wireless access offloading traffic to local “Cloudlets”; b) traffic offloading through TSON edge nodes and the metro-core optical network to regional data centres)

Figure 22: Comparison between various traffic offloading schemes: a) the Cloudlet approach, b) the CONTENT approach

The performance of the CONTENT-based solution is compared to the Cloudlet approach [KUMAR], in which data from mobile users are offloaded to local nano-data centres (Figure 22).

As already discussed by various researchers (see e.g. [KUMAR], [Wolski]), for computation offloading to be beneficial for the mobile device, the total energy consumed by the mobile terminal for transmitting data to and receiving data from the local DC should not exceed the total energy that would be consumed for processing the data in the mobile device itself. A simple example illustrating the power benefits drawn when data offloading is adopted, for an application that requires the execution of I instructions, is presented in Figure 23. Assuming that S_M, S_C and S_D denote the processing speed in Instructions per Second (IPS) of the mobile system, the Cloudlet and the regional DC, respectively, the same task requires I/S_M seconds on the mobile system, I/S_C seconds on the cloudlet and I/S_D seconds on the DC. Besides processing delay, in the case of traffic offloading to the local cloudlet an additional delay, namely D_T, is introduced due to data transmission over the wireless medium. However, when data are offloaded to a regional DC, packets need to traverse both the wireless access network and the TSON, leading to a total transmission delay equal to D_T + D_Q, where D_Q is the delay introduced by the TSON network. Furthermore, as far as power consumption is concerned, the mobile system consumes P_C (W) for data processing, P_idle (W) while in idle mode, and P_T (W) during data transmission/reception, with P_T > P_C > P_idle [KUMAR].
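The trade-off of Figure 23 can be written down directly; the sketch below (hypothetical parameter values, not measurements) computes the handset-side energy with and without offloading:

```python
def handset_energy_local(I, S_M, P_C):
    """Handset energy without offloading: compute at P_C W for I / S_M s."""
    return P_C * I / S_M

def handset_energy_offload(I, S_exec, D_net, P_T, P_idle):
    """Handset energy with offloading: transmit/receive at P_T W for the
    network delay D_net (D_T for the cloudlet, D_T + D_Q via TSON), then
    idle at P_idle W while the remote side computes for I / S_exec s."""
    return P_T * D_net + P_idle * I / S_exec

# Hypothetical numbers: a 1e9-instruction task on a 1 GIPS handset
# versus a 10 GIPS regional DC reached with 0.102 s total network delay.
local = handset_energy_local(1e9, 1e9, P_C=2.0)
remote = handset_energy_offload(1e9, 1e10, 0.102, P_T=3.0, P_idle=0.5)
print(local, round(remote, 3))  # -> 2.0 0.356
```

Even with the extra D_Q of the TSON path, offloading pays off here because the DC finishes the task an order of magnitude faster, cutting the time the handset spends above idle power.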

(figure: power consumption over time for three strategies: without offloading, offloading to the cloudlet, and offloading via TSON. Legend: D_T: transmission delay; D_Q: queuing delay in TSON; P_T: power consumption in transmission mode; P_C: power consumption in processing mode; P_idle: power consumption in idle mode)

Figure 23: Energy Consumption for various computation offloading strategies

At this point it should be mentioned that D_Q ≪ D_T. This is depicted in Figure 24, where the end-to-end delay for mobile cloud services under the proposed architecture and under the cloudlet approach is provided. This delay does not take into consideration the propagation delay in TSON, which, given the dimensions of the optical rings (<5 km), is insignificant. It is observed that, when the same resources are reserved in the wireless and the IT domains for the two cases, less than 2 ms of additional delay is introduced by the TSON network. Considering that the minimum packet delay in LTE networks is measured to be of the order of 100 ms [Fitchard], the additional 2 ms delay introduced by TSON can be considered negligible. Even though an additional latency contribution is added by TSON, this effect can be compensated for by allocating extra resources in the DC domain, since in the proposed approach the available DCs offer much higher processing power at lower cost compared to the cloudlet case.

(figure: end-to-end delay, of the order of 10⁻³ s, versus simulation time for the proposed approach and the cloudlet)

Figure 24: Comparison in terms of delay between the proposed architecture and the cloudlet (common delays in wireless access are omitted)

To study how end user mobility and the traffic parameters affect the utilization and power consumption of the planned VI, a new metric is introduced, namely the service-to-mobility factor [FANG]. The service-to-mobility factor is defined as the fraction of the service holding time over the cell residence time. It is assumed that the cell residence time is exponentially distributed with parameter η and the service holding time is Erlang distributed with parameters (m, n).
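Taking the factor as the ratio of the mean holding time to the mean residence time (one simple reading of this definition), it reduces to a closed form:

```python
def service_to_mobility(eta, m, n):
    """Service-to-mobility factor as a ratio of means: the Erlang(m, n)
    service holding time has mean m / n, and the exponential cell
    residence time with rate eta has mean 1 / eta, so the factor is
    (m / n) * eta.  (Interpretation assumed here, not taken from [FANG].)"""
    return m / n * eta

print(service_to_mobility(eta=2.0, m=5, n=1.0))  # -> 10.0
```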

Figure 25 illustrates the total power consumption of the converged infrastructure (wireless access, wireless backhaul, optical network and IT resources) when applying the proposed and the cloudlet approach. Comparing the two schemes, it is observed that the proposed architecture consumes significantly less energy (lower operational cost) to serve the same amount of demands than the cloudlet. This is because, in the former approach, fewer IT servers are activated to serve the same amount of demands. It is also observed that the wireless access technology is responsible for 43% of the overall power consumption, while the optical network consumes less than 7% of the energy.


Figure 25: Impact of traffic load on power consumption for the proposed and the Cloudlet scheme

(figure: total power consumption, roughly 4000–9000 W, plotted against the service-to-mobility factor (0–5) and the average demand per source (1000–6000 Mbps))

Figure 26: Impact of mobility and traffic load on the total power consumption (m = 5)

To further investigate how the service characteristics affect the optimally planned VI, the impact of both traffic load and end user mobility on the total power consumption (operational cost) of the VI is studied. Figure 26 shows that, as expected, the total power consumption increases for higher end user mobility. More specifically, when mobility is higher (lower service-to-mobility factor), additional resources are required to support the VI in the wireless access domain. However, it is interesting to observe that this additional resource requirement also propagates to the optical metro network and the IT domain. The additional resource requirements across the various infrastructure domains are imposed in order to ensure availability of resources in all domains involved (wireless access and backhauling, optical metro network and DCs) to support the requested service, and to effectively enable seamless and transparent end-to-end connectivity between mobile users and the computing resources.

(figure: stacked bars of the resource shares (%) of the DC, metro optical, wireless backhaul and wireless access domains, without (wo) and with protection, for call-to-mobility factors 0, 1 and 3)

Figure 27: Impact of resilience of the requested resources [Anastasopoulos12]

The impact of resilience on the additional resources required to facilitate 1:1 protection for all virtual resources is illustrated in Figure 27. It is observed that the protection mechanism increases the allocated resources by about 20%. Note that, in the obtained results, protection of the computing resources is not considered.

5.1.4. Optimization of network resource utilization through VM migration to handle user mobility

An additional consideration in the context of the proposed CONTENT approach is how to efficiently handle end user mobility. In view of this, we investigate how to mitigate the impact of user mobility on the additional resources required to support it. To this end, we model and evaluate how a key cloud technology, Virtual Machine (VM) migration, benefits mobile Cloud users, specifically during their handoff from one cell to another. This work has been reported in [GEORGAKILAS]. User mobility is expressed through the handoff probability and the service-to-mobility factor, as presented above. The uncertainty of mobility is modelled through Stochastic Linear Programming (SLP) models. Appropriate deterministic and stochastic problems are solved, and their results are compared to demonstrate the impact of stochastic planning on the resource savings and to show how VM migration is beneficial compared to inter-DC communication.

Figure 28 illustrates the topology assumed. The Data Centers and the cellular network are interconnected through a WDM optical metro network. The DC capacity is 27 Gbps. Demands are randomly generated at wireless nodes following a uniform spatial distribution. We assume that each demand needs one capacity unit in each network domain, and that both optical and wireless links have sufficient capacity to accommodate all the demands. Moreover, each demand needs one VM.

When trying to identify the optimal virtual infrastructure, the objective is to assign minimum network resources, in terms of capacity units, across both the wireless backhaul and the optical metro network. Traditional multi-domain infrastructure planning models are usually deterministic and, as such, do not take into consideration the impact of the uncertainty (associated with user mobility) on the various infrastructure domains beyond the wireless domain. As a result, they rely on resource over-provisioning across all domains. In the proposed approach, the random nature of mobility is addressed by adopting a stochastic optimization approach, where additional resources are planned, as required, only for a subset of the cellular users, based on the comparison of the handoff probability with a parametric threshold.

Figure 28: Wireless-Optical-DC Topology

In order for each demand to be assigned a connection, the optimization problem needs to establish a path from the origin node to the destination DC, reserving all the required resources in both the network (wireless and optical capacity units) and the DC domains (VMs). We define this as the first-stage problem. The destination DC is selected through anycast routing. In a second stage, we also need to reserve the resources associated with the handoff of the mobile user to any of its neighbouring cells; a new connection is therefore required that will enable the mobile user to continue to be serviced seamlessly. We define as the second-stage problem the optimization problem that, based on the first-stage decision, computes a path for the user assuming that the user will perform one handoff to each of its neighbouring cells. We define two policies to handle the handoff of a mobile user to a neighbouring cell:

1) planning a second-stage path from the new origin node (neighbouring cell) to the DC that already serves the user. This policy is denoted as handoff policy 1 (h1); and

2) planning a suitable VM migration to any available DC, together with the required path, to further minimize the objective function. This policy is denoted as handoff policy 2 (h2).

This process is performed for every neighbouring cell (one hop), and for both policies we consider that first-stage and second-stage planned resources remain established. For h1, we assume that no additional VMs are required for the paths originating from the neighbouring cells. For h2, we assume that additional VMs are deployed only when second-stage paths select a different DC than the one already serving the first-stage path.


After capturing the user mobility as a stationary stochastic process through the service-to-mobility factor and the handoff probability, we use the Sample Average Approximation (SAA) method [VERWEIJ] to solve the relevant problems, formulated as two-stage stochastic routing problems. We solve all problems for integer ρ values in (1,5) and for two threshold values, 0.05 and 0.1, against which the handoff probability of each demand is compared. Only demands with a handoff probability larger than the threshold value (ψ_dc = 1) are considered to require the additional resources to support the handoff. Through first-stage variables x_dsp, representing the path for demand d to DC s, and second-stage variables x̂_dscp̂, representing the path for demand d originating from neighbouring cell c to DC s, we minimize the total number of links of each path, both in the wireless and the optical domain. Each SAA problem is solved for M = 10 samples, each of size N = 200. We use binary flow variables, since all demands d in D are assumed to require one unit of capacity. The following constraints have to be satisfied. Both the first-stage and the second-stage path flows should be sufficient to accommodate the full demand volume h_d:

∑_s ∑_p x_dsp ≥ h_d  and  ∑_s ∑_p̂ x̂_dscp̂ ≥ h_d,  for c ∈ NE(c_d)

where NE(c_d) is the set of cells neighbouring the cell where demand d originates.

In the second-stage path flow conservation constraints, the policies h1 and h2 are differentiated, since in the h1 problems the destination s of x̂_dscp̂ is the DC s of the first-stage problem. The utilization of DC s is computed as the sum of the first- and second-stage path flows reaching s, where second-stage paths whose corresponding first-stage paths are served in the same DC are ignored. The optical and wireless link utilizations are computed through the corresponding path flow variables:

∑_d ∑_s [∑_p δ_edsp x_dsp + ∑_{c∈NE(c_d)} ∑_p̂ δ_edspc x̂_dscp̂] ≤ y_e

where δ_edsp = 1 if first-stage candidate path p for demand d towards DC s contains edge e, and 0 otherwise, and δ_edspc is the analogous coefficient for the second-stage paths.

We formulate and solve two deterministic problems, dh1 and dh2, and two stochastic problems, sh1 and sh2, corresponding to handoff policies h1 and h2, respectively.
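The threshold test that sets ψ_dc can be sketched in a few lines (demand identifiers and probabilities below are hypothetical):

```python
def second_stage_demands(handoff_prob, threshold):
    """Demands whose estimated handoff probability exceeds the threshold
    (psi_dc = 1) and therefore receive second-stage resources; demands
    below the threshold are planned with first-stage resources only."""
    return {d for d, p in handoff_prob.items() if p > threshold}

probs = {"d1": 0.02, "d2": 0.08, "d3": 0.25}
print(sorted(second_stage_demands(probs, 0.05)))  # -> ['d2', 'd3']
print(sorted(second_stage_demands(probs, 0.1)))   # -> ['d3']
```

Raising the threshold shrinks the set of demands that receive handoff protection, trading resource savings against seamlessness, which is the trade-off the results below explore.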


Initial Results

Figure 29 illustrates the network resource savings relative to the resources required by the solution of problem dh1, for total demand volumes of 10, 20 and 30. dh1 requires the highest amount of network resources since: a) it over-estimates the resources needed to support the mobility of end users, as it assigns additional resources for all demands and all the cells neighbouring the cell occupied by the user; and b) the first handoff policy, h1, requires paths from the neighbouring cells to reach the DC that is already serving the first-stage path of the demands, resulting in longer paths. Network resource savings of the order of 12% are achieved by the deterministic problem dh2, since VM migration enables the choice of a DC that is closer to the new cell to which the mobile user has moved. For the stochastic problems, we observe that sh1 achieves significant savings, from 7 to 30%, across the different values of the service-to-mobility factor for a threshold equal to 0.1. Although h1 is applied, the savings are higher even compared to dh2 for higher ρ values, i.e. reduced mobility. However, this is not the case for the lower threshold value of 0.05, where sh1 has a benefit of up to 12%. For problem sh2, where VM migration is enabled and there is no restriction on the DC destination of the second-stage path, the benefits in network resource savings lie between 23 and 43% for handoff policy h1 and 27–62% for h2. Similar results are observed across all loading conditions.

Figure 29: Network resource savings (optical + wireless capacity units) across mobility factor ρ and demand volume

Figure 30 illustrates the excess DC resource requirements in terms of the total number of VMs required. The excess DC resources required by the various problems under consideration are computed in comparison to problem dh1. Problem dh1 requires as many VMs as the total demand volume, since second-stage paths lead to the same DC as the first-stage paths and no additional VMs are deployed. Problem dh2 requires 38% additional resources, since all handoffs are assigned second-stage paths and additional VMs as needed. Problem sh1 is not shown since, due to handoff policy h1, it requires the same amount of resources as dh1. We observe that, for a threshold value equal to 0.05, problem sh2 requires 20 to 33% additional VMs to be deployed across the examined values of the service-to-mobility factor. For a threshold of 0.1, sh2 achieves higher savings, as expected, since it requires 11–28% excess DC resources. Again, the savings in excess DC resources remain similar across all demand sets examined.

Figure 30: Excess DC resources (VMs) across mobility factor ρ and demands volume

5.1.5. Evaluation of Service Provisioning over the Virtual Infrastructure

In the previous section, a mixed integer linear programming scheme suitable for the planning of MOVNOs over heterogeneous network infrastructures was presented and validated using simulation results. The main objective of that scheme was to minimize a specific objective for the planned VIs subject to a set of constraints. Even though this scheme can offer significant benefits over existing approaches, its main limitation is that the VIs are designed under the assumption that the traffic matrix they are expected to support is known in advance for their entire planning horizon. In a real system configuration, on the other hand, service requests are expected to arrive at the MOVNOs at random time instants, while the corresponding volume and type of service requests will be uncertain. Hence, in order to ensure optimal performance and resource efficiency, MOVNOs need to be appropriately operated to address the very dynamic and unpredictable traffic profiles and service characteristics they are expected to support.

To achieve these goals, the performance of the planning schemes described above will be evaluated under various online service provisioning scenarios. Emphasis will be given to quantifying the impact of the time granularity mismatch between a MOVNO's planning horizon and the duration of the services it is expected to support on the efficient operation of the converged infrastructures. Another key issue that will be analyzed is the impact that the concept of virtualization across heterogeneous network domains has on the blocking rate. Even though the adoption of the MOVNO paradigm can assist in reducing delays when reconfiguring or establishing new services, it may also reduce the degree of capacity sharing in the PIs, thus leading to an increase of the blocking rate.

5.2. Plan for Experimental evaluation

Besides the evaluation of the CONTENT architecture based on theoretical analysis through modeling and simulations, the CONTENT architecture will be evaluated through the development (integration and deployment) of specific realization concepts via representative pilots targeting realistic scenarios.

More specifically, a primitive concept realization, named “Mobile broadband services by Mobile Optical Virtual Network Operator (MOVNO) in a multi-operator environment” and defined in D2.2, will serve as the guide for the early experimental evaluation of the proposed architecture.

The “Mobile Optical Virtual Network Operator (MOVNO) in a multi-operator environment” use case has been chosen as the representative from the “Infrastructure and network sharing” category, while the “Mobile broadband-enabled cloud services by MOVNO” use case has been chosen for the cloud service provisioning category.

We also note that the goal of all the activities performed in WP5 is the evaluation of the CONTENT architecture through proof-of-principle demonstrations. Although some actions, for example the “Requirements specification” of the selected pilot use cases, can be studied independently, the majority of the actions will be part of an iterative development lifecycle. Iterative development was preferred over the classic waterfall lifecycle, since the latter is suited to cases where no changes are expected during the development period.

Figure 31: Iterative development lifecycle

Changes may occur as the project evolves, since new APIs or services may be embedded in the CONTENT platform, existing services may be modified, and/or new technologies may become available to enhance the platform. The project consortium closely follows the advancements in cloud computing technology, in the SDN framework and in Network Functions Virtualization (NFV) that are related to the project's goals and can potentially be exploited by the proposed framework.

The experimental evaluation of the CONTENT framework will rely on the accomplishment of the following Actions:

76

ο‚·

Requirements specification of pilot use cases

This action is related to the specification of the various requirements that need to be fulfilled by the application of the pilot scenarios, in order to efficiently demonstrate the full capabilities of the overall CONTENT platform. This action is also related to the D2.1 where a detail description of the service requirements and the functionalities that the proposed architecture will support is made.

ο‚·

Detailed description of the various actions taken during the implementation of each discrete pilot case.

The pilot scenarios will be broken into subsets of simple tasks where each subtask will be considered accordingly. In cases where the requirements of the selected pilot cases will require the design of reusable components, the common tasks will be examined in such way that the service reusability is promoted.

• Prototype Solution Development and CONTENT Testbed Deployment

In this stage, we will develop and deploy the prototype solution in the corresponding testbeds (TSON and NITOS) and perform the initial testing and evaluation of the pilot cases in the CONTENT testbed. This phase aims at providing initial insights that will result in further enhancements of the prototype solution.

• Realistic Deployment and Performance Evaluation

The realistic deployment and testing of the selected pilot cases will use the CONTENT testbed and the prototype solutions developed in the previous steps. Moreover, in this phase we seek to evaluate the integrated solution and quantify its efficiency in terms of the perceived QoE and QoS, through testing of the concepts under consideration under realistic conditions. Dedicated evaluation activities will accompany the deployment of the pilots and will measure quantitative and qualitative aspects of take-up and usage.
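To illustrate the kind of quantitative QoS reporting foreseen in this phase, the sketch below aggregates per-flow latency samples into the summary statistics typically reported in such evaluation campaigns. It is purely illustrative: the function names, the nearest-rank percentile method and the 50 ms latency budget are assumptions for the example, not part of the CONTENT platform or its tooling.

```python
# Illustrative sketch: summarising latency samples collected during a pilot
# run into the percentile statistics typically reported for QoS evaluation.
# All names and the 50 ms budget are assumptions, not CONTENT components.
from statistics import mean


def percentile(samples, p):
    """Nearest-rank percentile of a list of measurements."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]


def qos_summary(latency_ms, budget_ms=50.0):
    """Mean, median and tail latency, plus the fraction of samples
    meeting an assumed per-service latency budget."""
    return {
        "mean_ms": mean(latency_ms),
        "p50_ms": percentile(latency_ms, 50),
        "p95_ms": percentile(latency_ms, 95),
        "within_budget": sum(1 for s in latency_ms if s <= budget_ms)
        / len(latency_ms),
    }


# Example: one outlier sample pushes up the tail latency while leaving
# the median almost untouched.
samples = [12.1, 15.3, 14.8, 90.2, 13.5, 16.0, 14.2, 13.9, 15.1, 14.6]
print(qos_summary(samples))
```

Reporting tail percentiles alongside the mean matters here because a single congested flow can dominate the average while the median experience stays acceptable, which is exactly the distinction between perceived QoE and raw QoS figures.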

Maturity of the technical solution

The CONTENT platform will combine all the respective assets of virtualized wireless-optical infrastructures and share their potential, while the CONTENT architecture offers a technological structure defining what a virtual integrated solution should provide. The design/deployment/evaluation procedures in WP5 aim to demonstrate the efficiency of the proposed architecture and the maturity of the technical solution.

6. Conclusions

CONTENT focuses on a next generation ubiquitous converged network infrastructure. The proposed infrastructure model is based on the IaaS paradigm and aims at providing a technology platform interconnecting geographically distributed computational resources that can support a variety of Cloud and mobile Cloud services. The proposed architecture addresses the diverse bandwidth requirements of future cloud and mobile cloud services by integrating advanced optical network technologies offering fine granularity with state-of-the-art wireless access network technology (LTE/WiFi) supporting end-user mobility. The concept of virtualization across the technology domains is adopted as a key enabling technology to support the CONTENT vision.

This deliverable report provides an initial description of the CONTENT architecture considering the specifications imposed by the identified relevant business models and service requirements. This involves both a high level functional description of the CONTENT architecture and a detailed description of the proposed layered structure, including the individual layers involved: the Physical Infrastructure Layer, the Infrastructure Management Layer, the Virtual Infrastructure Control Layer and the Orchestrated End-to-End Service Layer. In addition, a detailed description of the interaction between the different layers is provided. This deliverable also includes the description of a modelling/simulation framework that is being developed to evaluate the proposed architecture and to identify planning and operational methodologies allowing global optimization of the integrated converged infrastructure. In this context, optimization aims at an infrastructure that can support increased functionality, capacity, flexibility and scalability at reduced cost and energy consumption. Some initial modelling results are also presented.

Finally, a plan for the CONTENT experimental evaluation is also presented.

7. References

[ADVA] White Paper, LTE-Capable Mobile Backhaul, July 2012. Available online at http://www.advaoptical.com/en/resources/white-papers/lte-capable-mobilebackhaul.aspx

[ALCATEL] Strategic White Paper, "The LTE Network Architecture: A comprehensive Tutorial", Alcatel-Lucent

[ALCATEL13] "France Telecom-Orange and Alcatel-Lucent deploy world's first live 400 Gbps per wavelength optical link", Alcatel-Lucent press release, Feb. 2013. http://www.orange.com/en/press/press-releases/press-releases-2013/France-Telecom-Orange-and-Alcatel-Lucent-deploy-world-s-first-live-400-Gbps-perwavelength-optical-link

[Aljabari10] Ghannam Aljabari, Evren Eren, "Virtual WLAN: Extension of Wireless Networking into Virtualized Environments", International Journal of Computing, Research Institute of Intelligent Computer Systems, Ternopil National Economic University, 2010

[Amaya13] "First Demonstration of Software Defined Networking (SDN) over Space Division Multiplexing (SDM) Optical Networks", ECOC 2013, post-deadline paper

[Anastasopoulos12] M. P. Anastasopoulos, A. Tzanakaki, G. Zervas, B.-R. Rofoee, R. Nejabati, D. Simeonidou, "Virtualization over Converged Wireless, Optical and IT Elements in Support of Resilient Cloud and Mobile Cloud Services", in Proc. of ECOC 2012

[Anastasopoulos13] M. P. Anastasopoulos, A. Tzanakaki, D. Simeonidou, "Stochastic Planning of Dependable Virtual Infrastructures over Optical Datacenter Networks", IEEE/OSA J. Opt. Commun. Netw., vol. 5, no. 9, September 2013


[Auer] G. Auer and V. Giannini, "Cellular Energy Efficiency Evaluation Framework", in Proc. Vehicular Tech. Conf., Hungary, May 2011

[AUTOBAHN] AutoBAHN project web site, http://autobahn.geant.net

[Bhanage10] G. Bhanage, D. Vete, I. Seskar, D. Raychaudhuri, "SplitAP: Leveraging Wireless Network Virtualization for Flexible Sharing of WLANs", IEEE Global Telecommunications Conference (GLOBECOM 2010), pp. 1-6, 6-10 Dec. 2010

[BONFIRE] EU FP7 BonFIRE project, http://www.bonfire-project.eu

[Castelli02] Matthew J. Castelli, "Network Sales and Services Handbook", Cisco Press, November 27, 2002, ISBN 978-1-58705-090-9

[CISCO-2013] "Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2012-2017", White Paper, Feb. 2013

[Click] E. Kohler et al., "The Click modular router", ACM Transactions on Computer Systems (TOCS) 18.3 (2000): 263-297

[CLOUD-FORMATION] AWS CloudFormation, API reference, http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/Welcome.html

[CLOUDSTACK] CloudStack web site: http://cloudstack.apache.org

[con2011] http://www.informationweek.com/mobility/convergence/the-convergence-of-3g4gand-wi-fi/231901241

[CONTENT-D2.1] D. Christofi et al., "Service Requirements", CONTENT Deliverable D2.1, January 2013

[CONTENT-D2.2] D. Christofi et al., "Use Case Scenarios and Business Models", CONTENT Deliverable D2.2, August 2013

[CONTENT-D4.1] J. Ferrer Riera et al., "Definition of the virtualization strategy and design of domain specific resources/services virtualization in CONTENT", CONTENT Deliverable D4.1, August 2013

[CSO-ID] D. Dhody, Y. Lee, N. Ciulli, L. Contreras, O. Gonzalez de Dios, "Cross Stratum Optimization enabled Path Computation", IETF draft, February 2013

[Develder12] Chris Develder et al., "Optical Networks for Grid and Cloud Computing Applications", Proceedings of the IEEE, 2012, DOI: 10.1109/JPROC.2011.2179629

[DINH] H. Dinh, C. Lee, D. Niyato, P. Wang, "A survey of mobile cloud computing: architecture, applications, and approaches", Wirel. Commun. Mob. Comput., Oct. 2011

[Ekstrom] H. Ekstrom, "QoS control in the 3GPP evolved packet system", IEEE Communications Magazine, vol. 47, no. 2, pp. 76-83, February 2009

[FANG] Y. Fang, I. Chlamtac, "Analytical Generalized Results for Handoff Probability in Wireless Networks", IEEE Transactions on Communications, vol. 50, no. 3, pp. 369-399, Mar. 2002


[FEHSKE] A. J. Fehske, P. Marsch, and G. P. Fettweis, "Bit per Joule Efficiency of Cooperating Base Stations in Cellular Networks", in Proc. IEEE Globecom, Dec. 2010

[Fitchard] K. Fitchard, "Analysis: A closer look at BTIG's 4G speed tests", Connected Planet, Apr. 2011

[FLEXISCALE] http://www.flexiscale.com/

[FLOODLIGHT] Floodlight web site: http://www.projectfloodlight.org/floodlight/

[GEANT] GEANT network, http://www.geant.net

[GEORGAKILAS] K. N. Georgakilas, M. P. Anastasopoulos, A. Tzanakaki, G. Zervas and D. Simeonidou, "Planning of Converged Optical-Wireless Network and DC Infrastructures in Support of Mobile Cloud Services", ECOC 2013

[Gerstel12] "Elastic Optical Networking: A New Dawn for the Optical Layer", IEEE Communications Magazine, 2012

[GEYSERS] Generalized Architecture for Dynamic Infrastructure Services, FP7 European project, http://www.geysers.eu

[Geysers-d22] EC FP7 GEYSERS Deliverable D2.2, "GEYSERS overall architecture & interfaces specification and service provisioning workflow"

[Ghosh] Arunabha Ghosh, Jun Zhang, Jeffrey G. Andrews, Rias Muhamed, "Fundamentals of LTE", Prentice Hall Communications Engineering and Emerging Technologies Series, 2010

[GOGRID] http://www.gogrid.com/

[Gross] D. Gross, H. M. Harris, "Fundamentals of Queueing Theory", Wiley, ISBN 0-471-32812-X, 1998

[HEAT] OpenStack Heat web site: http://wiki.openstack.org/Heat

[Heavy12] http://www.overturenetworks.com/sites/default/files/white_paper/HeavyReading_WP_MetroSDN_EdgeVirtualization_12-18-12.pdf

[IEEE802_3] IEEE 802.3 Ethernet Working Group Communication, "IEEE 802.3™ Industry Connections Ethernet Bandwidth Assessment", http://www.ieee802.org/3/ad_hoc/bwa/BWA_Report.pdf, 2012

[Intune11] "Verisma ivx 8000, Optical Packet Switch and Transport", Intune Networks, 2011

[Jinno09] M. Jinno et al., "Spectrum-Efficient and Scalable Elastic Optical Path Network: Architecture, Benefits, and Enabling Technologies", IEEE Communications Magazine, 2009

[Jinno12] M. Jinno et al., "Introducing elasticity and adaptation into the optical domain toward more efficient and scalable optical transport networks", Innovations for Future Networks and Services, 2012

[Jinno13] "Virtualization in Optical Networks from Network Level to Hardware Level", Journal of Optical Communications and Networking, vol. 5, issue 10, pp. A46-A56, 2013

[Juniper12] Juniper Networks, "Cloud Ready Data Center Network Design Guide", http://www.juniper.net/us/en/local/pdf/design-guides/8020014-en.pdf, 2012


[Kokku10b] Ravi Kokku, Rajesh Mahindra, Honghai Zhang and Sampath Rangarajan, "Cellular Wireless Resource Slicing for Active RAN Sharing"

[Kumar] K. Kumar and Yung-Hsiang Lu, "Cloud computing for mobile users: Can offloading computation save energy?", IEEE Computer Magazine, pp. 51-56, 2010

[Lehr04] William Lehr, Marvin Sirbu and Sharon Gillett, "Municipal Wireless Broadband: Policy and Business Implications of Emerging Technologies", paper presented at Competition in Networking: Wireless and Wireline, London Business School, April 13-14, 2004

[Li2008] Yi Li, Lili Qiu, Yin Zhang, Ratul Mahajan, and Eric Rozner, "Predictable Performance Optimization for Wireless Networks", in Proc. of ACM SIGCOMM, Seattle, WA, USA, August 2008

[LICL] J. A. García-Espín, J. Ferrer Riera, S. Figuerola, M. Ghijsen, Y. Demchenko, J. Buysse, M. De Leenheer, C. Develder, F. Anhalt and S. Soudan, "Logical Infrastructure Composition Layer, the GEYSERS holistic approach for infrastructure virtualisation", in Proc. TERENA Networking Conference (TNC 2012), Reykjavík, Iceland, 21-24 May 2012

[LIGHTNESS] http://www.ict-lightness.eu

[LTE] "The LTE Network Architecture: A comprehensive tutorial", Alcatel-Lucent, Technical White Paper, 2009

[MAINS] MAINS project web site, http://www.ist-mains.eu/

[MCN] MCN project web site, http://www.mobile-cloud-networking.eu/site/

[MUN] K. Mun, "Mobile Cloud Computing Challenges", TechZine Magazine, http://www2.alcatel-lucent.com/techzine/mobile-cloud-computing-challenges/

[Nejabati11] Reza Nejabati, Eduard Escalona, Shuping Peng, Dimitra Simeonidou, "Optical Network Virtualization", ONDM 2011

[NEUTRON] OpenStack Neutron web site: https://wiki.openstack.org/wiki/Neutron

[NIST-REF] P. Mell and T. Grance, "The NIST Definition of Cloud Computing", Recommendations of the National Institute of Standards and Technology, Special Publication 800-145, September 2011

[OCCI] R. Nyren, A. Edmonds, A. Papaspyrou, T. Metsch, "Open Cloud Computing Interface - Core", GFD-P-R.183, OCCI WG, June 2011

[OMF10] Thierry Rakotoarivelo, Maximilian Ott, Guillaume Jourjon, and Ivan Seskar, "OMF: a control and management framework for networking test-beds", SIGOPS Oper. Syst. Rev. 43, 4, January 2010

[OpenDaylight] http://opendaylight.org

[OPENFLOW] Open Networking Foundation, "OpenFlow Switch Specification", version 1.4.0, August 2013

[OPENNAAS] http://www.opennaas.org

[OPENSTACK] OpenStack web site: http://www.openstack.org


[Peng11] Shuping Peng, Reza Nejabati, Siamak Azodolmolky, Eduard Escalona, and Dimitra Simeonidou, "An Impairment-aware Virtual Optical Network Composition Mechanism for Future Internet", Optics Express, vol. 19, issue 26, pp. B251-B259, Dec. 2011

[RFC1157] J. Case, M. Fedor, M. Schoffstall, J. Davin, "Simple Network Management Protocol (SNMP)", IETF RFC 1157, May 1990

[RFC3414] U. Blumenthal, B. Wijnen, "User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3)", IETF RFC 3414, December 2002

[RFC3416] R. Presuhn, J. Case, K. McCloghrie, M. Rose, S. Waldbusser, "Version 2 of the Protocol Operations for the Simple Network Management Protocol (SNMP)", IETF RFC 3416, December 2002

[RFC4741] R. Enns, "NETCONF Configuration Protocol", IETF RFC 4741, December 2006

[RFC6241] R. Enns, M. Bjorklund, J. Schoenwaelder, A. Bierman, "Network Configuration Protocol (NETCONF)", IETF RFC 6241, June 2011

[RFIC2013] "Cellular vs. WiFi: Future convergence or an utter divergence?", IEEE Radio Frequency Integrated Circuits Symposium (RFIC), pp. I-II, 2-4 June 2013

[ROKE] White Paper, Roke Manor Research Ltd, "LTE MAC Scheduler & Radio Bearer QoS", http://www.roke.co.uk/resources/white-papers/0485-LTE-Radio-Bearer-QoS.pdf

[RYU] Ryu web site: http://osrg.github.io/ryu/

[SAIL] Scalable and Adaptive Internet Solutions, FP7 European project, http://www.sailproject.eu/

[SAIL-D52] P. Murray et al., "Deliverable D-5.2 - Cloud Networking Architecture Description", January 2012

[SATAYAN] M. Satyanarayanan, P. Bahl, R. Caceres and N. Davies, "The Case for VM-Based Cloudlets in Mobile Computing", IEEE Pervasive Computing 8 (4), pp. 14-23, Oct. 2009

[Sesia] Stefania Sesia, Issam Toufik, Matthew Baker, "LTE - The UMTS Long Term Evolution: From Theory to Practice", Wiley, 2011

[STACKMATE] Stackmate web site: https://github.com/chiradeep/stackmate

[stand12] IEEE 802.11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications (2012 revision), IEEE-SA

[STATE-PCE-ID] E. Crabbe, J. Medved, I. Minei, R. Varga, "PCEP Extensions for Stateful PCE", IETF draft, March 2013

[STRAUSS] http://www.ict-strauss.eu/en/

[SUN] "Sun Oracle Database Machine Data Sheet", http://www.oracle.com/us/solutions/datawarehousing/039569.pdf

[Tellbas] Tellabs Application Note, "Meet the Needs of LTE Backhaul with the Tellabs 8600 Smart Routers", http://www.tellabs.com/products/8000/tlab8600_ltebackhaul_an.pdf

[TOSCA] Topology and Orchestration Specification for Cloud Applications version 1.0, Candidate OASIS Standard, June 2013

[TREMA] Trema web site: http://trema.github.io/trema/

[TZANAKAKI11] A. Tzanakaki et al., "Energy efficiency in integrated IT and optical network infrastructures: the GEYSERS approach", in Proc. of IEEE INFOCOM 2011, Workshop on Green Commun. and Netw., pp. 343-348

[TZANAKAKI-A-13] A. Tzanakaki, M. P. Anastasopoulos, G. S. Zervas, B. R. Rofoee, R. Nejabati, D. Simeonidou, "Virtualization of Heterogeneous Wireless-Optical Network and IT Infrastructures in Support of Cloud and Mobile Cloud Services", IEEE Communications Magazine, August 2013

[TZANAKAKI-O-13] A. Tzanakaki, M. P. Anastasopoulos, and K. Georgakilas, "Dynamic Virtual Optical Networks Supporting Uncertain Traffic Demands", IEEE/OSA J. Opt. Commun. Netw., vol. 5, no. 10, October 2013 (Invited)

[TZANAKAKI14] Anna Tzanakaki, Markos Anastasopoulos, Kostas Georgakilas, Giada Landi, Giacomo Bernini, Nicola Ciulli, Jordi Ferrer Riera, Eduard Escalona, Joan A. García-Espin, Xavier Hesselbach, Shuping Peng, Reza Nejabati, Dimitra Simeonidou, Damian Parniewicz, Bartosz Belter, Juan Rodriguez Martinez, "Planning of Dynamic Virtual Optical Cloud Infrastructures: The GEYSERS approach", IEEE Communications Magazine, Special Issue on "Advances in Network Planning", January 2014

[VALANCIOUS] V. Valancius et al., "Greening the internet with nano data centers", in Proceedings of the 5th international conference on Emerging networking experiments and technologies (CoNEXT '09), ACM, New York, NY, USA, pp. 37-48

[vrouter] https://github.com/Juniper/contrail-vrouter

[vswitch] http://openvswitch.org

[Wolski] R. Wolski et al., "Using Bandwidth Data to Make Computation Offloading Decisions", Proc. IEEE Int'l Symp. Parallel and Distributed Processing (IPDPS 08), 2008, pp. 1-8

[YAN12] Y. Yan, Y. Qin, G. Zervas, B. Rofoee, D. Simeonidou, "High Performance and Flexible FPGA-Based Time Shared Optical Network (TSON) Metro Node", in Proc. of ECOC 2012

[ZERVAS] G. S. Zervas, J. Triay, N. Amaya, Y. Qin, C. Cervelló-Pastor, and D. Simeonidou, "Time Shared Optical Network (TSON): a novel metro architecture for flexible multi-granular services", Opt. Express 19, B509-B514 (2011)


8. Acronyms

Acronym    Term
BoD        Bandwidth-on-Demand
CAPEX      Capital Expenditures
CMS        Cloud Management System
CONTENT    Convergence of Wireless Optical Network and IT Resources in support of Cloud Services
CP         Control Plane
CSP        Cloud Service Provider
CT         Connectivity Technology
DC         Data Centre
DCN        Data Centre Network
DIP        Datacenter Infrastructure Provider
DS         Distribution System
EDFA       Erbium Doped Fibre Amplifier
EPS        Evolved Packet System
ESS        Extended Service Set
FCAPS      Fault, Configuration, Accounting, Performance, Security
FPGA       Field-Programmable Gate Array
GMPLS      Generalized Multi-Protocol Label Switching
IaaS       Infrastructure as a Service
IML        Infrastructure Management Layer
ISO        International Standards Organization
IT         Information Technology
LAN        Local Area Network
LTE        Long Term Evolution
MME        Mobility Management Entity
MNI        Management to Network Interface
MOVNO      Mobile Optical Virtual Network Operator
MP2MP      MultiPoint to MultiPoint
MP2P       MultiPoint to Point
MVNO       Mobile Virtual Network Operator
NaaS       Network as a Service
NCL        Network Control Layer
NetCONF    Network Configuration Protocol
NITOS      Network Implementation Test-bed using Open Source platforms
OAM        Operation and Management
OBS        Optical Burst Switching
OCCI       Open Cloud Computing Interface
OCNI       Open Cloud Networking Interface
OCS        Optical Circuit Switching
OFDM       Orthogonal Frequency Division Multiplexing
OIP        Optical Infrastructure Provider
OpEx       Operational Expenditures
OPS        Optical Packet Switching
OSNR       Optical Signal to Noise Ratio
P2MP       Point to MultiPoint
PCE        Path Computation Element
PCEP       Path Computation Element Protocol
PI         Physical Infrastructure
PIC        Photonic Integrated Circuit
PIP        Physical Infrastructure Provider
PLZT       Polarized Lead Zirconium Titanate
QoE        Quality of Experience
QoS        Quality of Service
REST       REpresentational State Transfer
ROI        Return on Investment
SAE        System Architecture Evolution
SDN        Software Defined Networking
SLA        Service Level Agreement
SLAE       Sub Lambda Assignment Engine
SNMP       Simple Network Management Protocol
SP         Service Provider
TOR        Top-of-Rack
TSON       Time Shared Optical Network
UE         User Equipment
UNI        User-to-Network Interface
VI         Virtual Infrastructure
VLAN       Virtual Local Area Network
VM         Virtual Machine
VO         Virtual Operator
VON        Virtual Optical Network
VPN        Virtual Private Network
WDM        Wavelength Division Multiplexing
WIP        Wireless Infrastructure Provider
WLAN       Wireless Local Area Network
WP         Work Package

