Programmable Virtual Networks: From Network Slicing To Network Virtualization
Ali Al-Shabibi, Open Networking Laboratory

Outline
• Define FlowVisor
  – Its design goals
  – Its successes
  – Its limitations
• Describe and define network virtualization
• Introduce OpenVirteX (formerly known as NetVisor), which provides programmable virtual networks

Why FlowVisor?
• Evaluating new network services is hard
• Experimenters want to control the behaviour of their network
• New services may require changes to switch software
• They also require access to real-world traffic
• Good ideas rarely get deployed

OK… why is it hard?

Current Virtualization à la FlowVisor
• Network slice = collection of sliced switches, links, and traffic (or header space)
• Each slice is associated with a controller
• Transparent slicing: every slice believes it has full and sole control of the datapath
• FlowVisor enforces traffic and slice isolation
• Not a generalized virtualization

Great! What about real traffic?
• FlowVisor allows users to opt in to services in real time
  – Individual flows can be delegated to a slice by a user
  – Admins can add policy to a slice dynamically
• Example slices: VoIP slice, video slice, web slice, and "all the rest"

Sprinkle some resource limits
• Slicing resources includes:
  – Specifying the link bandwidth
  – Maximum number of forwarding rules
  – Fraction of switch CPU

FlowSpace: which slice controls which packet?
• The FlowSpace maps packets to slices

FlowVisor: where does it live?
• Sits between switches and controllers
• Speaks OpenFlow up and down
• Acts as a proxy between switches and controllers
• Datapaths and controllers run unmodified
• FlowVisor answers two questions: is this action allowed? Who controls this packet?

Message Handling - PacketIn
• A PacketIn arrives from the datapath
• If it is an LLDP packet, send it to the appropriate slice; done
• Otherwise, extract the match structure and match it against the FlowSpace
  – On a match, check whether the requested actions are allowed; if not, log an exception
  – If allowed, send the PacketIn to the slice (dropped if the slice's controller is not connected)
  – If there is no match, or the packet was not sent to any slice, insert a drop rule

Message Handling - FlowMod
• Check whether the slice is permitted to use the FlowMod's actions; if not, send an error and log an exception
• Extract the match structure and intersect it with the slice's FlowSpace
• If there are zero intersections, the slice has no permission for this FlowMod: log an exception
• Otherwise, for each intersection, rewrite the original FlowMod with the FlowSpace information; done
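To make the PacketIn and FlowMod handling above concrete, here is a minimal Python sketch of FlowSpace lookup and FlowMod rewriting. It is illustrative only: the real FlowVisor is an OpenFlow proxy written in Java, and the `FlowSpaceEntry`, `handle_packet_in`, and `rewrite_flow_mod` names are hypothetical, not FlowVisor's actual API.

```python
# Minimal sketch of FlowVisor-style message handling (illustrative only; the
# real FlowVisor has a far richer FlowSpace model with wildcards and priorities).

from dataclasses import dataclass, field

@dataclass
class FlowSpaceEntry:
    slice_name: str
    match: dict                 # e.g. {"in_port": 1, "tp_dst": 5060}
    allowed_actions: set = field(default_factory=set)

def covers(entry_match: dict, pkt_fields: dict) -> bool:
    """An entry covers a packet if every field it specifies matches (unset fields act as wildcards)."""
    return all(pkt_fields.get(k) == v for k, v in entry_match.items())

def handle_packet_in(flowspace, pkt_fields, requested_actions):
    """Decide which slice (if any) receives a PacketIn, as in the PacketIn flowchart."""
    for entry in flowspace:
        if covers(entry.match, pkt_fields):
            if not requested_actions <= entry.allowed_actions:
                print("log: action not allowed for slice", entry.slice_name)
                return None
            return entry.slice_name          # forward the PacketIn to this slice's controller
    return None                              # no match: insert a drop rule instead

def rewrite_flow_mod(flowspace, slice_name, fm_match: dict):
    """Intersect a slice's FlowMod match with its FlowSpace; one rewritten FlowMod per intersection."""
    rewrites = []
    for entry in flowspace:
        if entry.slice_name != slice_name:
            continue
        # the intersection keeps the more specific value for each field;
        # conflicting values mean the intersection is empty
        merged = dict(entry.match)
        if any(k in merged and merged[k] != v for k, v in fm_match.items()):
            continue
        merged.update(fm_match)
        rewrites.append(merged)
    if not rewrites:
        print("log: slice has no permission for this FlowMod")   # zero intersections
    return rewrites

# toy usage
fs = [FlowSpaceEntry("voip", {"tp_dst": 5060}, {"output"}),
      FlowSpaceEntry("web",  {"tp_dst": 80},   {"output", "set_vlan"})]
print(handle_packet_in(fs, {"tp_dst": 5060, "in_port": 3}, {"output"}))   # -> "voip"
print(rewrite_flow_mod(fs, "web", {"in_port": 3}))                        # -> [{'tp_dst': 80, 'in_port': 3}]
```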
FlowVisor Highlights
• Demonstrations:
  – Open Networking Summit '12 and '13
  – GENI GEC 9
  – Best demo at SIGCOMM '09
• Deployments:
  – GENI
  – OFELIA
  – Stanford production network
  – In use at NEC and Ericsson labs, as well as at other vendors
• 3 releases in the past year
  – The 1.0 release was downloaded over 70 times in one day
• [Figure: FlowVisor 1.0 downloaders across university research, R&E networks, commercial network operators, and vendors, including KSU, U of Wisconsin, U of Utah, Clemson, BBN, NYSERNet, CENIC, AT&T, Comcast, Rutgers, APNIC, Georgia Tech, Goldman Sachs, EarthLink, Cisco, PSINet, Aruba, RCN, NEC, and Ericsson]

FlowVisor Summary
• FlowVisor introduces the concept of a network slice
• Not a complete virtualization solution
• Originally designed to test new network services on production traffic
• But it's really only a network slicer!
• FlowVisor provides network slicing, but not complete network virtualization

What should Network Virtualization be? (At least what I think ;) )
• Conceptually introduces a virtual network that is decoupled from the physical network
• Should not change the abstractions we know and love from physical networks
• Should provide some new ones: instantiation, deletion, service deployment, migration, etc.

What is Network Virtualization?
• VPNs, overlays, VLANs, VRF, MPLS, TRILL: none of these give you a virtual network
• They merely virtualize one aspect of a network

Topology Virtualization
• Virtual links
• Virtual nodes
• Decoupled from the physical network

Address Virtualization
• Virtual addressing
• Maintain current abstractions
• Add some new ones

Policy Virtualization
• Who controls what?
• What guarantees are enforced?

Network Virtualization vs. Network Slicing
• Slicing:
  – Sorry, you can't
  – You would need to discriminate the traffic of two networks with something other than the existing header bits
  – Thus no address virtualization and no complex topology virtualization
• Network virtualization:
  – Virtual networks are completely independent
  – Virtual networks are distinguished by the tenant ID
  – Complete address and topology virtualization

Virtualization State of the Art
• Functionality is implemented at the edge
• Uses tunnelling techniques such as STT, VXLAN, and GRE
• The network core is not available for innovation
• A closed-source controller controls the behaviour of the network
• Provides address and topology virtualization, but limited policy virtualization
• Moreover, the topology looks like just one big switch

Big Switch Abstraction
• [Figure: edge ports E1–E6 of the physical network collapsed onto big virtual switches (SWITCH 1, SWITCH 2)]
• A single switch greatly limits the flexibility of the network controller
• You cannot specify your own routing policy
• What if you want a tree topology?

Current Virtualization vs OpenVirteX
• Current virtualization solutions:
  – Networks are not programmable
  – Functionality is implemented at the edge
  – The network core is not available for innovation
  – Tunnels must be provisioned to provide a virtual topology
  – Address virtualization is provided by encapsulation
• OpenVirteX:
  – Each virtual network is handed to a controller for programming
  – Edge and core are available for innovation
  – The entire physical topology can be exposed to the downstream controller
  – Address virtualization is provided by remapping/rewriting header fields
  – Both dataplanes and controllers can be used unmodified

OpenVirteX
"All problems in computer science can be solved by another level of indirection." – David Wheeler

Ultimate Goal
• [Figure: several Network OSes, each controlling its own virtual network (possibly running in a VM); OpenVirteX (NetVisor) maps topology, address space, and control functions onto the physical network]

Address Space Virtualisation
• Control traffic address translation: OpenVirteX (NetVisor) translates between each tenant's virtual IP space, as seen by its Network OS, and the physical IP space
• Data traffic address mapping: virtual addresses are rewritten to physical addresses at the edge switches and mapped back at the far edge
• Physical address encoding (32 bits each): the source physical IP carries the tenant ID's most significant bits plus a transformed virtual source IP; the destination physical IP carries the tenant ID's least significant bits plus a transformed virtual destination IP
  – (a rough sketch of one possible encoding follows the abstraction slides below)

Topology Virtualization - Abstractions
• Expose the physical topology to tenants
• Virtual link: collapse a multi-hop path into a one-hop link
• The approach is also valid for proactive rules

OpenVirteX Abstractions (contd.)
• Virtual switch: collapse ports dispersed over the network into a single switch
  – Allows the OpenVirteX admin to control routing within the virtual switch
  – A separate controller is used for each virtual switch
• The big switch is a virtual switch with all edge ports
• [Figure: virtual view overlaid on the physical network, distinguishing core ports from edge ports, with a VM attached at the edge]
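The virtual link and virtual switch abstractions above boil down to mapping tables kept inside OpenVirteX: each virtual port or virtual hop is backed by one or more physical ports and paths. The sketch below is a hypothetical Python illustration of such mappings; the `PhysPort`, `VirtualLink`, and `VirtualSwitch` names, the example dpids, and the map layout are invented for this example and do not reflect the actual OpenVirteX code base.

```python
# Illustrative-only sketch of the topology mappings behind the virtual link and
# virtual switch abstractions. All names and structures are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class PhysPort:
    dpid: str      # physical switch id
    port: int

@dataclass
class VirtualLink:
    """A one-hop virtual link that collapses a multi-hop physical path."""
    tenant_id: int
    endpoints: tuple            # (PhysPort, PhysPort) seen by the tenant as adjacent
    physical_path: list         # ordered list of PhysPort hops actually traversed

@dataclass
class VirtualSwitch:
    """A single virtual datapath made of ports dispersed over the physical network."""
    tenant_id: int
    vdpid: str
    port_map: dict              # virtual port number -> PhysPort

    def to_physical(self, vport: int) -> PhysPort:
        return self.port_map[vport]

# A "big switch" is simply a virtual switch whose port_map contains every edge port.
big_switch = VirtualSwitch(
    tenant_id=1,
    vdpid="00:a4:23:05:00:00:00:01",
    port_map={1: PhysPort("s1", 3), 2: PhysPort("s4", 1), 3: PhysPort("s7", 2)},
)

link = VirtualLink(
    tenant_id=1,
    endpoints=(PhysPort("s1", 4), PhysPort("s7", 4)),
    physical_path=[PhysPort("s1", 4), PhysPort("s3", 1), PhysPort("s3", 2), PhysPort("s7", 4)],
)

# When the tenant controller outputs to virtual port 2 of the big switch,
# the virtualization layer would translate that to the mapped physical switch/port.
print(big_switch.to_physical(2))    # -> PhysPort(dpid='s4', port=1)
```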
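Similarly, the address space virtualisation slide earlier folds the tenant ID into the physical IP addresses (its most significant bits into the source, its least significant bits into the destination) together with a transformation of the virtual address. The rough Python sketch below shows one possible such encoding for a single address; the 8-bit tenant / 24-bit host split and the helper names are assumptions for illustration, not the actual OpenVirteX scheme. Control-plane messages would need the inverse translation so that a tenant's Network OS only ever sees its own virtual addresses.

```python
# Rough sketch of tenant-aware IP rewriting for data traffic, as described on the
# address space virtualisation slide. The 8/24 bit split and names are assumed.

import ipaddress

TENANT_BITS = 8                       # assumed width of the tenant id field
HOST_MASK = (1 << (32 - TENANT_BITS)) - 1

def virt_to_phys(tenant_id: int, virtual_ip: str) -> str:
    """Fold the tenant id into the high bits and keep the low bits of the virtual IP."""
    v = int(ipaddress.IPv4Address(virtual_ip))
    p = (tenant_id << (32 - TENANT_BITS)) | (v & HOST_MASK)
    return str(ipaddress.IPv4Address(p))

def phys_to_virt(physical_ip: str, virtual_prefix: str = "10.0.0.0") -> tuple:
    """Recover the tenant id and the virtual IP at the far edge of the network."""
    p = int(ipaddress.IPv4Address(physical_ip))
    tenant_id = p >> (32 - TENANT_BITS)
    v = (int(ipaddress.IPv4Address(virtual_prefix)) & ~HOST_MASK) | (p & HOST_MASK)
    return tenant_id, str(ipaddress.IPv4Address(v))

phys = virt_to_phys(tenant_id=7, virtual_ip="10.0.0.5")
print(phys)                     # "7.0.0.5" in this toy encoding
print(phys_to_virt(phys))       # -> (7, "10.0.0.5")
```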
OpenVirteX Interaction with the Real-World
• [Figure: OpenVirteX (NetVisor) as the layer between tenant controllers, cloud management, and the physical network]

OpenVirteX API Mapping to Quantum
• [Figure: the OpenStack management system (Nova, Quantum, and other components, each with its plugin) drives OpenVirteX; VMs (VM1, VM2, VM3) attach through vSwitches / virtual switches, and OpenVirteX programs the physical network over OpenFlow]

OpenVirteX API Mapping to Quantum
• Create Network API → Quantum ✔
• Attach Port API → ✔
• Create vRouter API → ✔
• Configure Topology API → via the Router extension

High Level Features
• Support for more generalized network virtualization, as opposed to slicing
  – Address virtualization: use extra bits, or clever use of the tenant ID in the header
  – Topology virtualization: on-demand topologies
• Integrate with the cloud using OpenStack
  – Via the Quantum plugin
• Support any OpenFlow 1.x version, simultaneously
• Support for scale, HA, and security features
  – Incorporate the right building blocks from other OSS

Just finished implementing a prototype

Current Status
• Quick-and-dirty prototype implemented
• Provides address space virtualisation/isolation
• Two topology abstractions:
  – Virtual link
  – Virtual switch
• The current implementation is not intended to scale or provide any significant performance
  – It's a proof of concept

Future Challenges
• Traffic engineering, e.g., load balancing
• Reliability, e.g., disjoint paths
• The above need special attention when offering topology abstractions
  – They may even be severely impacted
• Physical topology changes
• A tenant may ask for reconfiguration of its virtual network
• Extremely challenging to get right

Conclusion
• FlowVisor 1.0 will remain supported
• OpenVirteX is still in the design phase
  – But our clear goal is to deliver programmable virtual networks
• An initial proof of concept may be available in Q3 2013
• Contributions to FlowVisor and OpenVirteX are greatly appreciated and welcome

Thanks! Questions?