pac.c
Packet & Circuit Convergence with OpenFlow

Saurav Das, Guru Parulkar, & Nick McKeown
Stanford University
http://www.openflowswitch.org/wk/index.php/PAC.C

Ciena India, April 2nd 2010
Internet has many problems
Plenty of evidence and documentation.
The Internet's "root cause problem": it is closed for innovation.
We have lost our way
Routing, management, mobility management, access control, VPNs, …

[Figure: today's router. Control software (IPSec, firewall, routing; OSPF-TE, RSVP-TE, HELLO exchanges) sits on an operating system over specialized packet forwarding hardware, above the hardware datapath. Millions of lines of source code, 5400 RFCs, a barrier to entry; 500M gates, 10 Gbytes RAM, bloated, power hungry.]

Many complex functions baked into the infrastructure:
OSPF, BGP, multicast, differentiated services,
traffic engineering, NAT, firewalls, MPLS, redundant layers, …
An industry with a "mainframe mentality".
Glacial process of innovation, made worse by a captive standards process:
Idea → Standardize → wait 10 years → Deployment
• Driven by vendors
• Consumers largely locked out
• Glacial innovation
Change is happening in non-traditional markets.
[Figure: the transition. Many closed boxes, each running apps on its own operating system over specialized packet forwarding hardware, become apps running on one Network Operating System that controls all the forwarding hardware.]
The "Software-defined Network"
1. Open interface to hardware: simple packet forwarding hardware
2. At least one good network operating system: extensible, possibly open-source
3. Well-defined open API between the applications and the Network Operating System
Trend
Computer industry: App / Windows, Linux, or Mac OS / virtualization layer / x86 (the computer).
Network industry: App / controller, e.g. NOX (the Network OS) / virtualization or "slicing" layer / OpenFlow switches (the network).

Simple, common, stable hardware substrate below + programmability + strong isolation model + competition above = faster innovation.
The Flow Abstraction
Exploit the flow table in switches, routers, and chipsets.

Flow 1:  Rule (exact & wildcard) | Action         | Statistics
Flow 2:  Rule (exact & wildcard) | Action         | Statistics
Flow 3:  Rule (exact & wildcard) | Action         | Statistics
…
Flow N:  Rule (exact & wildcard) | Default Action | Statistics

Rule: e.g. port, VLAN ID, L2, L3, L4 fields, …
Action: e.g. unicast, mcast, map-to-queue, drop
Statistics: count of packets & bytes; expiration time/count
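Below is a minimal sketch, in Python, of how such a flow table could be modelled. The FlowEntry/lookup names, the action-string encoding, and the drop-on-miss default are our assumptions for illustration, not the OpenFlow specification.

```python
# Minimal sketch of the Rule / Action / Statistics abstraction; class and
# helper names are ours, not the OpenFlow specification's.
from dataclasses import dataclass

WILDCARD = "*"  # a wildcarded field matches any packet value

@dataclass
class FlowEntry:
    rule: dict        # header field -> exact value, or WILDCARD
    action: str       # e.g. "forward:port6", "queue:2", "drop"
    packets: int = 0  # statistics: packets matched
    bytes: int = 0    # statistics: bytes matched

    def matches(self, pkt: dict) -> bool:
        # Exact & wildcard matching: every non-wildcard field must agree.
        return all(v == WILDCARD or pkt.get(f) == v
                   for f, v in self.rule.items())

def lookup(table: list, pkt: dict) -> str:
    """First matching entry wins; statistics update on every hit."""
    for entry in table:
        if entry.matches(pkt):
            entry.packets += 1
            entry.bytes += pkt.get("len", 0)
            return entry.action
    return "drop"  # table-miss behaviour is also an assumption here
```

First-match-wins is what lets the exact and wildcard rules on the following slides coexist in one table.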
OpenFlow Switching
An OpenFlow switch couples a hardware flow table with a software secure channel to the controller. Over that channel the controller can:
• add/delete flow entries
• receive encapsulated packets (those that miss in the flow table)
• perform controller discovery
A Flow is any combination of the 10 header fields described in the Rule.
Flow Example
A routing controller speaks the OpenFlow protocol to each switch along a path, installing a (Rule, Action, Statistics) entry in every one of them. A Flow is the fundamental unit of manipulation within a switch.
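A toy sketch of that reactive pattern, under the assumption of a precomputed route map; the Switch/RoutingController classes and the flow_mod/on_packet_in names are hypothetical stand-ins, not a real OpenFlow stack.

```python
# The controller installs a (Rule, Action) entry in every switch on the
# route when the first packet of a flow is sent up to it.

class Switch:
    def __init__(self, name):
        self.name, self.flow_table = name, []
    def flow_mod(self, rule, action):
        self.flow_table.append((rule, action))  # "add flow entry"

class RoutingController:
    def __init__(self, routes):
        self.routes = routes  # dst IP -> [(switch, out_port), ...]
    def on_packet_in(self, pkt):
        # First packet of a flow was encapsulated up to the controller;
        # push one entry per switch on the chosen path.
        for sw, port in self.routes[pkt["ip_dst"]]:
            sw.flow_mod({"ip_dst": pkt["ip_dst"]}, f"forward:{port}")

s1, s2, s3 = Switch("s1"), Switch("s2"), Switch("s3")
ctrl = RoutingController({"5.6.7.8": [(s1, 6), (s2, 2), (s3, 1)]})
ctrl.on_packet_in({"ip_src": "1.2.3.4", "ip_dst": "5.6.7.8"})
print(s1.flow_table)  # [({'ip_dst': '5.6.7.8'}, 'forward:6')]
```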
OpenFlow is Backward Compatible
Each rule matches the 10-tuple (Switch Port, MAC src, MAC dst, Eth type, VLAN ID, IP Src, IP Dst, IP Prot, TCP sport, TCP dport); * is a wildcard.

Ethernet Switching:    MAC dst = 00:1f:.. , all other fields *  → port6
IP Routing:            IP Dst = 5.6.7.8 , all other fields *  → port6
Application Firewall:  TCP dport = 22 , all other fields *  → drop
OpenFlow allows layers to be combined

Flow Switching:
  Switch Port = port3, MAC src = 00:2e.., MAC dst = 00:1f..,
  Eth type = 0800, VLAN ID = vlan1, IP Src = 1.2.3.4, IP Dst = 5.6.7.8,
  IP Prot = 4, TCP sport = 17264, TCP dport = 80  → port6

VLAN + App:
  VLAN ID = vlan1, TCP dport = 80, all other fields *  → port6, port7

Port + Ethernet + IP:
  Switch Port = port3, MAC src = 00:2e.., Eth type = 0800,
  IP Dst = 5.6.7.8, IP Prot = 4, all other fields *  → port10
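The rules from these two slides, written as data for a sketch matcher: omitted fields are implicitly wildcarded, and the full MAC addresses are hypothetical completions of the slide's abbreviated 00:1f:.. and 00:2e:.. entries.

```python
# Sketch only: a rule is a dict of the fields it constrains; anything the
# rule omits is a wildcard.

def matches(rule, pkt):
    return all(pkt.get(field) == value for field, value in rule.items())

ethernet_switching = ({"mac_dst": "00:1f:00:00:00:01"}, "forward:port6")
ip_routing         = ({"ip_dst": "5.6.7.8"}, "forward:port6")
app_firewall       = ({"tcp_dport": 22}, "drop")

# Layers combined: one rule can mix port, L2, L3 and L4 fields.
flow_switching = ({"in_port": "port3", "mac_src": "00:2e:00:00:00:02",
                   "mac_dst": "00:1f:00:00:00:01", "eth_type": 0x0800,
                   "vlan_id": "vlan1", "ip_src": "1.2.3.4",
                   "ip_dst": "5.6.7.8", "ip_proto": 4,
                   "tcp_sport": 17264, "tcp_dport": 80}, "forward:port6")
vlan_plus_app  = ({"vlan_id": "vlan1", "tcp_dport": 80},
                  "forward:port6,port7")
port_eth_ip    = ({"in_port": "port3", "mac_src": "00:2e:00:00:00:02",
                   "eth_type": 0x0800, "ip_dst": "5.6.7.8",
                   "ip_proto": 4}, "forward:port10")

pkt = {"in_port": "port3", "vlan_id": "vlan1", "tcp_dport": 80}
rule, action = vlan_plus_app
print(matches(rule, pkt), action)  # True forward:port6,port7
```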
A Clean Slate Approach
Goal: put an open platform in the hands of researchers and students to test new ideas at scale.
Approach:
1. Define the OpenFlow feature
2. Work with vendors to add OpenFlow to their switches
3. Deploy on college campus networks
4. Create experimental open-source software, so researchers can build on each other's work
OpenFlow Hardware
• Juniper MX-series
• HP ProCurve 5400
• Quanta LB4G
• NEC IP8800
• WiMAX (NEC)
• WiFi
• Cisco Catalyst 6k
• Arista 7100 series (Fall 2009)
• Ciena CoreDirector (Fall 2009)
OpenFlow Deployments
Research and production deployments on commercial hardware:
Juniper, HP, Cisco, NEC, (Quanta), …
• Stanford deployments
  – Wired: CS Gates building, EE CIS building, EE Packard building (soon)
  – WiFi: 100 OpenFlow APs across SoE
  – WiMAX: OpenFlow service in SoE
• Other deployments
  – Internet2
  – JGN2plus, Japan
  – 10-15 research groups have switches
Nationwide OpenFlow Trials
Campuses: UW, Univ Wisconsin, Princeton, Stanford, Indiana Univ, Rutgers, Clemson, Georgia Tech; backbones: NLR, Internet2.
Production deployments before end of 2010.
Motivation
IP & transport networks (carrier's view):
• are separate networks, managed and operated independently,
• resulting in duplication of functions and resources in multiple layers,
• and significant capex and opex burdens
… well known.
Motivation
… Convergence is hard
… mainly because the two networks have very different architectures, which makes integrated operation hard
… and previous attempts at convergence have assumed that the networks remain the same, making what goes across them bloated, complicated, and ultimately unusable.
We believe true convergence will come about from architectural change!
UCP
[Figure: the IP (packet) and transport (circuit) layers under a UCP (unified control plane), redrawn as a single flow network.]
pac.c
Research goal: packet and circuit flows commonly controlled & managed, in a simple network of flow switches that switch at different granularities: packet, time-slot, lambda & fiber.
OpenFlow & Circuit Switches
Packet flows: the 10-tuple rule (Switch Port, MAC src, MAC dst, Eth type, VLAN ID, IP Src, IP Dst, IP Prot, TCP sport, TCP dport) with an Action.
Circuit flows: exploit the cross-connect table in circuit switches. Each entry maps (In Port, In Lambda, VCG, Starting Time-Slot, Signal Type) to (Out Port, Out Lambda, VCG, Starting Time-Slot, Signal Type).
The Flow Abstraction presents a unifying abstraction, blurring the distinction between underlying packet and circuit, and regarding both as flows in a flow-switched network.
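A sketch of one cross-connect entry as a data structure; the field names mirror the table above, while the CrossConnect class and the example values are illustrative assumptions.

```python
# A circuit "flow" is one row of the cross-connect table.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CrossConnect:
    in_port: str
    in_lambda: Optional[str]   # None on fixed-wavelength / TDM-only ports
    in_vcg: int                # virtual concatenation group
    in_start_slot: int         # starting time-slot
    signal_type: str           # e.g. "VC4", "STS192"
    out_port: str
    out_lambda: Optional[str]
    out_vcg: int
    out_start_slot: int

# A hypothetical VC-4 cross-connect between two TDM ports:
xc = CrossConnect(in_port="P1", in_lambda=None, in_vcg=3, in_start_slot=1,
                  signal_type="VC4",
                  out_port="P2", out_lambda=None, out_vcg=3,
                  out_start_slot=4)
```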
pac.c Example
On the packet side (GE ports), packet flows are tagged into internal VLANs and handed to the TDM fabric:
• IP 11.12.0.0          → + VLAN2, out P1
• VLAN 1025             → + VLAN2, out P2
• IP 11.13.0.0, TCP 80  → + VLAN7, out P2
On the circuit side, each VLAN maps to a virtual concatenation group (VCG) of time-slots on the TDM ports:
• VLAN2 → VCG3 = { P1 VC4 slot 1, P2 VC4 slot 4, P1 VC4 slot 10 }
• VLAN7 → VCG5 = { P3 STS192 slot 1 }
[Figure: OpenFlow software (Rule, Action, Statistics) driving both fabrics of one hybrid node: a packet switch fabric on the GE ports and a TDM circuit switch fabric on the TDM ports.]
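The same example as data a controller might install, one half per fabric; the dict layout and the pairing of matches to actions (taken in slide order) are our assumptions.

```python
# Packet rules on the GE side: (match, action) pairs.
packet_rules = [
    ({"ip_dst": "11.12.0.0"},                  {"push_vlan": 2, "out": "P1"}),
    ({"vlan_id": 1025},                        {"push_vlan": 2, "out": "P2"}),
    ({"ip_dst": "11.13.0.0", "tcp_dport": 80}, {"push_vlan": 7, "out": "P2"}),
]

# Circuit side: which VCG carries each VLAN, and the time-slots making up
# each VCG on the TDM ports.
vlan_to_vcg = {2: 3, 7: 5}
vcg_members = {
    3: [("P1", "VC4", 1), ("P2", "VC4", 4), ("P1", "VC4", 10)],
    5: [("P3", "STS192", 1)],
}
```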
Unified Architecture
Networking applications run on a NETWORK OPERATING SYSTEM, which speaks the OPENFLOW protocol down to packet switches, circuit switches, and combined packet & circuit switches: one unifying abstraction and a unified control plane over the underlying data plane switching.
Example Network Services
• Static "VLANs"
• New routing protocol: unicast, multicast, multipath, load-balancing
• Network access control
• Mobile VM management
• Mobility and handoff management
• Energy management
• Packet processor (in controller)
• IPvX
• Network measurement and visualization
• …
Converged packets & dynamic circuits open up new capabilities:
network recovery, congestion control, routing, traffic engineering, QoS, power management, VPNs, discovery.
Example Application
Congestion control via variable bandwidth packet links
OpenFlow Demo at SC09
We demonstrated 'variable bandwidth packet links' at SuperComputing 2009:
• joint demo with Ciena Corp. on Ciena CoreDirector switches
• packet (Ethernet) and circuit (SONET TDM) switching fabrics and interfaces
• native support of OpenFlow for both switching technologies
• the Network OS controls both switching fabrics
• a network application establishes packet & circuit flows, and modifies circuit bandwidth in response to packet flow needs
http://www.openflowswitch.org/wp/2009/11/openflow-demo-at-sc09/
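A sketch of the "modifies circuit bandwidth" step as we read it: size the circuit (the VCG's number of VC-4 members) to the packet flows' measured demand. The members_needed helper and the roughly 150 Mb/s VC-4 payload figure are assumptions for illustration, not the demo code.

```python
# Grow or shrink the variable bandwidth packet link with demand.
import math

VC4_MBPS = 150  # approximate VC-4 payload rate

def members_needed(demand_mbps: float) -> int:
    """How many VC-4s the circuit should carry for this packet demand."""
    return max(1, math.ceil(demand_mbps / VC4_MBPS))

print(members_needed(100))  # 1  (light traffic: keep the minimum circuit)
print(members_needed(400))  # 3  (video ramps up: grow the circuit)
```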
OpenFlow Demo at SC09: OpenFlow Testbed
[Figure: an OpenFlow controller speaks the OpenFlow protocol to a NetFPGA-based OpenFlow packet switch (NF1, NF2) and to a WSS-based OpenFlow circuit switch built from a 1x9 wavelength selective switch (WSS) and an AWG. GE-to-DWDM SFP convertors (λ1 = 1553.3 nm, λ2 = 1554.1 nm) and 25 km of SMF connect a video server (192.168.3.10) to video clients (192.168.3.12, 192.168.3.15); taps go to an OSA.]
Lab Demo with Wavelength Switches
OpenFlow packet switch
OpenFlow packet switch
25 km SMF
GE-Optical
GE-Optical
Mux/Demux
Openflow Circuit Switch
pac.c next step:
a larger demonstration of capabilities enabled by converged networks
Demo Goals
• The next big demo of capabilities, @GEC8 (July 20th):
  – merge the aggregation demo (SIGCOMM'09) with the UCP & dynamic circuits demo (SC'09)
  – and provide differential treatment to aggregated packet flows
• OpenFlow allows for:
  – packet flow aggregation based on any of the packet headers, without any encapsulation, tagging etc.
  – circuit flows of varying bandwidths, from 50 Mbps to 40 Gbps
• By merging the two, we can demonstrate fine-grained control of aggregated flows:
  – best-effort packet service (over shared static circuits) for apps like http, ftp, smtp
  – low-bandwidth, minimum propagation delay paths for applications like VoIP
  – variable bandwidth (BoD) service for applications like streaming video
• Possible extensions include varying levels of network recovery: re-routing packet flows, protected circuit flows etc.
Demo Topology
[Figure: four applications over the NETWORK OPERATING SYSTEM controlling a topology of Ethernet packet switches (ETH, PKT) and hybrid packet/TDM switches interconnected by a SONET transport core.]
Demo Methodology
[Figure: the same topology, used to walk through the four steps below.]
Step 1: Aggregation into Fixed Circuits
[Figure: packet flows are aggregated at the edge and muxed into static circuits, for best-effort traffic: http, smtp, ftp etc.]
Step 2: Aggregation into Dynamic Circuits
[Figure: streaming video flows are initially muxed into the static circuits; streaming video traffic then increases.]
Step 2: Aggregation into Dynamic Circuits (contd.)
[Figure: the increase leads to video flows being aggregated and packed into a dynamically created circuit that bypasses the intermediate packet switch.]
Step 2: Aggregation into Dynamic Circuits (contd.)
[Figure: an even greater increase in video traffic results in a dynamic increase of circuit bandwidth.]
Step 3: Fine-grained control
[Figure: VoIP flows are aggregated over a dynamic low-bandwidth circuit with minimum propagation delay.]
Step 3: Fine-grained control (contd.)
[Figure: decreasing video traffic results in removal of the dynamic circuit.]
Step 4: Network Recovery
[Figure: circuit flow recovery via (1) a previously allocated backup circuit (protection) or (2) a dynamically created circuit (restoration); packet flow recovery via rerouting.]
Demo References
http://openflow.smugmug.com/OpenFlow-Videos/AggregationDemo/9651006_JGGzo#651126002_QybPc-L-LB
http://www.openflowswitch.org/wk/index.php/PAC.C
pac.c business models
Demo Motivation
• It is well known that Transport Service Providers dislike giving up manual control of their networks to an automated control plane, no matter how intelligent that control plane may be. How do we convince them?
• It is also well known that converged operation of packet & circuit networks is a good idea for those that own both types of networks, e.g. AT&T, Verizon. But what about those who own only packet networks, e.g. Google? They do not wish to buy circuit switches. How do we convince them?
• We believe the answer to both lies in virtualization (or slicing).
Demo Goals
• The 3rd big demo of OpenFlow capabilities with circuit switches, potentially targeted for SuperComputing 2010 (November 15th)
• Goal #1: demonstrate OpenFlow as a unified virtualization platform for packet and circuit switches.
• Goal #2: demonstrate a deployment scenario for converged packet and circuit networks owned by different service providers: essentially a technical/business model which TSPs can be comfortable with, and which ISPs can buy into.
Basic Idea: Unified Virtualization
[Figure: several client controllers (C) speak the OpenFlow protocol to a FLOWVISOR, which in turn speaks the OpenFlow protocol to a shared substrate of packet (P) and circuit (CK) switches.]
Deployment Scenario: Different SPs
ISP ‘A’ Client
Controller
C
Private Line
Client Controller
C
ISP ‘B’ Client
Controller
C
OpenFlow Protocol
Under Transport Service
Provider (TSP) control
FLOWVISOR
OpenFlow Protocol
CK
Isolated
Client
Network
Slices
P
CK
CK
P
CK
CK
P
P
Single
Physical
Infrastructure
of Packet &
Circuit
Switches
Demo Topology
[Figure: ISP# 1's NetOS and ISP# 2's NetOS each run applications over their own OpenFlow-enabled packet networks, extended with separate slices of the TSP's virtualized transport network; the TSP's private line customer is provisioned alongside them.]
Demo Methodology
We will show:
1. The TSP can virtualize its network with the FlowVisor while maintaining operator control via NMS/EMS.
   a) The FlowVisor will manage slices of the TSP's network for ISP customers, where { slice = bandwidth + control of part of the TSP's switches }
   b) NMS/EMS can be used to manually provision circuits for Private Line customers
2. Importantly, every customer (ISP# 1, ISP# 2, Private Line) is isolated from the other customers' slices.
   a) ISP# 1 is free to do whatever it wishes within its slice, e.g. use an automated control plane (like OpenFlow), and bring up and tear down links as dynamically as it wants
   b) ISP# 2 is free to do the same within its slice
   c) Neither can control anything outside its slice, nor interfere with other slices (a minimal sketch of this check follows the list)
   d) The TSP can still use NMS/EMS for the rest of its network
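A minimal sketch of that isolation check, assuming a FlowVisor-like proxy that vets each client request against a slice record of ports, flowspace, and bandwidth; the slices schema and allow() are illustrative, not the FlowVisor API.

```python
# Forward a client controller's request only if it stays in its slice.
slices = {
    "ISP1": {"ports": {"P1", "P2"}, "vlans": {2}, "bw_mbps": 1000},
    "ISP2": {"ports": {"P3"},       "vlans": {7}, "bw_mbps": 500},
}

def allow(client: str, request: dict) -> bool:
    s = slices[client]
    return (request["out_port"] in s["ports"]
            and request["vlan_id"] in s["vlans"]
            and request.get("bw_mbps", 0) <= s["bw_mbps"])

print(allow("ISP1", {"out_port": "P1", "vlan_id": 2, "bw_mbps": 300}))  # True
print(allow("ISP2", {"out_port": "P1", "vlan_id": 7}))  # False: outside slice
```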
ISP #1's Business Model
ISP# 1 pays for a slice = { bandwidth + TSP switching resources }
1. Part of the bandwidth is for static links between its edge packet switches (like ISPs do today)
2. and some of it is for redirecting bandwidth between the edge switches (unlike current practice)
3. The sum of both static bandwidth and redirected bandwidth is paid for up-front.
4. The TSP switching resources in the slice are needed by the ISP to enable the redirect capability.
ISP# 1's network
[Figure: ISP# 1's packet (virtual) topology over the actual topology. Notice the spare interfaces on the edge packet switches, and the spare bandwidth in the slice.]
ISP# 1's network (contd.)
[Figure: ISP# 1 redirects bandwidth between the spare interfaces to dynamically create new packet links!]
ISP #1's Business Model Rationale
Q. Why have spare interfaces on the edge switches? Why not use them all the time?
A. Spare interfaces on the edge switches cost less than bandwidth in the core.
1. Sharing expensive core bandwidth between cheaper edge ports is more cost-effective for the ISP.
2. It gives the ISP flexibility in using dynamic circuits to create new packet links where needed, when needed.
3. The comparison, in the simple network shown (and counted in the sketch below), is between:
   a) 3 static links + 1 dynamic link = 3 ports/edge switch + static & dynamic core bandwidth
   b) 6 static links = 4 ports/edge switch + static core bandwidth
   c) as the number of edge switches increases, the gap increases
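An illustrative count only; the concrete topologies here (a full static mesh vs. a ring of static links plus one spare, redirectable interface per switch) are assumptions, not the exact network drawn on the slide, but they show the same widening gap.

```python
# Compare provisioned ports and static core links as edge switches grow.

def full_static_mesh(n):
    # Every pair of edge switches gets a dedicated static link.
    return {"ports_per_switch": n - 1,
            "static_core_links": n * (n - 1) // 2}

def ring_plus_dynamic(n, spares=1):
    # n static links in a ring, plus spare interfaces whose core bandwidth
    # is only provisioned when a dynamic link is created.
    return {"ports_per_switch": 2 + spares,
            "static_core_links": n}

for n in (4, 8, 16):
    print(n, full_static_mesh(n), ring_plus_dynamic(n))
# The port and static-bandwidth gap widens as edge switches are added,
# matching point (c) above.
```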
ISP #2's Business Model
ISP# 2 pays for a slice = { bandwidth + TSP switching resources }
1. Only the bandwidth for static links between its edge packet switches is paid for up-front.
2. Extra bandwidth is paid for on a pay-per-use basis.
3. TSP switching resources are required to provision/tear down the extra bandwidth.
4. Extra bandwidth is not guaranteed.
ISP# 2's network
[Figure: ISP# 2's packet (virtual) topology over the actual topology. Only the static link bandwidth is paid for up-front; ISP# 2 uses variable bandwidth packet links (our SC09 demo)!]
ISP #2's Business Model Rationale
Q. Why use variable bandwidth packet links? In other words, why have more bandwidth at the edge (say 10G) and pay for less bandwidth in the core up-front (say 1G)?
A. Again, it is for cost-efficiency reasons.
1. ISPs today would pay for the 10G in the core up-front and then run their links at 10% utilization.
2. Instead they could pay for, say, 2.5G or 5G in the core, and ramp up when they need to or scale back when they don't: pay per use (a back-of-envelope sketch follows).
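A back-of-envelope version of that argument; all prices, premiums, and utilization figures are assumptions for illustration, not TSP tariffs.

```python
price_per_gbps = 1.0  # arbitrary cost unit, paid up-front

# Today: provision the core for the 10G edge rate, use ~10% of it.
static_cost = 10.0 * price_per_gbps

# Pay-per-use: provision 2.5G up-front, burst to 10G for 10% of the time,
# paying a premium for on-demand bandwidth.
base, burst, burst_time, premium = 2.5, 10.0, 0.10, 1.5
dynamic_cost = (base * price_per_gbps
                + burst_time * (burst - base) * price_per_gbps * premium)

print(static_cost, round(dynamic_cost, 3))  # 10.0 vs 3.625
```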
Demonstrating Isolation
TSP provisions private
line and uses up all the
spare bw on the link
E
T
H
PKT
E
T
H
S
O
N
E
T
ISP #2 can still vary
bw on this link
S
O
N
E
T
T P
E
D K T
M T H
T
D
M
The switches inform the
ISP# 2’s controller, that the
non-guaranteed extra
bandwidth is no longer
available on this link (may
be available elsewhere)
S
O
N
E
T
E P
T K
H T
ISP# 2’s NetOS
T P
E
D K T
M T H
FlowVisor would block
ISP#2’s attempts on this link
E
T
H
PKT
Actual topology
Private line
customer
E
T
H
PKT
E
T
H
E
T
H
Demo References
• FlowVisor technical report:
  http://openflowswitch.org/downloads/technicalreports/openflow-tr-2009-1-flowvisor.pdf
• Use of spare interfaces (for ISP# 1): OFC 2002 paper
• Variable bandwidth packet links (for ISP# 2):
  http://www.openflowswitch.org/wp/2009/11/openflow-demo-at-sc09/
Summary
• OpenFlow is a large clean-slate program with many motivations and goals; convergence of packet & circuit networks is one such goal
• OpenFlow simplifies and unifies across layers and technologies: packet and circuit infrastructures, electronics and photonics
• and it enables new capabilities in converged networks, with real circuits or virtual circuits