Reality Check on Implementing SDN

Panel: Reality Check on Implementing SDN
Kuang-Ching (KC) Wang, Clemson University
William Brockelsby, North Carolina State University
Tripti Sinha, University of Maryland
Slide 1
Introductions
• Moderator: KC Wang, Clemson University
– Associate Professor and Networking CTO
– Director, Center of Excellence for Next Generation
Computing & Creativity
• William Brockelsby, NC State University
– Lead Network Architect
• Tripti Sinha, University of Maryland
– Assistant VP and CTO
– Executive Director, MAX
Slide 2
Panel Discussion Questions
• Is SDN ready for production deployment on
university campuses?
• How will SDN, once mature and in place,
impact university faculty, students, staff, and IT?
• Where, if anywhere, should further investment be made
to advance SDN deployment on university campuses?
Slide 3
Clemson University
CC-NIE: Clemson NextNet
Slide 4
Clemson NextNet – a Campus-Wide Science DMZ
[Topology diagram: ISPs, Internet2 AL2S, and C-Light/SCLR at the border;
a campus-wide Science DMZ alongside the enterprise core and buildings;
enterprise computing and advanced computing (Palmetto/CloudLab/GENI/CRI)]
• Big Switch Controller (partnership)
• Dell/Force10, Pica8
• 20 buildings at 40/10 Gbps
• I2 AL2S via Brocade MLXe32
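
As a concrete illustration of the Science DMZ steering idea in the diagram above, here is a minimal hypothetical sketch. Clemson's deployment used the Big Switch controller; this sketch instead uses the open-source Ryu framework with OpenFlow 1.3, and the DTN address and port number are invented, not taken from the actual network:

# Hypothetical sketch: steer a known science flow onto the Science DMZ
# path so it bypasses the stateful enterprise firewall.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

SCIENCE_HOST = '10.0.5.10'   # assumed data-transfer node address
DMZ_PORT = 24                # assumed uplink toward the Science DMZ

class DmzSteering(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_ready(self, ev):
        dp = ev.msg.datapath
        parser, ofp = dp.ofproto_parser, dp.ofproto
        # Match IPv4 traffic sourced by the data-transfer node.
        match = parser.OFPMatch(eth_type=0x0800, ipv4_src=SCIENCE_HOST)
        actions = [parser.OFPActionOutput(DMZ_PORT)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        # Higher priority than the default pipeline, so science flows take
        # the friction-free path while all other traffic is untouched.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))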
Slide 5
Our Proposed Focus
• Production acquisition, deployment, operation
• Build campus “SDN Team” (IT + faculty + students)
• Focus on science use cases
– Bioengineering – Transfer HPC overlay for da Vinci surgical robots
– Genomics – Transfer large data sets from NCBI/NIH
– Social Media – Transfer cross-country search results
– Visualization/Video – Transfer large graphic/video data
Slide 6
What Worked, What Did Not
✓ Up and running – mixed-vendor HW + Big Switch Controller
✓ Brings faculty, students, and IT, across universities, closer than ever – new
partnerships are here to stay
✓ Showed what’s possible – new NSF projects proposed/funded in multiple
domains
✓ I2 AL2S (also an SDN) is heavily used
✗ Painful for engineers – switch firmware bugs, vendor-dependent OpenFlow
behavior, no production-grade campus SDN controller, limited demand –
growth witnessed over 3 years, but not “production ready”
✗ What’s available with SDN today has not tangibly benefited science
– not yet, but significant progress already; getting close
– other than network researchers, science users care about:
– speed (SDN or not)
– workflow (SDN to be developed; requires inter-campus SDN coordination/federation)
Slide 7
Most Significant Benefits of CC-NIE:
Bringing People, Schools, Research Areas Closer
Feltus Researches Computing Solutions to Complex Genetics Problems and Engages Many NSF
REACT
Slide 8
Production SDN Indeed is Coming
University IT is Transforming
• While the deployed CC-NIE network needed lots of “care”, network
engineers accumulated many insights into SDN realities & gotchas.
• In parallel, IT satisfactorily evaluated and adopted a vendor white-box
SDN data center fabric and monitoring fabric
– production experience, fast feature velocity, close vendor relationship
– BUT a completely new model for network planning, provisioning, security
– breaking silos, shift of responsibilities
• SDN is still useful to sustain innovations already underway, even if it
runs in a smaller footprint, carries less-than-abundant traffic, and remains
painful for network engineers for a while.
– New mindset on campus – researchers + engineers collaborating as the day-to-day norm
Slide 9
North Carolina State University
CC-NIE: Data Intensive e-Science and SDN
Slide 10
Goals
•Eliminate bandwidth bottlenecks that hinder the
success of researchers across many different
disciplines in several physical locations on campus
•Leverage SDN functionality of new hardware to
develop a virtual science DMZ
•Connect our SDN environment to the regional SDN
environment being developed by MCNC (provides
access to I2 AL2S)
Slide 11
Eliminate Bandwidth Bottlenecks
•Upgrade research building uplinks from 1Gb/s to
10Gb/s and deploy 10Gb/s aggregation switches
where needed
•Deploy 1Gb/s switches with 10Gb/s uplinks to
replace legacy 10/100 Mb/s switches
•Make direct 10Gb/s connections available where
needed (research specific server rooms)
Slide 12
Deploy a Hybrid Architecture
•New switches can operate in “classic” mode for
typical use cases
•Specific ports can operate in SDN/OpenFlow mode
to permit development of new use cases (virtual
science DMZ)
•Hybrid architecture allows experimentation with new
technology at lower risk
Slide 13
SDN Application
•Permit dynamic virtual circuit provisioning on ports
participating in SDN on hybrid switches
•Leverage Q-in-Q and MPLS L2 VPNs to connect
“islands of SDN” transparently on campus, in
hardware at line rate (sketched below)
•Develop a virtual science DMZ by deploying
dedicated “friction-free” paths, using SDN in
buildings and Q-in-Q/MPLS on the backbone
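
To make the “islands of SDN” stitching concrete, here is a minimal hypothetical sketch using the open-source Ryu controller and OpenFlow 1.3; the port numbers and S-VLAN ID are invented for illustration and do not describe NC State’s actual configuration:

# Hypothetical sketch: push an outer 802.1ad (Q-in-Q) tag on traffic
# from an SDN-enabled port so the "island" rides the campus backbone
# transparently at line rate.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

ISLAND_PORT = 3    # assumed SDN-enabled access port
TRUNK_PORT = 49    # assumed uplink toward the Q-in-Q/MPLS backbone
S_VLAN = 200       # assumed service VLAN identifying this circuit

class QinQStitch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_ready(self, ev):
        dp = ev.msg.datapath
        parser, ofp = dp.ofproto_parser, dp.ofproto
        # Anything entering the island port gets an outer S-tag and is
        # forwarded onto the backbone trunk.
        match = parser.OFPMatch(in_port=ISLAND_PORT)
        actions = [
            parser.OFPActionPushVlan(0x88a8),  # outer 802.1ad ethertype
            parser.OFPActionSetField(vlan_vid=ofp.OFPVID_PRESENT | S_VLAN),
            parser.OFPActionOutput(TRUNK_PORT),
        ]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))

A mirror rule at the far island would pop the outer tag; the MPLS L2 VPN variant carries the same traffic in a pseudowire across the backbone instead of an S-tag.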
Slide 14
Slide 15
Current Status
•Procured 48 1Gb/s switches with 10Gb/s ports and
4 10Gb/s aggregation switches
•All but 3 switches are physically deployed; continuing
to patch researchers into the infrastructure
•In the process of deploying an SDN-based virtual circuit
controller for the virtual science DMZ use case
•Working with our regional network provider to prepare for
integration into the regional SDN (access to I2 AL2S)
Slide 16
Project Review
•Developed excellent opportunities to start and
continue discussions between faculty, staff and
students
•Hybrid architecture permits immediate benefit with
low risk adoption of SDN based use cases
•Minor difficulty with SDN/OpenFlow: Some hardware
and software challenges
•Overall an excellent opportunity for our campus:
Thank You!
Slide 17
University of Maryland
Tripti Sinha
Slide 18
SDN focused work sponsored by:
NSF Award No: 1246386
CC-NIE Integration: SDNX - Enabling End-to-End Dynamic Science DMZ
Lead Organization: UMD
Co-PIs: GW
NSF Award No: 1340984
CC-NIE Integration: High Performance Computing with Data and
Networking Acceleration (HPCDNA)
Lead Organization: UMD
Slide 19
Reality Check on Implementing SDN: SDN’s Play
Commercial Infrastructures
R&E Infrastructures, Science DMZs
Creative One-offs
Slide 20
Reality Check on Implementing SDN
• Commercial Infrastructures
– SDN in commercial datacenters
– SDN in private self-built custom networks (Google)
– Next: SDN in commercial service providers
– Future: SDN in the Enterprise
• R&E Infrastructures, Science DMZs
– Internet2's nationwide OpenFlow network and their OESS/FSFW software
suite
– ESnet and their dynamic provisioning system OSCARS
– Some regional networks have deployed the same or similar technologies
– Campus Science DMZs
– Campus core networks and datacenters (in support of Science DMZs)
Slide 21
How Maryland and MAX are employing SDN
Slide 22
UMD and MAX SDN Focus Areas
• SDN across the MAX regional network and inside the “ScienceDMZ Facility”
– SDN as an enabling function to coordinate and orchestrate access to compute
and storage services in user-specific ways
• Creative one-offs in managing security
– Applying new approaches to threat identification and response
– SDN controller used as a “more flexible policy-based router”
(allowing access to trusted flows and denying untrusted flows) – see the sketch below
– SDN technologies have a more palatable cost in keeping up with higher
network speeds than expensive proprietary hardware-based
security appliances
• Adoption of SDN across the core enterprise will likely be slow
– Campus networks are complex environments serving multiple needs
quite well
– The adoption rate will only increase when compelling use cases see SDN as
an enabler (like the ScienceDMZ). No compelling forcing function yet.
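
As an illustration of the “more flexible policy-based router” idea above, here is a hedged sketch using the open-source Ryu controller and OpenFlow 1.3; the trusted prefixes and port number are invented for illustration and are not MAX’s actual policy:

# Hypothetical sketch: forward flows from trusted sources toward a
# protected segment in hardware and drop everything else, instead of
# punting line-rate traffic through a proprietary security appliance.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

TRUSTED_SRCS = [('10.10.0.0', '255.255.0.0'),      # assumed research nets
                ('192.168.50.0', '255.255.255.0')]
PROTECTED_PORT = 12                                 # assumed port to DTN pool

class PolicyRouter(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_ready(self, ev):
        dp = ev.msg.datapath
        parser, ofp = dp.ofproto_parser, dp.ofproto
        # Trusted prefixes: permit and forward toward protected resources.
        for net, mask in TRUSTED_SRCS:
            match = parser.OFPMatch(eth_type=0x0800, ipv4_src=(net, mask))
            actions = [parser.OFPActionOutput(PROTECTED_PORT)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=50,
                                          match=match, instructions=inst))
        # All other IPv4 traffic: explicit drop (empty instruction list).
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=1,
                                      match=parser.OFPMatch(eth_type=0x0800),
                                      instructions=[]))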
Slide 23
Reality Check on Implementing SDN
• Some unexpected positive byproducts of SDN work
– Bringing communities together (scientists and information
technologists)
– Recognizing that enablers need a closer coupling with
domain experts
– Being creative with SDN – looking beyond datacenters
and Science DMZs
Slide 24