20111003-boyd-NDDI

Advanced Network Services – Today and Tomorrow
October 3, 2011
• Advanced Network Services - Today
– Yesterday!
– Current Services
– Operations Status
– Upgrade Overview
• Advanced Network Services - Tomorrow
– Today!
– Initiatives
– Current development
– Next steps
Seven strategic focus areas
• Advanced network and network services leadership
• Internet2 Net+: services “above the network”
• U.S. UCAN
• National/Regional collaboration
• Global reach and leadership
• Research community development and engagement
• Industry partnership development and engagement
The New Internet2 Network
• Upgraded 100 Gbps IP/MPLS/ION network with Juniper T1600’s
• New 17,500 mile community-owned network with a 20+ year IRU
• Upgraded peering service network with Juniper MX960’s
• Ciena 6500 optronics: 88 waves, 8.8 Tbps, 55 add/drop sites
• Deployment of a new Layer 2 service on the NDDI/OS3E network
• Just completing the path from Sunnyvale to Chicago, to Washington, and to New York
• Enhanced research programs and support
• Remainder of the network delivered by this time next year
Agenda
• Research Partnership and Engagement
– NDDI/OS3E
– Campus Support for Data Intensive Science
– Performance Initiatives / Performance Portal
• Global Reach and Leadership
– R&E Networking in the Global Arena
– International Exchange Points
– NSF-Funded International Links (IRNC)
Network Development and Deployment Initiative (NDDI)
• Partnership that includes Internet2, Indiana University, & the Clean Slate Program at Stanford as contributing partners. Many global collaborators are interested in interconnection and extension.
• Builds on NSF's support for GENI and Internet2's BTOP-funded backbone upgrade
• Seeks to create a software-defined, advanced-services-capable network substrate to support network and domain research [note: this is a work in progress]
Components of the NDDI Substrate
• 30+ high-speed Ethernet switches deployed across the upgraded Internet2 network and interconnected via 10G waves
• A common control plane being developed by IU, Stanford, and Internet2
• Production-level operational support
• Ability to support service layers & research slices

Switch hardware: 48 x 10G SFP+, 4 x 40G QSFP+, 1.28 Tbps non-blocking, 1 RU
The NDDI Control Plane
• The control plane is key to placing the forwarding behavior of the NDDI substrate under the control of the community and allowing SDN innovations
• Eventual goal is to fully virtualize the control plane to enable substrate slices for community control, research, and service development
• Will adopt open standards (e.g., OpenFlow; see the sketch below)
• Available as open source (Apache 2.0 License)

[Diagram: service layers on the NDDI substrate. Today: a Layer-2 WAN feature (OS³E) and a Layer-3 WAN feature (OSRF). Future: new services and an at-scale testbed (GENI) running experiments.]
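As a rough illustration of what an OpenFlow-style control plane does for a service like OS3E, the sketch below compiles a point-to-point VLAN circuit into per-switch match/action rules. The FlowRule structure, switch names, ports, and VLAN are invented for illustration; this is not the NDDI/OS3E controller code (which, per the implementation status later in this deck, is built on NOX).

```python
# Illustrative sketch only: compile a point-to-point Layer 2 circuit into
# OpenFlow-style match/action rules, one pair of rules per switch on the path.
# The FlowRule structure, switch names, ports, and VLAN are all hypothetical.
from dataclasses import dataclass

@dataclass
class FlowRule:
    switch: str      # substrate switch the rule is installed on
    in_port: int     # match: ingress port
    match_vlan: int  # match: VLAN tag on arrival
    out_port: int    # action: forward out this port
    set_vlan: int    # action: VLAN tag to carry on egress

def compile_circuit(path, vlan):
    """path: ordered (switch, port_toward_A, port_toward_Z) hops.
    Returns rules for both directions of a VLAN circuit between A and Z."""
    rules = []
    for switch, port_a, port_z in path:
        # A -> Z direction
        rules.append(FlowRule(switch, port_a, vlan, port_z, vlan))
        # Z -> A direction (mirror image)
        rules.append(FlowRule(switch, port_z, vlan, port_a, vlan))
    return rules

if __name__ == "__main__":
    # Hypothetical three-hop path across substrate switches
    path = [("chic", 10, 2), ("wash", 1, 2), ("newy", 1, 20)]
    for rule in compile_circuit(path, vlan=310):
        print(rule)
```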
Open Science, Scholarship and Services Exchange (OS3E)
• An example of a community-defined network service built on top of the NDDI substrate.
• The OS3E will connect users at Internet2 POPs with each other, existing exchange points, and other collaborators via a flexible, open Layer 2 network.
• A nationwide distributed Layer 2 “exchange”
– Persistent Layer 2 VLANs with interdomain support
– Production services designed to support the needs of domain science (e.g., LHCONE)
• Will support open interdomain standards
– Initially IDC, eventually NSI
• Available as open source (Apache 2.0 License)
OS3E Service Description
• This service is being developed in response to the community request expressed in the NTAC report and subsequently approved by the AOAC.
• Service Description
– Best-effort service
– National Ethernet Exchange Service (Layer 2)
• User-provisioned VLANs (web UI or API)
• Time-limited and persistent VLANs
– Different price points for hairpin service and inter-node service
– Open access policy
– Underlying wave infrastructure will be augmented as needed, using the same general approach as in the IP network.
– Inter-domain provisioning
– Built on SDN: an open, flexible platform for innovation
OS3E Key Features
• Develop once, implement on many different switches
• VLAN Provisioning Time: < 1 sec
• Automated Failover to Backup Path (sketched below)
• Controller Redundancy
• Auto Discovery of New Switches & Circuits
• Automated Monitoring
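The failover feature listed above can be illustrated with a small sketch: keep a precomputed backup path per circuit and swap to it when a link on the primary path fails. Circuit IDs, link names, and the install_path() stand-in below are hypothetical; this shows the idea, not the OS3E controller implementation.

```python
# Sketch of the failover idea only, not the OS3E controller code: keep a
# precomputed backup path per circuit and swap to it when a primary link fails.
# Circuit IDs, link names, and install_path() are hypothetical stand-ins.

circuits = {
    "circuit-42": {
        "primary": ["chic-wash", "wash-newy"],  # links used by the primary path
        "backup":  ["chic-clev", "clev-newy"],  # precomputed disjoint backup
        "active":  "primary",
    }
}

def install_path(circuit_id, which):
    """Stand-in for pushing the chosen path's flow rules to the switches."""
    print(f"{circuit_id}: installing {which} path {circuits[circuit_id][which]}")

def on_link_down(failed_link):
    """Called when the control plane detects a link failure (e.g., port down)."""
    for circuit_id, circuit in circuits.items():
        if circuit["active"] == "primary" and failed_link in circuit["primary"]:
            circuit["active"] = "backup"
            install_path(circuit_id, "backup")

if __name__ == "__main__":
    install_path("circuit-42", "primary")
    on_link_down("wash-newy")  # simulate a failure on the primary path
```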
OS3E Use Cases
• Dedicated Bandwidth for Large File Transfers or Other Applications
• Layer-2 Connectivity Between Testbeds
• Redundancy / Disaster Recovery Connectivity Between Internet2 Members
• Connectivity to Other Services
– e.g., Net+ Services
OS3E / NDDI Timeline
• April 2011: Early Program Announcement
• May–September 2011: Hardware and controller selection; substrate development
• October 2011: First deployment and national demo; link policy & funding discussion; next site group selection; iNDDI engagement
• November 2011: Expanded deployment; inter-domain capabilities
• December 2011: Initial release of NDDI software
• January 2012: Large-scale national deployment
Support for Network Research
• OS3E: Layer-2 Interconnect for Testbeds and Experiments
• OS3E: Open Platform for Evolving the Network
• NDDI substrate control plane is key to supporting network research
– At-scale, high-performance, researcher-defined network forwarding behavior
– A virtual control plane provides the researcher with the network “LEGOs” to build a custom topology employing a researcher-defined forwarding plane (see the sketch below)
– The NDDI substrate will have the capacity and reach to enable large testbeds
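One way to picture the substrate slices described above is flowspace partitioning in the spirit of OpenFlow slicing tools: each slice owns a portion of the substrate's traffic and that slice's controller decides forwarding for it. The slice names, VLAN ranges, and controller names below are made up for illustration; this is not the NDDI virtualization design.

```python
# Flowspace-slicing sketch in the spirit of OpenFlow slicing tools, not the
# actual NDDI virtualization design. Each research slice owns a portion of the
# substrate's flowspace -- identified here by VLAN ranges, purely an assumption --
# and that slice's controller decides forwarding behavior for matching traffic.

slices = [
    {"name": "production-service", "vlans": range(100, 2000), "controller": "production-ctrl"},
    {"name": "research-testbed",   "vlans": range(3000, 3100), "controller": "research-ctrl"},
]

def controller_for(vlan_id):
    """Decide which slice's controller may program behavior for this VLAN."""
    for s in slices:
        if vlan_id in s["vlans"]:
            return s["controller"]
    return None  # unclaimed flowspace: apply a default policy (e.g., drop)

if __name__ == "__main__":
    for vlan in (310, 3050, 4001):
        print(vlan, "->", controller_for(vlan))
```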
Making NDDI global…
• Substrate will support IDC (i.e., it will be interdomain capable)
• Expect interconnection with other OpenFlow testbeds as a first step (likely statically)
• While the initial investors are US-based, NDDI seeks global collaborators on the substrate infrastructure as well as control plane features
• Currently collecting contact information for those interested in being a part of NDDI
– Please send e-mail to nddi-intl@internet2.edu
“Open”
• Although it may be disruptive to existing business models, we are committed to extending a policy-free approach
• Each individual node should function like an “exchange point” in terms of policy, cost, and capabilities
• The fully distributed exchange will operate as close to an exchange point as possible given constraints (i.e., transport has additional associated costs)
– Inter-node transport scalability and funding need discussion
– Initially, an open, best-effort service
– Potential to add a dedicated priority-queuing feature
• Internet2 would like to position this service at the forefront of pushing “open” approaches in distributed networks.
NDDI / OS3E Implementation Status
• Deployment
– NEC G8264 switch selected for initial deployment
– 4 nodes installed (NYC, DC, Chicago, LA)
– 5th node (Seattle) by SC
• Software
– NOX OpenFlow controller selected for initial implementation
– Software functional to demo Layer 2 VLAN service (OS3E) over OpenFlow substrate (NDDI) by FMM
– Software functional to peer with ION (and other IDCs) by SC11
– Software to peer with SRS OpenFlow demos at SC11
– Open source software package to be made available in 2012
OS3E / NDDI Demo
Getting Connected:
• Deployment of additional switches on demand
• The Internet2 NOC will deploy the switch and circuits and tie them into the OS3E software
• After switch deployment, work with the Internet2 NOC to get OS3E account(s) and a workgroup created
• Begin creating circuits using the OS3E web interface or API (see the example below)
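A hypothetical example of that last step, creating a circuit through an OS3E-style API, is sketched below. The endpoint URL, parameter names, and JSON shape are invented for illustration; the real interface is the OS3E web UI and API referenced above (a demo UI login is given just below).

```python
# Hypothetical illustration of creating a circuit through an OS3E-style API.
# The URL, parameter names, and JSON shape below are invented for this sketch;
# consult the Internet2 NOC / OS3E documentation for the real interface.
import json
import urllib.request

request_body = {
    "workgroup": "example-workgroup",        # created for you by the Internet2 NOC
    "description": "test circuit between two member sites",
    "endpoints": [
        {"node": "chic", "interface": "eth1/1", "vlan": 310},
        {"node": "newy", "interface": "eth2/3", "vlan": 310},
    ],
    "duration_minutes": 120,                 # time-limited VLAN; omit for a persistent one
}

req = urllib.request.Request(
    "https://os3e.example.net/api/circuits",          # hypothetical endpoint
    data=json.dumps(request_body).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # would submit the request if the endpoint were real
print(json.dumps(request_body, indent=2))
```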
OS3E / NDDI Demo
Try out the OS3E UI: http://os3e.net.internet2.edu/
user: os3e password: os3edemo
OS3E Costs and Fees
• We understand the costs.
• There will likely be graduated fees:
– A fee for connectors only wishing to peer with other connectors on the same switch.
– A greater fee for connectors wishing to utilize the network interconnecting these exchange facilities.
• It is hard at this point to suggest exact fees; they could be adapted depending on adoption levels.
• This discussion is more about gathering information from the community.
ION
• ION
– Shared VLAN service across the Internet2 backbone
– Implemented as a combination dedicated / scavenger service atop the Layer 3 infrastructure
– Implements the IDC protocol (see the sketch below)
– Implemented with OSCARS and perfSONAR-PS
• What’s new (October 2011)
– Modified ION to be a persistent VLAN service
– Integrated with ESnet SDN, GÉANT AutoBAHN, and USLHCnet as part of the DICE-Dynamic service
• What’s planned (late 2011)
– As the DYNES service rolls out, ION is the backbone linking the various regional deployments
– Peer ION with OS3E in a few locations, run both services in parallel
– Campus / regional IDCs can also peer with OS3E
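Since ION speaks the IDC protocol via OSCARS, it may help to see the kind of information an IDC-style reservation carries: endpoints, VLANs, bandwidth, and a time window. The field names and URN-style endpoint identifiers below paraphrase the idea and are not the exact OSCARS/IDC message schema or real topology identifiers.

```python
# Sketch of the information an IDC-style reservation (as used by ION via OSCARS)
# carries: endpoints, VLANs, bandwidth, and a time window. Field names and the
# URN-like endpoint identifiers paraphrase the idea; they are not the exact
# OSCARS/IDC message schema or real topology identifiers.
from datetime import datetime, timedelta, timezone

start = datetime.now(timezone.utc)
reservation = {
    "source":           "urn:ogf:network:domain=example.edu:node=campus-sw:port=xe-0/1/0",
    "destination":      "urn:ogf:network:domain=example.net:node=lab-sw:port=xe-1/2/0",
    "source_vlan":      "310",
    "destination_vlan": "any",               # let the IDCs negotiate the far-end tag
    "bandwidth_mbps":   1000,
    "start_time":       start.isoformat(),
    "end_time":         (start + timedelta(hours=4)).isoformat(),
    "description":      "example end-to-end transfer window",
}

for key, value in reservation.items():
    print(f"{key:>18}: {value}")
```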
DYNES
• DYNES (NSF #0958998) = enable dynamic circuit services end to end
– Deploy equipment at the regional and campus levels
– Based on OSCARS to control circuits, FDT to move data, and perfSONAR to monitor performance
– Funding: May 2010 – May 2013
– Emphasis on enabling this service for scientific use
• Current Status
– Finished with our first deployment group; now testing the software and hardware
– Configuring the second group; shipments have started
– Third group in planning stages
DYNES Projected Topology (Fall 2011)
• Based on applications accepted
• Showing peerings to other Dynamic Circuit Networks (DCN)
DYNES Deployment Status
• Group A – Fully deployed, still undergoing testing (Caltech, Vanderbilt, UMich, MAX, Rutgers, UDel, JHU, SOX, AMPATH)
• Group B – Configuring now, deployment expected in the fall (TTU, UTA, UTD, SMU, UH, Rice, LEARN, MAGPI, UPenn, MREN, UChicago, UWisc, UIUC, FIU)
• Group C – Late fall/winter configuration expected, deployment and testing into next year (UCSD, UCSC, UNL, OU, UIowa, NOX, BU, Harvard, Tufts, FRGP, UColorado)
LHCONE Status
• LHCONE is a response to the changing dynamics of data movement in the LHC environment.
• It is composed of multiple parts:
– North America, transatlantic links, Europe
– Others?
• It is expected to comprise multiple services:
– Multipoint service
– Point-to-point service
– Monitoring service
LHCONE Multipoint Service
• Initially created as a shared Layer 2 domain.
• Uses 2 VLANs (2000 and 3000) on separate transatlantic routes in order to avoid loops.
• Enables up to 25G on the transatlantic routes for LHC traffic.
• Use of dual paths provides redundancy.
LHCONE Multipoint Service: L3 Transatlantic Thoughts
[Diagram: VLANs 2000 and 3000 carried between MAN LAN in New York (Brocade switch, Ciena Core Director, Internet2 T1600 with a VPLS routing instance), NetherLight/Amsterdam, GÉANT, and Chicago (T1600), over OC-192 SONET circuits, bonded ACE circuits, USLHCNet circuits, and 10 Gig WAN PHY interfaces.]
LHCONE Point-to-Point Service
• Planned point-to-point service
• Suggestion: build on the efforts of DYNES and the DICE-Dynamic service
• DICE-Dynamic service being rolled out by ESnet, GÉANT, Internet2, and USLHCnet
– Remaining issues being worked out
– Planned commencement of service: October 2011
– Built on OSCARS (ESnet, Internet2, USLHCnet) and AutoBAHN (GÉANT), using the IDC protocol
LHCONE Monitoring Service
• Planned monitoring service
• Suggestion: build on the efforts of DYNES and the DICE-Diagnostic service
• DICE-Diagnostic service being rolled out by ESnet, GÉANT, and Internet2
– Remaining issues being worked out
– Planned commencement of service: October 2011
– Built on perfSONAR
DYNES/ION and LHCONE
• Simple to integrate DYNES/ION and the LHCONE point-to-point service
• Possible to integrate DYNES/ION and the LHCONE multipoint service?
– DYNES / LHCONE architecture team discussing ways to integrate DYNES functionality with LHCONE
– It is expected that point-to-point connections through DYNES would work…
– Possible to position DYNES as an ‘onramp’ or ‘gateway’ to the multipoint service?
• Glue a dynamic connection from a campus (through a regional) into VLAN 2000/3000
• Requires some adjustments to the DYNES end-site addressing and routing configurations to integrate into the LHCONE multipoint Layer 2 environment
• Would allow smaller T3 sites in the US instant access as soon as they get their DYNES gear
Campus Support for Data Intensive Science
• Current Network Regime: a big wall around a network of laptops administered by students
– Breaks the end-to-end model
– Performance tuned for small flows
– Security in the net
• Augmented Network Regime:
– Area in the network where science is supported
– Reinstate the end-to-end model
– Performance tuned for large flows
– Security at the node, not in the net
Network Issues for Data Intensive Science
• Flow Type Co-Mingling
• “Fair” Transport Protocols (Congestion Control)
• Lack of Network Awareness by Application / Middleware
• Firewall Limitations
• Network Elements with Small Buffers (see the worked example below)
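The mismatch called out above between small-flow tuning and large science flows can be made concrete with the bandwidth-delay product: the amount of buffering a single large flow needs to keep a path full. The rates and RTTs below are illustrative values, not measurements from any particular path.

```python
# Worked example of why hosts and network elements tuned for small flows struggle
# with data-intensive science: a single large flow needs roughly a bandwidth-delay
# product of buffering to keep the path full. The rates and RTTs are illustrative.

def bandwidth_delay_product_bytes(rate_gbps, rtt_ms):
    """Bytes in flight needed to keep a path of this rate and round-trip time full."""
    return rate_gbps * 1e9 / 8 * (rtt_ms / 1000)

for rate_gbps, rtt_ms, label in [
    (1,  1,  "campus flow, 1 Gbps, 1 ms RTT"),
    (10, 70, "coast-to-coast flow, 10 Gbps, 70 ms RTT"),
]:
    megabytes = bandwidth_delay_product_bytes(rate_gbps, rtt_ms) / 1e6
    print(f"{label}: ~{megabytes:.1f} MB of buffering required")
```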
Network Solutions for Data Intensive Science
• Dedicated transfer nodes
– High performance systems
• Suite of software
• Specialized performance-focused configurations
• Dedicated transfer facilities
– Dedicated transfer nodes
– Associated networking equipment
– Networking connections
• Network Solutions
– Internet2 IP service, NDDI, OS3E, ION
Implementing Network Support for Data Intensive Science
[Figure 1: LSTI Solution Option Space]
Performance Architecture
[Diagram: layered performance architecture. Performance tools feed data collection; the measurement infrastructure exposes that data through an API; analysis & visualization components sit on top, again behind an API.]
Performance Infrastructure
• perfSONAR is performance middleware, designed to integrate performance monitoring tools across a wide range of networks
• perfSONAR is, or soon will be, widely deployed across campuses, regional networks, backbone networks, and transoceanic links
• Layer 2 and Layer 3 data-gathering tools are, or soon will be, widely deployed across multiple networks (see the sketch below)
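As a rough sketch of the kind of client this measurement infrastructure enables, the snippet below scans recent throughput samples between test points and flags weak paths for diagnosis. The host names and sample values are made up; a real client would pull these numbers from a perfSONAR measurement archive rather than from a literal in the code.

```python
# Rough sketch of the kind of client this measurement infrastructure enables:
# scan recent throughput samples between test points and flag weak paths for
# diagnosis. Host names and sample values are made up; a real client would pull
# these numbers from a perfSONAR measurement archive rather than a literal.

recent_throughput_mbps = {
    ("campus-dtn.example.edu", "lab-dtn.example.org"): [9200, 9400, 8800],
    ("campus-dtn.example.edu", "eu-dtn.example.net"):  [450, 620, 380],
}

def flag_weak_paths(measurements, floor_mbps=1000):
    """Report any source/destination pair whose worst recent sample is below the floor."""
    for (src, dst), samples in measurements.items():
        worst = min(samples)
        if worst < floor_mbps:
            print(f"{src} -> {dst}: worst recent sample {worst} Mbps "
                  f"(below {floor_mbps} Mbps), candidate for diagnosis")

if __name__ == "__main__":
    flag_weak_paths(recent_throughput_mbps)
```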
Performance Use Cases
• A CIO/CEO wants to see a global view of network activity and performance comparisons with other networks.
• An end user wants to look at a network weather map to determine if there are currently ‘storms’ in the area.
• An end user wants to evaluate whether a specific application they want to use is likely to work or not.
• A network engineer wants to diagnose local or inter-domain network performance problems.
Performance: What’s Missing?
• Analysis and visualization tools
• Collective support for Inter-domain
performance problems
– Help desk
– Training classes
Performance: A Vision for the Future
• We intend to bring together the missing components in the form of a comprehensive performance program:
– Performance portal
– Monitoring as an integral component of Campus Support for Data Intensive Science
– Data collection integrated into network services
– Community engagement in the collective problem of end-to-end performance
R&E Networking in the Global Arena
• Campuses and researchers feel the need to think globally, not locally
– Requires a strategic focus by Internet2
• The global architecture needs to be more cohesive
– Work intentionally in partnership on global network capacity
• Services need to be global in scope
– Support for data-intensive science
– Telepresence
– International campuses
MAN LAN
• New York Exchange Point
• Ciena Core Director and Cisco 6513
• Current Connections on the Core Director:
– 11 x OC-192
– 9 x 1 GigE
• Current Connections on the Cisco 6513:
– 16 x 10G Ethernet
– 7 x 1G Ethernet
MAN LAN Roadmap
• Switch upgrade:
– A Brocade MLXe-16 was purchased with:
• 24 x 10G ports
• 24 x 1G ports
• 2 x 100G ports
– Internet2 and ESnet will be connected at 100G.
• The Brocade will allow landing transatlantic circuits of greater than 10G.
• An IDC for dynamic circuits will be installed.
– Complies with the GLIF GOLE definition
MAN LAN Services
• MAN LAN is an Open Exchange Point.
• 1 Gbps, 10 Gbps, and 100 Gbps interfaces on the Brocade switch.
– 40 Gbps could be available by 2012.
• Map dedicated VLANs through for Layer 2 connectivity beyond the Ethernet switch.
• With the Brocade, higher-layer services are possible should there be a need.
– This would include OpenFlow being enabled on the Brocade.
• Dynamic services via an IDC.
• perfSONAR-PS instrumentation.
Agenda
• Research Partnership and Engagement
– NDDI/OS3E
– Campus Support for Data Intensive Science
– Performance Initiatives / Performance Portal
• Global Reach and Leadership
– R&E Networking in the Global Arena
– Manhattan Landing Exchange Point (MAN LAN)
– Washington DC International Exchange Point (WIX)
– NSF-Funded International Links (IRNC)
IRNC Program
• International Research Network Connections (IRNC) is an NSF Office of Cyberinfrastructure program to enable collaboration in the research and education community on a global scale.
– These grants facilitate hardware and software solutions to foster access to remote instruments, data, and computational resources located throughout the world.
• Internet2 was awarded 2 IRNC Special Projects awards in May of 2010:
– Integrate IRNC-funded links into the distributed monitoring framework available across R&E networks
– Integrate IRNC-funded links into the dynamic circuit framework available across R&E networks
GLIF 2011 - Topology
Advanced Network Services Tomorrow
October 3rd, 2011, Internet2 Fall Member Meeting
Eric Boyd, Internet2
For more information, visit http://www.internet2.edu/
IRNC SP:IRIS
• NSF Grant # 0962704
• IRIS will provide a software framework to simplify the task of end-to-end network performance monitoring and diagnostics
– Based on the widely deployed perfSONAR-PS infrastructure and protocols
– Facilitates broader deployment of perfSONAR-enabled resources, increasing the likelihood of diagnostic resources being available along end-to-end paths
• Integrates with existing deployments on R&E networks (ESnet, GÉANT, Internet2, regionals, and NRENs) as well as those maintained by scientific VOs (USATLAS, LHCOPN, eVLBI, REDDnet)
• Will work with IRNC ProNet awardees to customize deployment for target network functionality
• Target end date is April 2013
IRNC SP:DyGIR
• NSF Grant # 0962705
• DyGIR will provide a component-based solution for scheduling dynamic circuits on IRNC ProNet infrastructure
– Utilizes the OSCARS software suite, developed by ESnet
– Integrates circuit statistics and network monitoring via the perfSONAR-PS framework
• Capabilities will integrate with existing backbone networks (ESnet SDN, GÉANT AutoBAHN, Internet2 ION), as well as emerging campus deployments (DYNES, an NSF MRI-funded effort).
• Will work with IRNC ProNet awardees to customize deployment for target network functionality
• Target end date is April 2013
IRNC Outreach to ProNet Awardees
• ACE
– perfSONAR-PS test points to be available, along with periodic monitoring to select locations within GÉANT
– Exploring what makes sense from a dynamic circuit network perspective. Given proximity to MAN LAN, a MAN LAN-supported IDC may make an ACE-specific one unnecessary.
• AmLight
– Participated in a joint demonstration of DYNES and RNP’s dynamic circuit infrastructure at GLIF
– IDC available in the AMPATH exchange for use on international links
– perfSONAR-PS monitoring available
• GLORIAD
– Discussed current DyGIR and IRIS activities
– Currently have the ability to create VLANs on infrastructure. Looking to integrate with dynamic circuit networks over the next couple of years.
– Exploring current GLORIAD peerings and heavy users to determine what use cases could benefit from dynamic circuits.
– Exploring the best way to capitalize on perfSONAR; specifically, looking at ways to publish the passive flow data currently collected by GLORIAD using perfSONAR protocols
• TransLight/Pacific Wave
– Established a working group to install perfSONAR-PS monitoring software at endpoints and participants in the region
– Evaluating dynamic capabilities
• TransPAC3
– Have traded topology information and are determining which switches in the infrastructure to put under dynamic control. Will likely peer with ION and JGN2.
– Currently has perfSONAR-PS test points and periodic tests with APAN