
U.S. Department of Energy
Office of Science
Mathematical, Information, and Computational Sciences (MICS) Division
ESCC Meeting
July 21-23, 2004
Network Research Program
Update
Thomas D. Ndousse
Program Manager
Program Goals: What's New

• New SciDAC and MICS network research projects:
  1. Ultra-Science Network Testbed – base funding
  2. ESnet MPLS Testbed – base funding
  3. Application pilot projects (fusion energy, biology, physics)
  4. GMPLS control plane
  5. GridFTP Lite (Generalized File Transfer Protocol)
  6. Transport protocols for switched dedicated links
  7. Cyber security: IDS and group security
  8. Data grid wide-area network monitoring for LHC
  Gap: network-enabled storage systems
• Leadership Class National Supercomputer
• Budget reduction in FY-04 and FY-05
• SC Network PI meeting in late September 2004
Revised Program Focus

Previous Focus
• R&D on fundamental networking issues
• Single investigators and small groups of investigators
• Limited emphasis on technology transfer and integration
• Limited emphasis on network, middleware, and application integration

New Focus
• Applied research, engineering, and testing
• Experimental networking using UltraNet and the MPLS testbed
• Integrated application, middleware, and network prototypes
• Leadership-class supercomputing
  – Impact on network research
  – Impact on research testbeds
  – Impact on inter-agency network coordination activities
Network Research Program Elements

• RD&E: Research, Development, and Engineering
• ANRT: Advanced Network Research Testbeds
• ECPI: Early Career Principal Investigators
• SBIR: Small Business Innovation Research
FY-03/04 Network Research Program Budget

Budget by fiscal year:
  FY-03/04: SciDAC $2.0M + MICS $4.5M = $6.5M total
  FY-04/05: SciDAC $2.0M + MICS $2.5M = $4.5M total

FY-04 budget by focus area:
  Network Security 32%, Network Measurement 31%, High-Speed Protocols 26%, Others 11%

FY-04 budget by program element (total $4.5M):
  R&D 67%, Testbeds 25%, ECPI 8%
Implementation of Office of Science Networking Recommendations – I
(Very High-Speed Data Transfers)

Data, data, data, data everywhere!
• Many science areas, such as high energy physics, computational biology, climate modeling, and astrophysics, predict a need for multi-Gbit/s data transfer capabilities in the next 2 years

Program Activities
• Scalable TCP protocol enhancements for shared networks
• Scalable UDP for shared networks and dedicated circuits (see the sketch below)
• Alternative TCP/UDP transport protocols
• Bandwidth on-demand technologies
• GridFTP Lite
• Ultra high-speed network components
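
Why UDP variants suit dedicated circuits: with no competing traffic on a provisioned circuit, a sender can pace datagrams at the circuit rate rather than probe for capacity the way TCP does. A minimal Python sketch of the pacing idea, assuming an RBUDP-style design in which loss recovery happens in a separate retransmit pass (function name and receiver address are hypothetical):

    import socket, time

    def paced_udp_send(data, dest, rate_gbps, mtu=1472):
        # Fixed-rate pacing replaces TCP congestion control on a circuit
        # that carries no competing traffic; NACK-based loss recovery is
        # omitted from this sketch.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        interval = (mtu * 8) / (rate_gbps * 1e9)  # seconds per datagram
        next_send = time.perf_counter()
        for off in range(0, len(data), mtu):
            sock.sendto(data[off:off + mtu], dest)
            next_send += interval
            delay = next_send - time.perf_counter()
            if delay > 0:
                time.sleep(delay)  # real pacers spin for finer precision

    # paced_udp_send(payload, ("receiver.example.org", 9000), rate_gbps=1.0)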
Implementation of Office of Science Networking Recommendations – II
(Diverse SC Network Requirements)

Problem
• Many science areas, such as high energy physics, computational biology, climate modeling, and astrophysics, predict a need for multi-Gbit/s data transfer capabilities in the next 2 years

Program Vision (layered architecture, top to bottom):
• High-end science applications
• High-performance middleware
• Transport: TCP, TCP variants, UDP variants, and others
• Logical network layer: packet-switched, circuit-switched, and hybrid-switched links
• Optical layer
• Control and signaling plane (cutting across the layers)
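
One way to read this stack: the middleware layer maps each application flow onto a transport suited to whatever kind of logical link the control plane provides. A toy Python sketch of that dispatch (the policy and strings are illustrative assumptions, not program specifications):

    def pick_transport(link_kind):
        # Map the logical network layer's link type to a transport choice.
        # "packet" = shared IP path, "circuit" = dedicated wavelength,
        # "hybrid" = packet path with an optional provisioned circuit.
        if link_kind == "circuit":
            return "rate-paced UDP variant (no congestion control needed)"
        if link_kind == "hybrid":
            return "TCP variant that can hand off to a provisioned circuit"
        return "standard TCP (congestion-controlled, fair on shared links)"

    for kind in ("packet", "circuit", "hybrid"):
        print(kind, "->", pick_transport(kind))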
Implementation of Office of Science Networking Recommendations – III

Advanced Research Network (Ultra-Science Network, 20 Gbps)
• Experimental optical inter-networking
• On-demand bandwidth / DWDM circuits
• Ultra high-speed protocol development and testing
• GMPLS

High-Impact Science Network (ESnet QoS/MPLS Testbed, 5 sites)
• Connects a few high-impact science sites
• Ultra high-speed IP network technologies
• Reliable and secure services
• QoS/MPLS for on-demand bandwidth

Production Network (ESnet)
• Connects all DOE sites
• 7x24 and highly reliable
• Advanced Internet capability
• Predominantly best-effort
Impact of the MPLS and Ultra-Science Network Testbeds

Category A sites (local fiber arrangements):
1. FNAL   OC-12/OC-192   – Tier 1, CMS
2. ANL    OC-12/OC-192
3. ORNL   OC-12/OC-192   – Leadership Computing
4. PNNL   OC-12/OC-192   – EMSL Computing
5. NERSC  OC-48/OC-192   – Flagship Computing
6. LBL    OC-48/OC-192
7. SLAC   OC-12/OC-192   – BABAR Data Source

Category B sites (no local fiber arrangements; T3 to OC-12):
1. BNL – Tier 1, ATLAS
2. JLab
3. GA
4. Princeton
5. MIT

Plan for Category A sites:
1. Use UltraNet to link sites with local fiber connectivity
2. Develop dynamic provisioning technologies to manage DWDM circuits
3. Develop and test advanced transport protocols for high-speed data transfers over DWDM links

Plan for Category B sites:
1. Use MPLS to establish LSPs linking sites with high-impact applications
2. Use MPLS to provide guaranteed end-to-end QoS to high-impact applications
3. Link LSPs with dynamically established GMPLS circuits
Advanced Research Network Testbeds (QoS + MPLS)

Goal
• Develop advanced network technologies that provide guaranteed, on-demand, end-to-end bandwidth to selected high-impact science applications

Technical Activities
• Deploy site QoS technologies at selected DOE sites
• Integrate QoS with local grid infrastructure
• Deploy MPLS in the ESnet core network
• Integrate MPLS with GMPLS
• Integrate on-demand bandwidth technologies with applications (see the sketch below)

Target Science Applications
• High energy physics (CMS & ATLAS): high-speed data transfer
• Fusion energy: remote control of scientific instruments
• Nuclear physics: remote collaborative visualization
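
The core mechanism behind guaranteed on-demand bandwidth is admission control: a reservation is accepted only if every link along the end-to-end path still has headroom for it. A minimal Python sketch of that idea (link names and capacities are made up for illustration):

    def admit(path_links, capacity_gbps, reserved_gbps, request_gbps):
        # First pass: check every link on the path for headroom.
        for link in path_links:
            if reserved_gbps.get(link, 0.0) + request_gbps > capacity_gbps[link]:
                return False  # reject rather than oversubscribe a link
        # Second pass: commit the reservation only after all links pass.
        for link in path_links:
            reserved_gbps[link] = reserved_gbps.get(link, 0.0) + request_gbps
        return True

    capacity = {"site-edge": 10.0, "esnet-core": 10.0}
    reserved = {}
    print(admit(["site-edge", "esnet-core"], capacity, reserved, 4.0))  # True
    print(admit(["site-edge", "esnet-core"], capacity, reserved, 8.0))  # False: links full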
Initial MPLS Deployment in ESnet

[Map: initial deployment, with QoS as the site technology and MPLS (extended to MPLS & GMPLS) as the core technology. Sites shown: CERN, PNNL, NERSC, SLAC, Caltech, GA, StarLight, BNL, FNAL, ORNL, and JLab; StarLight and FNAL appear as GMPLS sites.]
Ultra-Science Network Testbed: Topology
(Upgrade: 20 Gbps backbone)

[Map: UltraNet topology overlaid on the 10 Gbps ESnet, with 20 Gbps backbone segments; sites shown include CERN, PNNL, BNL, NERSC, LBL, SLAC, JLab, Sunnyvale, Caltech, FNAL, SOX, StarLight, and ORNL. Legend: 10 Gbps ESnet links; 10 Gbps UltraNet links; links under discussion; DOE national labs; DOE university partners.]

Major Nodes
• StarLight/FNAL
• SOX/ORNL
• Seattle/PNNL
• Sunnyvale/SLAC
• Sunnyvale/Caltech
Ultra-Science Network Testbed: Activities

Dynamic Provisioning
• Development of data circuit-switching technologies
• IP control plane based on GMPLS (see the signaling sketch below)
• Integration of QoS, MPLS, and GMPLS
• Inter-domain control plane signaling
• Bandwidth on-demand technologies
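
A GMPLS control plane sets up a circuit with a two-pass reservation, much like RSVP-TE: a PATH message travels downstream claiming a label (here, a wavelength) at each hop, and a RESV confirmation travels back upstream. A toy Python walk-through of just that idea (real signaling carries far more state; hop names are illustrative):

    def setup_circuit(hops, wavelength):
        # Downstream PATH pass: reserve the wavelength at each hop.
        reserved = []
        for hop in hops:
            if wavelength not in hop["free"]:
                for h in reserved:             # on failure, release what was
                    h["free"].add(wavelength)  # taken, akin to a PATH_ERR teardown
                raise RuntimeError("no free lambda at " + hop["name"])
            hop["free"].remove(wavelength)
            reserved.append(hop)
        # Upstream RESV pass: confirm hop by hop back toward the source.
        return [h["name"] for h in reversed(reserved)]

    hops = [{"name": n, "free": {1550, 1551}} for n in ("FNAL", "StarLight", "ORNL")]
    print(setup_circuit(hops, 1550))  # ['ORNL', 'StarLight', 'FNAL']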
Ultra High-Speed Data Transfer Protocols
• High-speed transport protocols for dedicated channels
• High-speed data transfer protocols for dedicated channels
• Layered data multicasting

Ultra High-Speed Cyber Security
• Ultra high-speed IDS (see the sketch below)
• Ultra high-speed firewalls and alternatives
• Control plane security
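
One standard tactic for intrusion detection at multi-Gbit/s rates is to inspect only a sampled subset of packets so the detector keeps pace with the link, trading some detection coverage for throughput. A small Python sketch of the sampling wrapper (the detector function is a placeholder assumption):

    import random

    def sampled_inspect(packets, detector, sample_rate=0.01):
        # Inspect roughly 1 in 100 packets; real systems pair sampling
        # with flow-level aggregation and hardware assistance to recover
        # some of the lost coverage.
        alerts = []
        for pkt in packets:
            if random.random() < sample_rate:
                verdict = detector(pkt)
                if verdict:
                    alerts.append(verdict)
        return alerts

    # Example: flag jumbo datagrams with a trivial detector.
    # alerts = sampled_inspect(stream, lambda p: "jumbo" if len(p) > 9000 else None)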
UltraNet-Funded Projects and Laboratory Initiatives

UltraNet/GMPLS institutions
• FNAL: fiber to StarLight/UltraNet
• ORNL: fiber to Atlanta and StarLight/UltraNet
• SLAC: fiber to Sunnyvale/UltraNet (under discussion)
• PNNL: fiber connection to Seattle/UltraNet
• Caltech: DWDM link to Sunnyvale/UltraNet

UltraNet QoS/MPLS
• Fusion Energy: GA, NERSC, Princeton
• ATLAS project: BNL, CERN, U. Michigan
• CMS project: FNAL, CERN, UCSD

Funded projects: application development
• FNAL: explore very high-speed transfer of LHC data on UltraNet
• PNNL: remote visualization of computational biology on UltraNet
• ORNL: astrophysics real-time data visualization on UltraNet & CHEETAH
• GA: wide-area network QoS using MPLS
• BNL: exploring QoS/MPLS for LHC data transfers
Collaborations

Inter-Agency Collaboration
• NSF CHEETAH: dynamic provisioning (control plane interoperability); application: astrophysics (TSI)
• NSF DRAGON: dynamic provisioning (control plane interoperability); all-optical network technology
• NSF OMNINet: dynamic provisioning (control plane interoperability); all-optical network technology
• DOE UltraNet: dynamic provisioning (control plane interoperability)
• Internet2 HOPI: dynamic provisioning (control plane interoperability); hybrid circuit/packet-switched network

Collaboration Issues
• Control plane architecture and interoperability
• Optical service definitions and taxonomy
• Inter-domain circuit exchange services
• GMPLS and MPLS integration (ESnet & Internet2)
• Testing of circuit-based transport protocols
• Integration of network-intensive applications
• Integration with Grid applications
UltraNet Operations and Management

Management Team
• UltraNet Engineering
• ESnet Engineering rep
• ESCC rep

Engineering Team
1. UltraNet Engineering
2. ESnet Engineering representatives
3. Application developer representatives

Research Team (awards pending)
1. Network research PIs
2. Application prototyping PIs
3. Other research networks

Management Responsibilities *
1. Prioritize experiments on UltraNet
2. Schedule testing
3. Develop technology transfer strategies

* Needs to be integrated into the Office of Science networking governance model articulated in the roadmap workshop.
Network Technologies for Leadership-Class Supercomputing

• Leadership supercomputer being built at ORNL
• A national resource
• Access from universities, national labs, and industry is a major challenge
• Impact of the leadership-class supercomputer on Office of Science networking
• Network technologies for the leadership-class supercomputer
• Inter-agency networking coordination issues
Computing and Communications: The "Impedance" Mismatch

[Chart, 1960-2010: supercomputer peak performance (Cray 1: 133 MF; Cray Y-MP: 400 MF; Intel Paragon: 150 GF; ASCI Blue Mountain: 3 TF; ASCI White: 12 TF; Earth Simulator: 37 TF; ~1 PF projected) against backbone performance (T1, T3; 0.15, 0.6, 1, 2.5, and 10 Gbps SONET; 40, 80, and 100 Gbps projected; 10/100 Mbps and 1 GigE Ethernet) and achievable end-to-end performance by applications.]

Rule of thumb: the bandwidth must be adequate to transfer a Petabyte/day, roughly 200 Gbps, which is NOT on the evolutionary path of the backbone, much less of application throughput.
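
An arithmetic check on the rule of thumb: a Petabyte/day works out to roughly 93 Gbps sustained, so the ~200 Gbps figure evidently allows for protocol overhead and realistic link utilization (the 50% utilization factor below is an assumption for illustration, not from the slide):

    bits_per_petabyte = 8 * 10**15
    seconds_per_day = 86_400
    raw_gbps = bits_per_petabyte / seconds_per_day / 1e9
    print(round(raw_gbps))        # ~93 Gbps sustained, around the clock
    print(round(raw_gbps / 0.5))  # ~185 Gbps at 50% effective utilization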
Q&A