The Advanced Networks and Services
Underpinning the Large-Scale Science of
DOE’s Office of Science
The Evolution of Production Networks
Over the Next 10 Years
to Support Large-Scale International Science
An ESnet View
William E. Johnston, wej@es.net
ESnet Manager and Senior Scientist
Lawrence Berkeley National Laboratory
www.es.net
DOE Office of Science Drivers for Networking
• The DOE Office of Science supports more than 40% of all US R&D in high-energy physics, nuclear physics, and fusion energy sciences (http://www.science.doe.gov)
• The large-scale science that is the mission of the Office of Science depends on high-speed networks for
  o Sharing of massive amounts of data
  o Supporting thousands of collaborators world-wide
  o Distributed data processing
  o Distributed simulation, visualization, and computational steering
  o Distributed data management
• The role of ESnet is to provide networking that supports and anticipates these uses for the Office of Science Labs and their collaborators
• The issues were explored in two Office of Science workshops that formulated networking requirements to meet the needs of the science programs (see refs.)
Increasing Large-Scale Collaboration is Reflected in Network Usage
• As of May 2005, ESnet is transporting about 530 TBytes/month
• ESnet traffic has increased by 10X every 46 months, on average, since 1990
[Chart: ESnet Monthly Accepted Traffic, Feb. 1990 – May 2005, in TBytes/month]
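The growth figures above imply simple compounding arithmetic. A minimal sketch (Python) of that arithmetic, using only the numbers on this slide (530 TBytes/month as of May 2005, 10X every 46 months); the projection horizons are illustrative and assume the historical trend simply continues.

```python
import math

# Numbers from the slide: ~530 TBytes/month as of May 2005,
# growing 10X every 46 months on average since 1990.
current_tbytes_per_month = 530.0
growth_factor = 10.0
growth_period_months = 46.0

# Equivalent per-month multiplier and doubling time implied by that trend.
monthly_multiplier = growth_factor ** (1.0 / growth_period_months)
doubling_months = math.log(2.0) / math.log(monthly_multiplier)
print(f"monthly growth multiplier: {monthly_multiplier:.4f}")   # ~1.05
print(f"doubling time: {doubling_months:.1f} months")           # ~13.9

# Illustrative projection if the trend simply continues.
for years in (5, 10):
    projected = current_tbytes_per_month * monthly_multiplier ** (12 * years)
    print(f"in {years} years: ~{projected:,.0f} TBytes/month")
```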
Large-Scale Science Has Changed How the Network is Used
[Chart: ESnet Top 100 Host-to-Host Flows, Feb. 2005, in TBytes/month – flow categories: DOE Lab–International R&E, Lab–U.S. R&E (domestic), Lab–Lab (domestic), Lab–Comm. (domestic). Total ESnet traffic for Feb. 2005 = 323 TBytes in approximately 6,000,000,000 flows; the top 100 flows = 84 TBytes; all other flows are < 0.28 TBytes/month each.]
• A small number of large-scale science users now account for a significant fraction of all ESnet traffic
• Over the next few years this will grow to be the dominant use of the network
Large-Scale Science Has Changed How the Network is Used
• These flows are primarily bulk data transfer at this point and are candidates for circuit-based services for several reasons
  o Traffic engineering – to manage the traffic on the backbone
  o Guaranteed bandwidth is needed to satisfy deadline scheduling requirements
  o Traffic isolation will permit the use of efficient, but TCP-unfriendly, data transfer protocols
Virtual Circuit Network Services
• A top priority of the science community
• Today
  o Primarily to support bulk data transfer with deadlines
• In the near future
  o Support for widely distributed Grid workflow engines
  o Real-time instrument operation
  o Coupled, distributed applications
• To get an idea of how circuit services might be used to support the current trends, look at the one-year history of the flows that are currently the top 20 (see the sketch after the flow charts)
  o Estimate from the flow history what would be the characteristics of a circuit set up to manage the flow
Source and Destination of the Top 20 Flows, Sept. 2005
[Chart: the top 20 host-to-host flows, in TBytes/month. Site pairs (several pairs account for more than one flow): SLAC - CERN; IN2P3 (FR) - SLAC; FNAL - São Paulo Analysis Center (BR); NetNews; Rutherford Lab (UK) - SLAC; LANL - U. Md.; SLAC - U. Victoria (CA); SLAC - Rutherford Lab (UK); INFN (IT) - SLAC; SLAC - INFN (IT); FNAL - IN2P3 (FR); SLAC - IN2P3 (FR); LIGO - CalTech]
What are Characteristics of Today’s Flows – How “Dynamic” a Circuit?
LIGO – CalTech: over 1 year, the “circuit” duration is about 3 months
[Chart: LIGO – CalTech daily transfer volume, Sept. 23, 2004 – Sept. 23, 2005, in Gigabytes/day (y-axis up to ~1,450 GBytes/day; gaps indicate no data)]
What are Characteristics of Today’s Flows – How “Dynamic” a Circuit?
SLAC – IN2P3 (FR): over 1 year, the “circuit” duration is about 1 day to 1 week
[Chart: SLAC – IN2P3 daily transfer volume, Sept. 23, 2004 – Sept. 23, 2005, in Gigabytes/day (y-axis up to ~950 GBytes/day; gaps indicate no data)]
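The earlier slide proposes estimating, from a year of flow history like the plots above, what a circuit set up to carry the flow would look like. A minimal sketch of that estimate, assuming the history is available as (date, GBytes transferred that day) samples; the activity threshold and the one-week gap rule are illustrative assumptions, not values from the talk.

```python
from datetime import date

def circuit_characteristics(history, active_threshold_gb=50.0, gap_days=7):
    """history: list of (date, gigabytes_per_day) samples for one flow.
    Thresholds are illustrative assumptions."""
    active = [(d, gb) for d, gb in sorted(history) if gb >= active_threshold_gb]
    if not active:
        return None

    # Group active days into contiguous periods; each period approximates
    # how long a dedicated circuit would have stayed up.
    periods, start, prev = [], active[0][0], active[0][0]
    for d, _ in active[1:]:
        if (d - prev).days > gap_days:
            periods.append((start, prev))
            start = d
        prev = d
    periods.append((start, prev))

    avg_gb_day = sum(gb for _, gb in active) / len(active)
    peak_gb_day = max(gb for _, gb in active)
    return {
        "circuit_periods": periods,
        "longest_period_days": max((e - s).days + 1 for s, e in periods),
        "avg_rate_gbps": avg_gb_day * 8 / 86400,    # GBytes/day -> Gbits/s
        "peak_rate_gbps": peak_gb_day * 8 / 86400,
    }

# Illustrative usage with made-up samples (one week at ~900 GBytes/day,
# roughly the level of the SLAC - IN2P3 plot above):
example = [(date(2005, 1, d), 900.0) for d in range(1, 8)]
print(circuit_characteristics(example))
```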
Between ESnet, Abilene, GÉANT, and the connected regional R&E networks, there will be dozens of lambdas in production networks that are shared between thousands of users who want to use virtual circuits – very complex inter-domain issues.
[Diagram: the US R&E environment – end sites reach each other across ESnet, Abilene, and the ESnet–Abilene cross-connects, with a similar situation in the US regionals and in GÉANT and the European NRENs]
OSCARS: Virtual Circuit Service
• Despite the long circuit duration, these circuits cannot be managed by hand – too many circuits
  o There must be automated scheduling, authorization, path analysis and selection, and path setup = management plane and control plane
• Virtual circuits must operate across domains
  o End points will be on campuses or research institutes that are served by ESnet, Abilene’s regional networks, and GÉANT’s regional networks – typically five domains to cross to make an end-to-end connection
  o There are many issues here that are poorly understood
• A collaboration between Internet2/HOPI, DANTE/GÉANT, and ESnet is building a prototype-production, interoperable service
• ESnet virtual circuit project: On-demand Secure Circuits and Advance Reservation System (OSCARS) (contact Chin Guok, chin@es.net, for information)
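The slide calls for automated scheduling, authorization, and path setup. A minimal sketch of the kind of advance-reservation request and admission check such a service implies; the class, field names, and the capacity check are hypothetical illustrations, not the OSCARS interface.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CircuitRequest:          # hypothetical request record, not the OSCARS API
    source: str                # e.g. an end host at a DOE Lab
    destination: str           # e.g. a collaborating institute
    bandwidth_mbps: int        # guaranteed bandwidth requested
    start: datetime            # advance-reservation window
    end: datetime
    requester: str             # identity used for authorization

def admit(request, existing, path_capacity_mbps):
    """Toy admission check: is there headroom on the path for the whole window?"""
    overlapping = [r for r in existing
                   if r.start < request.end and request.start < r.end]
    committed = sum(r.bandwidth_mbps for r in overlapping)
    return committed + request.bandwidth_mbps <= path_capacity_mbps

req = CircuitRequest("host-a", "host-b", 2000,
                     datetime(2005, 10, 1, 8), datetime(2005, 10, 1, 20), "user@lab")
print(admit(req, existing=[], path_capacity_mbps=10000))   # True: path has headroom
```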
What about lambda switching?
• Two factors argue that this is a long way out for production networks
  1) There will not be enough lambdas available to satisfy the need
     - Just provisioning a single lambda ring around the US (7,000 miles / 11,000 km) still costs about $2,000,000, even on R&E networks
       – This should drop by a factor of 5-10 over the next decade
  2) Even if there were a “lot” of lambdas (hundreds?) there are thousands of large-scale science users
     - Just considering sites (and not scientific groups), there are probably 300 major science research sites in the US and a comparable number in Europe
     - So, lambdas will have to be shared for the foreseeable future
       – Multiple QoS paths per lambda
       – Guaranteed minimum level of service for best-effort traffic when utilizing the production IP networks (see the sketch below)
       – Allocation management
         » There will be hundreds to thousands of contenders with different science priorities
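A tiny sketch of the sharing arithmetic the last bullets imply: several guaranteed paths multiplexed onto one 10 Gb/s lambda, with a minimum share held back for best-effort IP traffic. The 10 Gb/s and 2 Gb/s figures are illustrative.

```python
LAMBDA_GBPS = 10.0                 # one shared lambda (illustrative)
BEST_EFFORT_FLOOR_GBPS = 2.0       # guaranteed minimum left for best-effort traffic

def can_grant(existing_reservations_gbps, new_request_gbps):
    """Grant a new guaranteed path only if the best-effort floor is preserved."""
    reserved = sum(existing_reservations_gbps)
    return reserved + new_request_gbps <= LAMBDA_GBPS - BEST_EFFORT_FLOOR_GBPS

print(can_grant([3.0, 2.0], 2.0))  # True:  7 of the 8 reservable Gb/s in use
print(can_grant([3.0, 2.0], 4.0))  # False: would cut into the best-effort floor
```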
References – DOE Network Related Planning Workshops
1) High Performance Network Planning Workshop, August 2002
   http://www.doecollaboratory.org/meetings/hpnpw
2) DOE Science Networking Roadmap Meeting, June 2003
   http://www.es.net/hypertext/welcome/pr/Roadmap/index.html
3) DOE Workshop on Ultra High-Speed Transport Protocols and Network Provisioning for Large-Scale Science Applications, April 2003
   http://www.csm.ornl.gov/ghpn/wk2003
4) Science Case for Large Scale Simulation, June 2003
   http://www.pnl.gov/scales/
5) Workshop on the Road Map for the Revitalization of High End Computing, June 2003
   http://www.cra.org/Activities/workshops/nitrd
   http://www.sc.doe.gov/ascr/20040510_hecrtf.pdf (public report)
6) ASCR Strategic Planning Workshop, July 2003
   http://www.fp-mcs.anl.gov/ascr-july03spw
7) Planning Workshops – Office of Science Data-Management Strategy, March & May 2004
   http://www-conf.slac.stanford.edu/dmw2004
The Full Talk
ESnet Today Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators
[Map: the ESnet IP core (packet-over-SONET optical ring and hubs) and the ESnet Science Data Network (SDN) core, showing the DOE Labs and facilities served (e.g. LBNL, NERSC, SLAC, LLNL, SNLL, FNAL, ANL, Ames, BNL, MIT, PPPL, JLab, ORNL, LANL, SNLA, PNNL, GA, and others), the commercial and R&E peering points (MAE-E, PAIX-PA, Equinix, StarTap, MREN), high-speed peering points with Internet2/Abilene, and international connections to Japan (SINet), Australia (AARNet), Canada (CA*net4), Taiwan (TANet2, ASCC), Singaren, Korea (Kreonet2), GLORIAD (Russia, China), Russia (BINP), CERN (USLHCnet, CERN+DOE funded), and GÉANT (France, Germany, Italy, UK, etc.)]
• 42 end user sites: Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), other sponsored (NSF LIGO, NOAA), laboratory sponsored (6)
• Link speeds range from the 10 Gb/s SDN core, 10 Gb/s IP core, and 2.5 Gb/s IP core, through MAN rings (≥ 10 Gb/s), OC12 ATM (622 Mb/s), OC12 / GigEthernet, and OC3 (155 Mb/s), down to 45 Mb/s and less
DOE Office of Science Drivers for Networking
• The DOE Office of Science supports more than 40% of all US R&D in high-energy physics, nuclear physics, and fusion energy sciences (http://www.science.doe.gov)
• The large-scale science that is the mission of the Office of Science depends on networks for
  o Sharing of massive amounts of data
  o Supporting thousands of collaborators world-wide
  o Distributed data processing
  o Distributed simulation, visualization, and computational steering
  o Distributed data management
• The role of ESnet is to provide networking that supports these uses for the Office of Science Labs and their collaborators
• The issues were explored in two Office of Science workshops that formulated networking requirements to meet the needs of the science programs (see refs.)
CERN / LHC High Energy Physics Data Provides One of Science’s Most Challenging Data Management Problems
(CMS is one of several experiments at LHC)
[Diagram, courtesy Harvey Newman, CalTech: the tiered CMS data flow. The CERN LHC CMS detector (15m x 15m x 22m, 12,500 tons, $700M) produces ~PBytes/sec at the online system; event reconstruction at Tier 0+1 (CERN) yields ~100 MBytes/sec; Tier 1 regional centers (FermiLab USA, German, French, and Italian centers) connect at 2.5-40 Gbits/sec and host event simulation and physics data caches; Tier 2 centers connect at ~0.6-2.5 Gbps for analysis; Tier 3 institutes (~0.25 TIPS) connect at 100-1000 Mbits/sec; Tier 4 is workstations.]
• 2000 physicists in 31 countries are involved in this 20-year experiment, in which DOE is a major player
• Grid infrastructure spread over the US and Europe coordinates the data analysis
LHC Networking
• This picture represents the MONARC model – a hierarchical, bulk data transfer model
• Still accurate for Tier 0 (CERN) to Tier 1 (experiment data centers) data movement
• Not accurate for the Tier 2 (analysis) sites, which are implementing Grid-based data analysis
Example: Complicated Workflow – Many Sites
Distributed Workflow
• Distributed / Grid-based workflow systems involve many interacting computing and storage elements that rely on “smooth” inter-element communication for effective operation
• The new LHC Grid-based data analysis model will involve networks connecting dozens of sites and thousands of systems for each analysis “center”
Example: Multidisciplinary Simulation
• A “complete” approach to climate modeling involves many interacting models and data that are provided by different groups at different locations
[Figure (courtesy Gordon Bonan, NCAR: Ecological Climatology: Concepts and Applications, Cambridge University Press, 2002; and Tim Killeen, NCAR): the coupled components of a climate model – atmospheric chemistry and climate (CO2, CH4, N2O, ozone, aerosols; temperature, precipitation, radiation, humidity, wind), biogeophysics (energy, water, aerodynamics; heat, moisture, momentum, dust, CO2/CH4/N2O/VOC fluxes, microclimate), biogeochemistry (carbon assimilation, gross primary production, decomposition, mineralization, plant and microbial respiration), canopy physiology and phenology (bud break, leaf senescence), hydrology (soil water, snow, intercepted water; evaporation, transpiration, snow melt, infiltration, runoff), watersheds (surface and subsurface water, geomorphology, hydrologic cycle), ecosystems (species composition, ecosystem structure, nutrient availability), and disturbance (fires, hurricanes, ice storms, windthrows, vegetation dynamics) – operating on time scales from minutes-to-hours through days-to-weeks to years-to-centuries]
Distributed Multidisciplinary Simulation
• Distributed multidisciplinary simulation involves integrating computing elements at several remote locations
  o Requires co-scheduling of computing, data storage, and network elements (see the sketch below)
  o Also Quality of Service (e.g. bandwidth guarantees)
  o There is not a lot of experience with this scenario yet, but it is coming (e.g. the new Office of Science supercomputing facility at Oak Ridge National Lab has a distributed computing elements model)
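A minimal sketch of the co-scheduling step named above: find the earliest window in which the compute, storage, and network resources are all free. The free-window lists are illustrative, and the search only tries window start times, which is enough to show the idea.

```python
def earliest_common_window(free_windows_per_resource, duration):
    """free_windows_per_resource: one list of (start, end) windows per resource,
    all in the same time units. Returns a (start, end) slot or None.
    Simplification: only window start times are tried as candidate starts."""
    candidates = sorted(s for windows in free_windows_per_resource
                        for s, _ in windows)
    for start in candidates:
        end = start + duration
        if all(any(s <= start and end <= e for s, e in windows)
               for windows in free_windows_per_resource):
            return (start, end)
    return None

compute = [(0, 4), (10, 20)]    # illustrative free windows (e.g. hours)
storage = [(2, 12), (15, 30)]
network = [(0, 30)]
print(earliest_common_window([compute, storage, network], duration=3))  # (15, 18)
```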
Projected Science Requirements for Networking
(Science areas considered in the workshop [1]; not including Nuclear Physics and Supercomputing)

Science Area | Today: End2End Throughput | 5 years: End2End Documented Throughput Requirements | 5-10 years: End2End Estimated Throughput Requirements | Remarks
High Energy Physics | 0.5 Gb/s | 100 Gb/s | 1000 Gb/s | high bulk throughput with deadlines (Grid-based analysis systems require QoS)
Climate (Data & Computation) | 0.5 Gb/s | 160-200 Gb/s | N x 1000 Gb/s | high bulk throughput
SNS NanoScience | not yet started | 1 Gb/s | 1000 Gb/s | remote control and time-critical throughput (QoS)
Fusion Energy | 0.066 Gb/s (500 MB/s burst) | 0.198 Gb/s (500 MB per 20 sec. burst) | N x 1000 Gb/s | time-critical throughput (QoS)
Astrophysics | 0.013 Gb/s (1 TBy/week) | N*N multicast | 1000 Gb/s | computational steering and collaborations
Genomics Data & Computation | 0.091 Gb/s (1 TBy/day) | 100s of users | 1000 Gb/s | high throughput and steering
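The "today" columns in the table are essentially sustained-rate conversions of the stated data volumes; a short sketch reproducing two of them (small differences from the table come from rounding).

```python
def tbytes_per_period_to_gbps(terabytes, period_seconds):
    # 1 TByte = 8000 Gbits (decimal units); divide by the period in seconds.
    return terabytes * 8000.0 / period_seconds

print(f"1 TBy/week ~ {tbytes_per_period_to_gbps(1, 7 * 86400):.3f} Gb/s")  # ~0.013 (Astrophysics)
print(f"1 TBy/day  ~ {tbytes_per_period_to_gbps(1, 86400):.3f} Gb/s")      # ~0.09  (Genomics)
```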
ESnet Goal – 2009/2010
• 10 Gbps enterprise IP traffic
• 40-60 Gbps circuit-based transport
[Map: the planned ESnet architecture – the production IP core (≥ 10 Gbps), a second Science Data Network core (30-50 Gbps, on National Lambda Rail), and metropolitan area rings connecting the major DOE Office of Science sites; existing and new ESnet hubs at SEA, SNV, DEN, ALB, SDG, CHI, NYC, DC, and ATL; high-speed cross connects with Internet2/Abilene; lab-supplied links; and major international connections (10-40 Gb/s) to Europe, CERN, Japan, AsiaPac, and Australia]
Observed Drivers for the Evolution of ESnet
ESnet is currently transporting about 530 TBytes/month, and this volume is increasing exponentially – ESnet traffic has increased by 10X every 46 months, on average, since 1990
[Chart: ESnet Monthly Accepted Traffic, Feb. 1990 – May 2005, in TBytes/month]
Observed Drivers: The Rise of Large-Scale Science
• A small number of large-scale science users now account for a significant fraction of all ESnet traffic
[Chart: ESnet Top 100 Host-to-Host Flows, Feb. 2005, in TBytes/month – flow categories: DOE Lab–International R&E, Lab–U.S. R&E (domestic), Lab–Lab (domestic), Lab–Comm. (domestic). Total ESnet traffic for Feb. 2005 = 323 TBytes in approximately 6,000,000,000 flows; the top 100 flows = 84 TBytes; all other flows are < 0.28 TBytes/month each.]
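A one-line check of the fraction behind "a significant fraction", using the numbers in the chart.

```python
top_100_tbytes, total_tbytes = 84.0, 323.0        # Feb. 2005, from the chart
print(f"top 100 flows: {top_100_tbytes / total_tbytes:.0%} of the monthly volume")  # ~26%
```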
Traffic Evolution over the Next 5-10 Years
• The current traffic pattern – the large-scale science projects giving rise to the top 100 data flows, which represent about 1/3 of all network traffic – will continue to evolve
• This evolution in traffic patterns and volume is driven by large-scale science collaborations and will result in large-scale science data flows overwhelming everything else on the network in 3-5 yrs. (WEJ predicts)
  o The top 100 flows will become the top 1000 or 5000 flows
  o These large flows will account for 75-95% of a much larger total ESnet traffic volume
    - The remaining 6 billion flows will continue to account for the remainder of the traffic, which will also grow even as its fraction of the total becomes smaller
Virtual Circuit Network Services
• Every requirements workshop involving the science community has put bandwidth-on-demand as the highest priority – e.g. for
  o Massive data transfers for collaborative analysis of experiment data
  o Real-time data analysis for remote instruments
  o Control channels for remote instruments
  o Deadline scheduling for data transfers
  o “Smooth” interconnection for complex Grid workflows
What is the Nature of the Required Circuits?
• Today
  o Primarily to support bulk data transfer with deadlines
• In the near future
  o Support for widely distributed Grid workflow engines
  o Real-time instrument operation
  o Coupled, distributed applications
• To get an idea of how circuit services might be used, look at the one-year history of the flows that are currently the top 20
  o Estimate from the flow history what would be the characteristics of a circuit set up to manage the flow
Source and Destination of the Top 20 Flows, Sept. 2005
[Chart: the top 20 host-to-host flows, in TBytes/month. Site pairs (several pairs account for more than one flow): SLAC - CERN; IN2P3 (FR) - SLAC; FNAL - São Paulo Analysis Center (BR); NetNews; Rutherford Lab (UK) - SLAC; LANL - U. Md.; SLAC - U. Victoria (CA); SLAC - Rutherford Lab (UK); INFN (IT) - SLAC; SLAC - INFN (IT); FNAL - IN2P3 (FR); SLAC - IN2P3 (FR); LIGO - CalTech]
What are Characteristics of Today’s Flows – How “Dynamic” a Circuit?
LIGO – CalTech: over 1 year, the “circuit” duration is about 3 months
[Chart: LIGO – CalTech daily transfer volume, Sept. 23, 2004 – Sept. 23, 2005, in Gigabytes/day (y-axis up to ~1,450 GBytes/day; gaps indicate no data)]
What are Characteristics of Today’s Flows – How “Dynamic” a Circuit?
SLAC – IN2P3 (FR): over 1 year, the “circuit” duration is about 1 day to 1 week
[Chart: SLAC – IN2P3 daily transfer volume, Sept. 23, 2004 – Sept. 23, 2005, in Gigabytes/day (y-axis up to ~950 GBytes/day; gaps indicate no data)]
What are Characteristics of Today’s Flows – How “Dynamic” a Circuit?
SLAC – INFN (IT): over 1 year, the “circuit” duration is about 1 to 3 months
[Chart: SLAC – INFN daily transfer volume, Sept. 23, 2004 – Sept. 23, 2005, in Gigabytes/day (y-axis up to ~700 GBytes/day; gaps indicate no data)]
What are Characteristics of Today’s Flows – How “Dynamic” a Circuit?
FNAL – IN2P3 (FR): over 1 year, the “circuit” duration is about 2 to 3 months
[Chart: FNAL – IN2P3 daily transfer volume, Sept. 23, 2004 – Sept. 23, 2005, in Gigabytes/day (y-axis up to ~500 GBytes/day; gaps indicate no data)]
What are Characteristics of Today’s Flows – How “Dynamic” a Circuit?
INFN (IT) – SLAC: over 1 year, the “circuit” duration is about 3 weeks to 3 months
[Chart: INFN – SLAC daily transfer volume, Sept. 23, 2004 – Sept. 23, 2005, in Gigabytes/day (y-axis up to ~500 GBytes/day; gaps indicate no data)]
Characteristics of Today’s Circuits – How “Dynamic”?
• These flows are candidates for circuit-based services for several reasons
  o Traffic engineering – to manage the traffic on the production IP backbone
  o To satisfy deadline scheduling requirements
  o Traffic isolation to permit the use of efficient, but TCP-unfriendly, data transfer protocols
• Despite the long circuit duration, this cannot be managed by hand – too many circuits
  o There must be automated scheduling, authorization, path analysis and selection, and path setup (a path-selection sketch follows)
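A minimal sketch of the "path analysis and selection" step: find a path that has the requested bandwidth available on every link. The topology and available-bandwidth numbers are illustrative (the node names are hub and site names taken from the maps in this talk).

```python
from collections import deque

def find_path(links, src, dst, need_gbps):
    """links: {(a, b): available_gbps}, treated as bidirectional.
    Returns a hop list with enough headroom on every link, or None."""
    adj = {}
    for (a, b), avail in links.items():
        if avail >= need_gbps:                 # prune links without headroom
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
    prev, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:                        # reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

links = {("SLAC", "SNV"): 10, ("SNV", "CHI"): 4, ("CHI", "FNAL"): 10,
         ("SNV", "DEN"): 10, ("DEN", "CHI"): 10}
print(find_path(links, "SLAC", "FNAL", need_gbps=5))
# ['SLAC', 'SNV', 'DEN', 'CHI', 'FNAL'] -- avoids the 4 Gb/s SNV-CHI link
```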
Virtual Circuit Services - What about lambda switching?
• Two factors argue that this is a long way out for production networks
  1) There will not be enough lambdas available to satisfy the need
     - Just provisioning a single lambda ring around the US (7,000 miles / 11,000 km) still costs about $2,000,000, even on R&E networks
       – This should drop by a factor of 5-10 over the next 5-10 years
  2) Even if there were a “lot” of lambdas (hundreds?) there are thousands of large-scale science users
     - Just considering sites (and not scientific groups), there are probably 300 major science research sites in the US and a comparable number in Europe
     - So, lambdas will have to be shared for the foreseeable future
       – Multiple QoS paths per lambda
       – Guaranteed minimum level of service for best-effort traffic when utilizing the production IP networks
       – Allocation management
         » There will be hundreds to thousands of contenders with different science priorities
OSCARS: Guaranteed Bandwidth Service
• Virtual circuits must operate across domains
  o End points will be on campuses or research institutes that are served by ESnet, Abilene’s regional networks, and GÉANT’s regional networks – typically five domains to cross to make an end-to-end connection
  o There are many issues here that are poorly understood
  o An ESnet – Internet2/HOPI – DANTE/GÉANT collaboration
• ESnet virtual circuit project: On-demand Secure Circuits and Advance Reservation System (OSCARS) (contact Chin Guok, chin@es.net, for information)
OSCARS: Guaranteed Bandwidth Service
• To address all of the issues is complex
  - There are many potential restriction points
  - There are many users that would like priority service, which must be rationed
[Diagram: an end-to-end guaranteed-bandwidth path from user system 1 at site A to user system 2 at site B, passing through resource managers at each site and in the network, with authorization, an allocation manager, a bandwidth broker, a path manager (with a dynamic, global view of the network), policers, and a shaper along the path]
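A generic token-bucket sketch of what the "policer" element in the diagram does: traffic within the reserved rate passes, excess is dropped or demoted to best effort. This is a textbook mechanism shown for illustration, not OSCARS code; the rate and burst parameters are illustrative.

```python
class TokenBucket:
    """Textbook token-bucket policer: packets conform while tokens remain."""
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def conforms(self, packet_bytes, now):
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True      # within the guaranteed-bandwidth contract
        return False         # out of profile: drop, or demote to best effort

policer = TokenBucket(rate_bytes_per_s=125_000_000, burst_bytes=1_500_000)  # ~1 Gb/s
print(policer.conforms(1500, now=0.001))   # True: a single MTU-sized packet conforms
```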
ESnet 2010 Lambda Infrastructure and LHC T0-T1 Networking
[Map: the planned 2010 lambda infrastructure – ESnet production IP core (10-20 Gbps), ESnet Science Data Network core on NLR (10G/link, incremental upgrades 2007-2010), other NLR links, CERN/DOE-supplied links (10G/link), and international IP connections (10G/link). ESnet IP core hubs, ESnet SDN/NLR hubs, NLR PoPs, new hubs, and cross connects with Internet2/Abilene are shown at Seattle, Sunnyvale, LA, San Diego, Boise, Denver, Phoenix, Albuquerque, El Paso/Las Cruces, Dallas, San Antonio, Houston, Tulsa, KC, Chicago, Cleveland, Pittsburgh, New York, Wash DC, Raleigh, Atlanta, Jacksonville, Pensacola, and Baton Rouge. LHC Tier 1 centers FNAL and BNL are on the core; Tier 0–Tier 1 links run to TRIUMF (via CANARIE, Vancouver/Toronto), CERN (CERN-1/2/3), and GÉANT (GÉANT-1/2).]
Abilene* and LHC Tier 2, Near-Term Networking
(* WEJ projection of future Abilene)
[Map: near-term Tier 2 connectivity over Abilene and ESnet – ESnet production IP core (10-20 Gbps), ESnet Science Data Network core (10G/link, incremental upgrades 2007-2010), other NLR links, CERN/DOE-supplied links (10G/link), international IP connections (10G/link), < 10G connections to Abilene, and 10G connections to USLHC or ESnet; ESnet IP core hubs, ESnet SDN/NLR hubs, NLR PoPs, Tier 1 centers (FNAL, BNL), Abilene/GigaPoP nodes, USLHC nodes, new hubs, and cross connects with Internet2/Abilene; Tier 0–Tier 1 links to TRIUMF (via CANARIE), CERN (CERN-1/2/3), and GÉANT (GÉANT-1/2)]
Atlas Tier 2 Centers: University of Texas at Arlington, University of Oklahoma Norman, University of New Mexico Albuquerque, Langston University, University of Chicago, Indiana University Bloomington, Boston University, Harvard University, University of Michigan
CMS Tier 2 Centers: MIT, University of Florida at Gainesville, University of Nebraska at Lincoln, University of Wisconsin at Madison, Caltech, Purdue University, University of California San Diego
Between ESnet, Abilene, GÉANT, and the connected regional R&E networks, there will be dozens of lambdas in production networks that are shared between thousands of users who want to use virtual circuits – very complex inter-domain issues.
[Diagram: the US R&E environment – end sites reach each other across ESnet, Abilene, and the ESnet–Abilene cross-connects, with a similar situation in the US regionals and in Europe]
ESnet Optical Networking Roadmap
[Timeline, 2005-2010 – progression of circuit services:
• Dynamic provisioning of MPLS circuits (Layer 3)
• Interoperability between VLANs and MPLS circuits (Layer 2 & 3)
• Interoperability between GMPLS circuits, VLANs, and MPLS circuits (Layer 1-3)
• Dedicated virtual circuits
• Dynamic virtual circuit allocation
• GMPLS]
Tying Domains Together (1/2)
• Motivation:
  o For a virtual circuit service to be successful, it must
    - Be end-to-end, potentially crossing several administrative domains
    - Have consistent network service guarantees throughout the circuit
• Observation:
  o Setting up an intra-domain circuit is easy compared with coordinating an inter-domain circuit
• Issues:
  o Cross-domain authentication and authorization
    - A mechanism to authenticate and authorize a bandwidth-on-demand (BoD) circuit request must be agreed upon in order to automate the process
  o Multi-domain Acceptable Use Policies (AUPs)
    - Domains may have very specific AUPs dictating what the BoD circuits can be used for and where they can transit/terminate
  o Domain-specific service offerings
    - Domains must have a way to guarantee a certain level of service for BoD circuits
  o Security concerns
    - Are there mechanisms for a domain to protect itself? (e.g. RSVP filtering)
Tying Domains Together (2/2)
• Approach:
  o Utilize existing standards and protocols (e.g. GMPLS, RSVP)
  o Adopt widely accepted schemas/services (e.g. X.509 certificates)
  o Collaborate with like-minded projects (e.g. JRA3 (DANTE/GÉANT), BRUW (Internet2/HOPI)) to:
    1. Create a common service definition for BoD circuits
    2. Develop an appropriate User-Network-Interface (UNI) and Network-Network-Interface (NNI)
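A minimal sketch of the coordination problem these two slides describe: a BoD request must be authorized and reserved in every domain along the path, and undone everywhere if any domain refuses. The domain objects and policy checks are illustrative stand-ins; this is not the JRA3/BRUW/OSCARS protocol.

```python
def setup_interdomain_circuit(domains, request):
    """domains: ordered list of objects with authorize/reserve/release methods."""
    committed = []
    for domain in domains:
        if not (domain.authorize(request) and domain.reserve(request)):
            break                               # local AUP/admission check failed
        committed.append(domain)
    else:
        return True                             # every domain along the path accepted
    for domain in reversed(committed):          # roll back any partial setup
        domain.release(request)
    return False

class StubDomain:                               # illustrative stand-in for one domain
    def __init__(self, name, accepts=True):
        self.name, self.accepts = name, accepts
    def authorize(self, request): return True   # e.g. X.509 identity + AUP check
    def reserve(self, request):   return self.accepts
    def release(self, request):   print(f"{self.name}: reservation released")

path = [StubDomain("campus A"), StubDomain("ESnet"), StubDomain("GEANT", accepts=False)]
print(setup_interdomain_circuit(path, request={"bandwidth_gbps": 1}))  # rolls back -> False
```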