Creating 21st Century Communications Services and
Technology: Applications, Technology, and Global
Facilities
Joe Mambretti, Director, (j-mambretti@northwestern.edu)
International Center for Advanced Internet Research (www.icair.org)
Northwestern University
Director, Metropolitan Research and Education Network (www.mren.org)
Co-Director, StarLight, PI-iGENI, PI-OMNINet (www.startap.net/starlight)
APA Broadband Planning and Big Data Meeting
April 16, 2013
Introduction to iCAIR:
Accelerating Leading Edge Innovation
and Enhanced Global Communications
through Advanced Internet Technologies,
in Partnership with the Global Community
• Creation and Early Implementation of Advanced Networking Technologies – The Next Generation Internet, All Optical Networks, Terascale Networks, Networks for Petascale Science
• Advanced Applications, Middleware, Large-Scale Infrastructure, NG Optical Networks and Testbeds, Public Policy Studies and Forums Related to NG Networks
• Three Major Areas of Activity: a) Basic Research b) Design and Implementation of Prototypes c) Operations of Specialized Communication Facilities (e.g., StarLight)
Advanced Communications Research Topics
• Many Current Topics Could Be Considered
“Grand Challenges” In Communications
– Scaling the Internet from A Service For 1-2 Billion
Individuals (Current) to 4-6 Billion (Future) and Beyond
– Improving the Current Internet (Creating a “Better
Internet,” Removing Limitations, Adding Capabilities,
Increasing Security, Reliability, etc.)
– Migrating Services from Layer 3 Only to Multi-Layer
Services, Including L2.5, L2, L1, e.g., Lightpaths
– Creating the “Internet of Things” (Currently 5 Billion
Devices Are Connected – Soon 20 Billion)
– Migrating the Internet From Data and Image Services To Rich Multi-Media Services
– Empowering *Edge* Processes, Applications, and Users
• Creating a Fundamentally New Architecture and
Technology That Allows for Accomplishing All of
These Goals
Macro Network Science Themes
• Transition From Legacy Networks To Networks That
Take Full Advantage of IT Architecture and Technology
• Extremely Large Capacity (Multi-Tbps Streams)
• High Degrees of Communication Services
Customization
• Highly Programmable Networks
• Network Facilities As Enabling Platforms for Any Type
of Service
• Network Virtualization
• Highly Distributed Processes
Motivation for New Communications Architecture
• Traditional Networking Architecture and Technology Are Oriented to Supporting Relatively Few Communications Modalities (e.g., Voice, Video, Common Data) for a Very Long Time (Many Years...)
• Traditional Networking Infrastructure Is Too Rigid To Accommodate Changes Quickly
• Traditional Services Are Essentially Based on 19th Century Utility Models of Service and Infrastructure, Which
  – Severely Restrict the Inherent Potential of Digital Technology
  – Cannot Meet Many Emerging Requirements for 21st Century Services
• A Fundamentally New Architectural Model Is Required
• A New Architecture Replaces The Traditional Network With a New Communication Services Foundation – a Highly Distributed Facility That Can Support Multiple Networks With Different Characteristics, Each Supporting Multiple Highly Differentiated Services
Paradigm Shift – Ubiquitous Services Based on a Large Scale Distributed Facility vs. Isolated Services Based on Separate Component Resources
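As a rough illustration of this facility-as-platform model, here is a minimal Python sketch (all class and network names are hypothetical) of a shared substrate from which many differentiated virtual networks are provisioned:

```python
# Toy model of the "facility as platform" idea: a shared substrate from
# which many differentiated virtual networks are provisioned on demand.
# All class and network names are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class VirtualNetwork:
    name: str
    capacity_gbps: int
    layer: str                      # e.g., "L1 lightpath", "L2", "L3"
    properties: dict = field(default_factory=dict)

class DistributedFacility:
    """Shared substrate: capacity is carved into per-tenant networks."""
    def __init__(self, total_capacity_gbps: int):
        self.free_gbps = total_capacity_gbps
        self.networks = []

    def provision(self, name, capacity_gbps, layer, **properties):
        if capacity_gbps > self.free_gbps:
            raise RuntimeError("insufficient substrate capacity")
        self.free_gbps -= capacity_gbps
        net = VirtualNetwork(name, capacity_gbps, layer, properties)
        self.networks.append(net)
        return net

facility = DistributedFacility(total_capacity_gbps=800)
facility.provision("SensorNet", 10, "L3", loss_tolerant=True)
facility.provision("HPCNet", 100, "L1 lightpath", jitter="minimal")
facility.provision("MediaGridNet", 40, "L2", multicast=True)
for net in facility.networks:
    print(net)
```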
Traditional Provider Services: Invisible, Static Resources; Centralized Management; Highly Layered; Invisible Nodes, Elements; Hierarchical, Centrally Controlled; Fairly Static; Limited Services, Functionality, Flexibility, Expandability
vs. Distributed Facility Services: Distributed Programmable Resources; Dynamic Services; Visible & Accessible Resources; Integrated As Required, Non-Layered; Unlimited Services, Functionality, Flexibility, Expandability
Releasing the Full Potential of Digital Technologies
A Next Generation Architecture: Distributed Facility Enabling Many Types of Networks/Services
[Diagram: a shared distributed facility hosting many specialized networks – FinancialNet, SensorNet, HPCNet, TransLight, R&DNet, Commodity Internet, GovNet, RFIDNet, PrivNet, MediaGridNet, BioNet, MedNet – each serving its own environment (virtual organizations, real organizations, labs, government agencies, financial and bio organizations, sensors, intelligent power grid control, large scale system control, global applications, an international gaming fabric), with a common control plane]
A Next Generation Architecture: Distributed Facility Enabling Many Types of Networks/Services, Enabled By Capacity + Programmability
[Diagram: the same facility and environments as above, now also supporting MediaNets, personal networks, and other fairly new, ad hoc individual networks – "Network Factories!"]
Large Scale Data Intensive Science Motivates the Creation of Next Generation Communications
• Large Scale, Data (and Compute) Intensive Sciences Encounter Technology Challenges Many Years Before Other Domains
• Resolving These Issues Creates Solutions That Later Migrate To Other Domains
• 30+ Year History of Communication Innovations Has Been Driven Primarily By Data and Compute Intensive Sciences
• Best Window To the Future = Examining Requirements of Data and Compute Intensive Science Research
• Science Is Transitioning From Using Only Two Classic Building Blocks, Theory and Experimentation, To Also Utilizing a Third – Modeling and Simulation – With Massive Amounts of Data (Petabytes, Exabytes, Zettabytes)
• For Communications, Data Volume Capacity Is Not the Only Issue, But It Is a Major One (see the arithmetic sketch below)
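A back-of-the-envelope calculation (illustrative link rates; real goodput is lower after protocol overhead) shows why petabyte-scale science stresses networks:

```python
# Back-of-the-envelope transfer times for petabyte-scale datasets.
# Link rates are illustrative; real goodput is lower after protocol overhead.
def transfer_days(num_bytes, gbps):
    seconds = num_bytes * 8 / (gbps * 1e9)
    return seconds / 86400

PETABYTE = 1e15
for rate_gbps in (1, 10, 100):
    print(f"1 PB over {rate_gbps:>3} Gbps: {transfer_days(PETABYTE, rate_gbps):6.1f} days")
# 1 PB over   1 Gbps:   92.6 days
# 1 PB over  10 Gbps:    9.3 days
# 1 PB over 100 Gbps:    0.9 days
```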
NG Digital Sky Survey
ATLAS
ALMA: Atacama Large Millimeter Array – www.alma.nrao.edu
DØ (DZero) – www-d0.fnal.gov
ANDRILL: Antarctic Geological Drilling – www.andrill.org
BIRN: Biomedical Informatics Research Network – www.nbirn.net
GEON: Geosciences Network – www.geongrid.org
GLEON: Global Lake Ecological Observatory Network – www.gleon.org
CAMERA metagenomics – camera.calit2.net
IVOA: International Virtual Observatory – www.ivoa.net
OSG – www.opensciencegrid.org
Globus Alliance – www.globus.org
LHCONE – www.lhcone.net
OOI-CI – ci.oceanobservatories.org
ISS: International Space Station – www.nasa.gov/station
WLCG – lcg.web.cern.ch/LCG/public/
LIGO – www.ligo.org
Carbon Tracker – www.esrl.noaa.gov/gmd/ccgg/carbontracker
CineGrid – www.cinegrid.org
Pacific Rim Applications and Grid Middleware Assembly – www.pragmagrid.net
SKA – www.skatelescope.org
TeraGrid – www.teragrid.org
Sloan Digital Sky Survey – www.sdss.org
XSEDE – www.xsede.org
Comprehensive Large-Array Stewardship System – www.class.noaa.gov
Compilation By Maxine Brown
Petascale Computational Science
For Decades, Computational Science
Has Driven Network Innovation
Today –
Petascale Computational Science
National Center for Supercomputing Applications, UIUC
HPC Cloud Computing
DOE Magellan Initiative: Testbed
To Explore Cloud Computing
For Science
Multiple HPC Cloud Computing Testbeds
Specifically Designed for Science Research
At Scale Experimentation
Integrated With High Performance Networks
XSEDE
• Extreme Science and Engineering Discovery Environment (XSEDE) Goal: Create a Distributed Computational Science Infrastructure to Enable Distributed Data Sharing and High-Speed Computing for Data Analysis and Numerical Simulations
• Builds on the Prior Distributed TeraGrid
• TeraGrid => XSEDE: the Concept of a Private Network As Backplane To a Distributed Computational Environment Continues In the Next Iteration
Open Science Grid: Selected Investigations
Gravity Wave Modeling
DNA Modeling
Neutrino Studies
This Distributed Facility
Supports Many Sciences
Usage
HEP = Staggering Amounts of Data
Experiment data rates over time (Source: Fermilab):
• EMC – 400 GB/year (1981)
• E791 – 50 TB/year (1991)
• L3 – 5 TB/year (1993)
• Run I (CDF or D0) – 20 TB/year (1995)
• SLD – 3 TB/year (1998)
• KTeV – 50 TB/year (1999)
• BaBar – 0.3 PetaByte/year (2001)
• CDF or D0 Run II – 0.5 PetaByte/year (2003)
• LHC Mock Data Challenge – 1 PetaByte/year (~2005)
• CMS or ATLAS – 2 PetaBytes/year (~2008)
In 1977 the Upsilon (bottom quark) was discovered at Fermilab by experiment E288, led by now Nobel laureate Leon Lederman. The experiment took about 1 million events and recorded the raw data on ~300 magnetic tapes, for about 6 GB of raw data.
Large Hadron Collider at CERN
LHC Open Network Environment
Fermi National Accelerator Laboratory
35 Mile LAN PHY
to StarLight
Magnetic Fusion Energy
New Sources
Of Power
Source: DOE
ITER (Formerly the International Thermonuclear Experimental Reactor)
• ITER Is One of the World's Largest and Most Ambitious International Science Projects – Extremely Data Intensive
Fusion Energy Research
KSTAR, or Korea Superconducting Tokamak Advanced Research:
Magnetic Fusion Device At the National Fusion Research Institute in
Daejeon, South Korea. KSTAR Is Providing Major Contributions To ITER.
Spallation Neutron Source (SNS) at ORNL
Neutron Beams Are Directed At
Different Types of Materials
To Investigate Their Atomic Properties,
Including Structures
Source: DOE
Real-Time Global e-Very Long Baseline Interferometry
DRAGON (Dynamic Resource Allocation via GMPLS Optical Networks)
Real-time e-VLBI data correlation from telescopes in the USA, Sweden, the Netherlands, the UK and Japan (a provisioning sketch follows the site list)
http://dragon.maxgigapop.net
• Mid-Atlantic Crossroads (MAX) GigaPoP, USA
• Information Sciences Institute, USA
• Westford Observatory, MIT Haystack, USA
• Goddard Geophysical and Atmospheric Observatory, NASA, USA
• Kashima, NICT, Japan
• Onsala, Sweden
• Jodrell Bank, UK
• JIVE, The Netherlands
• Westerbork Observatory/ASTRON, The Netherlands
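A toy Python sketch of the reserve/use/release pattern that a dynamic lightpath control plane such as DRAGON's GMPLS-based system exposes to applications; the names below are hypothetical illustrations, not DRAGON's actual interface:

```python
# Toy sketch of the reserve/use/release pattern a dynamic lightpath control
# plane (e.g., GMPLS-based, as in DRAGON) exposes to applications.
# All names are hypothetical, not DRAGON's real interface.
import uuid

class LightpathControlPlane:
    def __init__(self):
        self.active = {}

    def reserve(self, src, dst, gbps, duration_s):
        path_id = uuid.uuid4().hex[:8]
        self.active[path_id] = (src, dst, gbps, duration_s)
        print(f"reserved {gbps} Gbps lightpath {path_id}: {src} -> {dst}")
        return path_id

    def release(self, path_id):
        src, dst, gbps, _ = self.active.pop(path_id)
        print(f"released lightpath {path_id} ({src} -> {dst})")

cp = LightpathControlPlane()
# e-VLBI style usage: a correlation run gets a dedicated circuit for hours,
# then the wavelength is returned to the shared pool.
path = cp.reserve("Onsala", "MIT Haystack", gbps=10, duration_s=4 * 3600)
# ... stream telescope data over the dedicated circuit ...
cp.release(path)
```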
Onsala Observatory
The Australia to Netherlands Lightpath
Route
Credit: EXPReS
eVLBI JIVE-Arecibo Project
Square Kilometer Array
Today: Prototypes – Future: Australia and South Africa
Increasing Accuracy in Hurricane Forecasts
Ensemble Runs With Increased Resolution
5.75 Day Forecast of Hurricane Isidore
Higher Resolution Research Forecast vs. Operational Forecast (Resolution of the National Weather Service); NASA Goddard Using Ames Altix
4x Resolution Improvement – InterCenter Networking is the Bottleneck
Intense Rain Bands and Eye Wall Resolved
Source: Smarr; Bill Putman, Bob Atlas, GSFC
Climate Modeling
Source: DOE
SPICE: Part of UK e-Science Initiative
Interactive Molecular Dynamics Simulation
SC05 HPC Analytics Challenge Award; ISC Life Sciences Award 2005
TeraGrid + UK e-Science Grid Over UKLight at StarLight: Uses steered molecular dynamics to pull a DNA strand through hemolysin, a channel protein
Problem size = ~250,000 atoms
Run time on normal servers = 25 years
Source: UCL
Large Scale Multi-National Molecular Modeling
Interactive
Molecular Dynamics
Simulations At Scale
Molecular Modeling of Dendrimers for Nanoscale Applications
By Tahir Cagin*, Guofeng Wang, Ryan Martin, and William A. Goddard III California Institute of Technology,
Materials and Process Simulation Center and Division of Chemistry and Chemical Engineering
Source: UCL
Modeling Fundamental Properties
Characterizing how turbulence within sheets of electrons generates structures
“Flux ropes” from simulations by Homa Karimabadi and Collaborators UCSD
USGS Will Create Images Many Thousands Of Times
More Detailed With Landsat 8 In 2013
Shane DeGross, Telesis
USGS
Resolution Can Be Less Than 6 inches
Source: EVL, UIC
USGS Images 10,000 Times
More Data than Landsat7
Landsat7 Imagery
100 Foot Resolution
Draped on elevation data
Shane DeGross, Telesis
USGS
New USGS Aerial Imagery
At 6-inch Resolution
Source: EVL, UIC
Today’s Aerial Imaging is >500,000
Times More Detailed than Landsat7
Shane DeGross
SDSU Campus
30 meter pixels
Source: Eric Frost, SDSU
Laurie Cooper, SDSU
4 centimeter pixels
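The detail ratios on these slides follow from squaring the linear resolution ratio; a quick check in Python, with pixel sizes taken from the slides:

```python
# Data volume for 2-D imagery grows with the square of the linear
# resolution ratio; pixel sizes below are taken from the slides.
landsat_pixel_m = 30.0    # Landsat7, "30 meter pixels"
aerial_pixel_m = 0.04     # new aerial imagery, "4 centimeter pixels"

linear_ratio = landsat_pixel_m / aerial_pixel_m        # 750x finer
print(linear_ratio ** 2)  # 562500.0 -> the ">500,000 times more detailed" claim
```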
The Crisis Response Room of the Future
SHD Streaming TV -- Immersive Virtual Reality – 100 Megapixel Displays
Source: EVL, UIC
Brain Imaging Collaboration -- UCSD & Osaka Univ.
Using Real-Time Instrument Steering and HDTV
Southern California OptIPuter
Most Powerful Electron
Microscope in the World - Osaka, Japan
HDTV
Source: Mark Ellisman, UCSD
UCSD
OptIPuter JuxtaView Software for Viewing
High Resolution BioImages on Tiled Displays
30 Million Pixel Display
NCMIR Lab UCSD
Source: David Lee, Jason
Leigh, EVL, UIC
Laboratory for the Ocean Observatory Knowledge
Integration Grid (LOOKING)
Remote Interactive HD Imaging of Deep Sea Vent
Canadian-U.S. Collaboration
Source: John Delaney & Deborah Kelley, UWash
New OptIPuter Driver: Gigabit Fibers on the Ocean Floor
A Working Prototype Cyberinfrastructure for NSF’s ORION
www.neptune.washington.edu
• NSF ITR with Principal Investigators
– John Orcutt & Larry Smarr - UCSD
– John Delaney & Ed Lazowska –UW
– Mark Abbott – OSU
•
Collaborators at:
– MBARI, WHOI, NCSA, UIC, CalPoly,
UVic, CANARIE, Microsoft,
NEPTUNE-Canada
Southern California Coastal Ocean
Observing System
www.sccoos.org/
LOOKING (Laboratory for the Ocean Observatory Knowledge
Integration Grid) –
Adding Web Services to LambdaGrids
Goal – From Expedition to Cable Observatories with Streaming Stereo HDTV Robotic Cameras
http://disney.go.com/disneypictures/aliensofthedeep/alienseduguide.pdf
Scenes from The Aliens of the Deep, directed by James Cameron & Steven Quale
Source: Smarr, Cameron
MARS New Gen Cable Observatory Testbed: Capturing Real-Time Basic Environmental Data
Central Lander; Tele-Operated Crawlers
MARS Installation Oct 2005 - Jan 2006
Source: Jim Bellingham, MBARI
PI Larry Smarr
Marine Genome Sequencing Project
Measuring the Genetic Diversity of Ocean Microbes
CAMERA will include
All Sorcerer II Metagenomic Data
Source: Larry Smarr, C. Venter
Looking Back Nearly 4 Billion Years
In the Evolution of Microbe Genomics
Science Falkowski and Vargas 304 (5667): 58
Beyond Traditional Web-Accessible Databases
[Diagram: a web portal front end sends requests (pre-filtered, queries metadata) to a data backend (DB, files) and returns responses, federating PDB, BIRN, NCBI Genbank + many others]
Source: Phil Papadopoulos, SDSC, Calit2
Craig Venter Announces Creation of the First Synthetic Life Form
"Creation of a Bacterial Cell Controlled by a Chemically Synthesized Genome"
Daniel G. Gibson, John I. Glass, Carole Lartigue, Vladimir N. Noskov, Ray-Yuan Chuang, Mikkel A. Algire, Gwynedd A. Benders, Michael G. Montague, Li Ma, Monzia M. Moodie, Chuck Merryman, Sanjay Vashee, Radha Krishnakumar, Nacyra Assad-Garcia, Cynthia Andrews-Pfannkoch, Evgeniya A. Denisova, Lei Young, Zhi-Qing Qi, Thomas H. Segall-Shapiro, Christopher H. Calvey, Prashanth P. Parmar, Clyde A. Hutchison III, Hamilton O. Smith, J. Craig Venter
Published in Science, May 20, 2010. DOI: 10.1126/science.1190719
Use of OptIPortal
to Interactively View Microbial Genome
15,000 x 15,000 Pixels
Acidobacteria bacterium Ellin345 (NCBI)
Source: Raj Singh, UCSD
Soil Bacterium 5.6 Mb
Advanced Visualization Enabled
By Optical Lightpaths
The 200-million-pixel HIPerWall at Calit2 on the UC Irvine campus
is now part of the OptIPortal Collaboratory.
Source: UCSD
Earth and Planetary Sciences are
an OptIPuter Large Data Object Visualization Driver
EVL Varrier Autostereo 3D Image
SIO 18 Mpixel IBM OptIPuter Viz Cluster
SIO HIVE 3 Mpixel Panoram
Source: Smarr, Defanti: OptIPuter
3D Modeling and Simulation
Distributed Simulation Analysis
• Sandia National Laboratories, USA
• High Performance Computing Center Stuttgart (HLRS), Germany
• Thanks to the Computer Services for Academic Research Centre (Manchester, UK), the Centre of Virtual Environments at University of Salford (UK), Tsukuba Advanced Computing Center (Japan) and Pittsburgh Supercomputing Center (USA) for additional supercomputing resources.
This application emphasizes distributed parallel supercomputing and a collaborative virtual-reality computational steering environment applied to Grand Challenge problems.
Source: Tom DeFanti, Dan Sandin, EVL
VR TeleImmersive Haptics
Haptic Collaboration in a Networked Virtual Environment
• Gifu MVL Research Center, Telecommunications Advancement Organization (TAO), Japan
• University of Tokyo, Japan
A demonstration of wearable haptic gear (touch and force) communication, as well as visual communication – the first public demonstration between the CAVE at iGrid 2000 and CABIN at the University of Tokyo.
www.cyber.rcast.u-tokyo.ac.jp/mvl2/
The StarCAVE as a 25kW or 50kW Browser: Tom DeFanti, Calit2, UCSD
• Next step up in res/power: KAUST, Saudi Arabia (CORNEA)
• 24 Sony 4K projectors, 100 Million Pixels/eye
• 240,000 lumens!
• Mechdyne/Iowa State Design based on the original 1991 EVL CAVE
Source: Tom DeFanti, UCSD, KAUST
Digital Media (iGrid 2000, Yokohama, Japan – USA, Canada, Japan, Singapore, Netherlands, Sweden, CERN, Spain, Mexico, Korea)
GiDVN: Global Internet Digital Video Network
• Digital Video Working Group, Coordinating Committee for International Research Networks
• CERN, Switzerland
• APAN, Japan; KDD, Japan
• APAN-KR, Korea; Seoul National University, Korea
• SURFnet, The Netherlands
• DFSCA-UNAM, Mexico
• SingAREN, Singapore
• Universitat Politecnica de Catalunya, Spain
• Royal Institute of Technology, Sweden
• Int'l Center for Advanced Internet Research (iCAIR), Northwestern, USA
GiDVN projects have enhanced media capabilities for the next-generation Internet, enabling new applications to interoperate throughout the world.
www.icair.org/inet2000
High-Performance Digital Media
For Interactive Remote Visualization (2006)
• Interactive visualization coupled with computing resources and data storage archives over optical networks enhances the study of complex problems, such as the modeling of black holes and other sources of gravitational waves.
• HD video teleconferencing is used to stream the generated images in real time from Baton Rouge to Brno and other locations.
• Center for Computation and Technology,
Louisiana State University (LSU), USA
• Northwestern University
• MCNC, USA
• NCSA, USA
• Lawrence Berkeley National Laboratory,
USA
• Masaryk University/CESNET, Czech
Republic
• Zuse Institute Berlin, Germany
• Vrije Universiteit, NL
www.cct.lsu.edu/Visualization/iGrid2005
http://sitola.fi.muni.cz/sitola/igrid/
4K Media
4K Digital Media: Ultra High Definition Digital Communications
Digital communications using SHD transmit extra-high-quality, digital, full-color, full-motion images.
4K pixels horizontal, 2K vertical (4 * HDTV, 24 * DVD)
4K Video is approximately 4X standard HD
HD = 1280x720 or 1920x1080 pixels
4K = 3840x2160 pixels
www.onlab.ntt.co.jp/en/mn/shd
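For context, a rough uncompressed bit-rate calculation (assuming 24 bits/pixel RGB at 30 frames/s; production systems compress heavily) shows why 4K streaming drove multi-gigabit lightpath demand:

```python
# Rough uncompressed bit rates for the formats on this slide, assuming
# 24 bits/pixel RGB at 30 frames/s (production systems compress heavily).
def uncompressed_gbps(width, height, bits_per_pixel=24, fps=30):
    return width * height * bits_per_pixel * fps / 1e9

print(f"HD 1080: {uncompressed_gbps(1920, 1080):.2f} Gbps")  # ~1.49 Gbps
print(f"4K     : {uncompressed_gbps(3840, 2160):.2f} Gbps")  # ~5.97 Gbps
# The 4:1 ratio is why 4K streaming motivated multi-gigabit lightpaths.
```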
8k Media Experiments At the Univ of Essex
iCAIR 3D HD HPDMnet Demonstration At GLIF, Daejeon, South Korea, Oct 2009
APAN Congress Multi-Continental Performance
Event In Thailand In Feb 2012
Performers in Spain, Brazil, Thailand
Event Made Possible via
HPDMnet
Dynamic Network Provisioning
UCSD Calit2
Source: Tom DeFanti
Adler Planetarium/MuseNet Initiative
Adler Has Begun to Produce 8K Movies For Projection on its Domed Auditorium Screen
Adler Planetarium, Chicago
Currently Investigating Options For an Optical Fiber Based "Museum Network" (MUSEnet) That Would Connect Multiple Museums To StarLight
Cooperative Project: iCAIR, NCLT
StarLight International/National Communications
Exchange Facility
Abbott Hall, Northwestern University’s Chicago Campus
StarLight – "By Researchers For Researchers"
StarLight is an experimental optical infrastructure and proving ground for network services optimized for high-performance applications
GE+2.5+10GE Exchange
Soon: Multiple 10GEs Over Optics – World's "Largest" 10GE Exchange
First of a Kind: Enabling Interoperability At L1, L2, L3
View from StarLight
Abbott Hall, Northwestern University’s
Chicago Campus
StarLight Infrastructure
StarLight is a large research-friendly co-location facility with
space, power and fiber that is being made available to
university and national/international network collaborators
as a point of presence in Chicago
• An Advanced Network for Advanced Applications
• Designed in 1993; Initial Production in 1994; Managed at L2 & L3
• Created by a Consortium of Research Organizations – over 20
• Partner to STAR TAP/StarLight, I-WIRE, NGI and R&E Net Initiatives, Grid and Globus Initiatives, etc.
• Model for Next Generation Internets
• Developed World's First GigaPOP
• Next – the "Optical MREN"
• Soon – Optical "TeraPOP" Services
iCAIR is a founding partner of the GLIF, a world-wide facility managed by a
consortium of institutions, organizations, consortia and country National
Research & Education Networks who voluntarily share optical networking
resources and expertise to develop the Global LambdaGrid for the
advancement of scientific collaboration and discovery.
[GLIF global lambda infrastructure maps, 2008 and 2011]
IRNC TL/SL Deliverables
• Continue Enabling Multi-National Application and
Middleware Experiments Through Innovative Services and
Technologies On International Networks:
– High-Performance Digital Media Network (HPDMnet)
– iGENI: the GENI-funded international GENI project* ##
– SAGE: connecting people and their data at high-res*
– CineGrid: it's all about visual communications
– GreenLight International: less watts/terabyte*
– Science Cloud Communication Services Network (SCCSnet)*: the impending disruption
• Build Cooperative National and International Partnerships*
• Provide New Services, Including Many with Industrial
Partners
• Capitalize On Other Emerging Opportunities*
## Now, In Part, A CISE/OCI Partnership!!
*Currently also funded by various NSF awards to UCSD/UIC/NU
CA*net4 has 10x10Gb and Equipment at
StarLight
Source: CANARIE
SURFnet6 National Optical R&E Network
• High Performance Optical
Switching
• Numerous 10 Gbit/s
Lightpaths
• Dynamic Provisioning
• 500,000 Users
• 84 Institutes
GLIF EU Middle East India 2011
TransLight/Pacific Wave
10GE Wave Facilitates US West Coast Connectivity
Developing a distributed exchange facility on the US West Coast (currently
Seattle, Sunnyvale and Los Angeles) to interconnect international and US
research and education networks
www.pacificwave.net/participants/irnc/
JGN2plus Network Topology Map
National Institute of Information and Communications Technology (NICT)
[Map: the JGN2plus research network across Japan – core nodes at Sapporo, Sendai, Kanazawa, Nagano, Otemachi (Tokyo), Nagoya, Osaka, Okayama, Kochi, Fukuoka and Okinawa, plus NICT research centers (Koganei, Tsukuba, Keihanna, Kita Kyushu, Kashima, Iwate), linked at 20 Gbps/10 Gbps/1 Gbps with optical testbeds, and dozens of 100 Mbps–10 Gbps access points at universities and regional facilities; a 10 Gbps link connects Otemachi to StarLight, USA]
KREONet2, KRLight
e-Science based KREONET/KREONet2/SuperSIReN → joining the global Cyber Infrastructure
[Map: KREONET/KREONet2/SuperSIReN nodes across Korea (Seoul, Incheon, Suwon, Chonan, Daejeon, Jeonju, Daegu, Changwon, Gwangju, Busan, Pohang) with international links via APII and TEIN to Japan/SINET, Singapore/SingAREN, China/CSTNET, and Europe; attached high-performance S&T facilities include high-performance clusters/supercomputers, storage, experimental facilities, visualization, Access Grid, and DB servers]
Source: KREONET
GÉANT
[GLIF maps: Europe and South America, 2011]
ESnet4 Configuration: 160-400 Gbps in 2011-2013
[Map: ESnet4's IP core and Science Data Network core spanning the US (hubs including Boston, New York, Washington DC, Denver, Albuquerque, Tulsa, El Paso, Jacksonville, LA, San Diego, Boise), with international connections to Canada (CANARIE), Asia-Pacific, Australia, Europe (GÉANT), CERN (30 Gbps on each path), GLORIAD (Russia and China), and South America (AMPATH); production IP core at 10 Gbps, SDN core at 20-40 Gbps, MANs or backbone loops for site access at 20-60 Gbps, high-speed cross-connects with Internet2/Abilene; the core network fiber path is ~14,000 miles / 24,000 km]
DOE ESnet Advanced Networking Initiative: 100 Gbps
NIH Biomedical Informatics Research Network (BIRN)
International Federated Repositories
BIRN Collaboratory today: enabling collaborative research at 28 research institutions comprising 37 research groups.
www.nbirn.net
NASA's NISN is at StarLight
• Starlight (Commercial Ameritech NAP): Most Important Connections to Mid-West & SouthEast Regional networks, Tier 1 ISP networks
• NGIX East: New Generation IP Connections to East Coast Regional networks, Tier 1 ISP networks, and Europe
• NGIX West: New Generation IP Connections to West Coast Regional networks, Tier 1 ISP networks, the PacRim, Japan, and Australia
• MAE East: Most Important Connections to East Coast Regional networks, Tier 1 ISP networks, and Europe
• MAE West: Most Important IP Connections to West Coast Regional networks, Tier 1 ISP networks, the PacRim, Japan, and Australia
Source: NASA
DREN Network Is At StarLight
Source: John Silvester
Illinois Rural HealthNet Logical Network Map
[Map: a statewide fiber network linking well over 100 rural hospitals, clinics, and health centers across Illinois – FHN family healthcare sites, community and memorial hospitals, university medical facilities (NIU, ISU, U of I, UIUC Medical Center), and the DuPage National Tech Park (ISP services) – to StarLight, MREN, and Internet2]
StarWave: A Multi-100 Gbps Facility
• StarWave, a New Advanced Multi-100 Gbps Facility and Services, Will Be Implemented Within the StarLight International/National Communications Exchange Facility
• StarWave Is Being Funded To Provide Services To Support Large Scale Data Intensive Science Research Initiatives
• Facility Components Will Include:
  – An ITU G.709 v3 Standards Based Optical Switch for WAN Services, Supporting Multiple 100 G Connections
  – An IEEE 802.3ba Standards Based Client Side Switch, Supporting Multiple 100 G Connections and Multiple 10 G Connections
  – Multiple Other Components (e.g., Optical Fiber Interfaces, Measurement Servers, Test Servers)
Selected 100 Gbps Partners
• University of Illinois at Chicago ARI 100 G Facility
• National Center for Supercomputing Applications Blue Waters Petascale Computing Facility
• CHI MAN/CHI Express Chicago Metro Area Optical Transport Facility for Argonne National Laboratory, Fermi National Accelerator Laboratory and StarLight
• Illinois Wired/Wireless Infrastructure for Research and Education (I-WIRE) V.2.0
• NorthernWave 100 Gbps Facility
• Several State Networks
• Northwestern University, University of Chicago
• US National R&E Networks
• ESnet (Energy Sciences Network 100 G Advanced Networking Initiative)
• ESnet 100 Gbps Testbed
• NASA Goddard Space Flight Center
• CANARIE, SURFnet, Other International Networks
• Various CIC Related Networks
• National Oceanic and Atmospheric Administration
• Mid-Atlantic Crossroads, Pittsburgh Supercomputing Center, 3ROX, etc.
• Various 100 Gbps Network Testbeds
• StarLight International/National Communications Exchange
CA*net/Ciena/StarLight/iCAIR 100 Gbps
Testbed Sept-Oct 2010
100 Gbps
~850 Miles, 1368 Km
Source: CANARIE
National Science Foundation’s Global
Environment for Network Innovations (GENI)
• GENI Is Funded By The National Science Foundation's Directorate for Computer and Information Science and Engineering (CISE)
• GENI Is a Virtual Laboratory For Exploring Future Internets At Scale
• GENI Is Similar To Instruments Used By Other Science Disciplines, e.g., Astronomers – Telescopes, HEP – Synchrotrons
• GENI Creates Major Opportunities To Understand, Innovate and Transform Global Networks and Their Interactions with Society
• GENI Is Dynamic and Adaptive
• GENI Opens Up New Areas of Research at the Frontiers of Network Science and Engineering, and Increases the Opportunity for Significant Socio-Economic Impact
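A toy Python illustration of GENI's core abstraction (hypothetical names, not the real GENI aggregate manager API): a researcher composes a "slice" of virtualized resources drawn from multiple independent aggregates, then experiments at scale inside it:

```python
# Toy illustration of GENI's core abstraction (hypothetical names, not the
# real GENI aggregate manager API): a "slice" composes virtualized resources
# from multiple independent aggregates into one experiment environment.
class Aggregate:
    def __init__(self, name, free_nodes):
        self.name = name
        self.free_nodes = free_nodes

    def allocate(self, count):
        if count > self.free_nodes:
            raise RuntimeError(f"{self.name}: not enough free nodes")
        self.free_nodes -= count
        return [f"{self.name}-node{i}" for i in range(count)]

def create_slice(slice_name, requests):
    """requests: list of (aggregate, node_count) pairs, possibly worldwide."""
    resources = []
    for aggregate, count in requests:
        resources += aggregate.allocate(count)
    print(f"slice '{slice_name}': {resources}")
    return resources

create_slice("net-experiment-1",
             [(Aggregate("starlight", 8), 2), (Aggregate("utah", 16), 4)])
```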
iGENI: The International GENI
• The iGENI Initiative Will Design, Develop, Implement, and Operate a Major New National and International Distributed Infrastructure
• iGENI Will Place the "G" in GENI, Making GENI Truly Global
• iGENI Will Be a Unique Distributed Infrastructure Supporting Research and Development for Next-Generation Network Communication Services and Technologies
• This Infrastructure Will Be Integrated With Current and Planned GENI Resources, and Operated for Use by GENI Researchers Conducting Experiments that Involve Multiple Aggregates At Multiple Sites
• iGENI Infrastructure Will Connect Its Resources With Current GENI National Backbone Transport Resources, With Current and Planned GENI Regional Transport Resources, and With International Research Networks and Projects, Building On Existing Partnerships
• Current & Future International Partners Include Researchers From Many Countries:
• Australia
• Brazil
• Canada
• China
• Egypt
• Germany
• India
• Japan
• Korea
• Taiwan
• Singapore
• Netherlands
• Spain
• New Zealand
• Sweden
• United Kingdom
• Et Al.
Experiments Planned For S3 (Spiral 3) = Many; Ref, For Example – TransCloud
Alvin AuYoung, Andy Bavier, Jessica Blaine, Jim Chen, Yvonne Coady, Paul Muller, Joe Mambretti, Chris Matthews, Rick McGeer, Chris Pearson, Alex Snoeren, Fei Yeh, Marco Yuen
Example of working in the TransCloud
[1] Build trans-continental applications spanning clouds:
• Distributed query application based on Hadoop/Pig
• Store archived network trace data using HDFS
• Query data using Pig over Hadoop clusters
[2] Perform distributed query on TransCloud, which currently spans the following sites:
• HP OpenCirrus
• Northwestern OpenCloud
• UC San Diego
• Kaiserslautern
• Use By Outside Researchers? Yes
• Use Involving Multiple Aggregates? Yes
• Use for Research Experiments? Yes
Ref: Experiments in High Perf Transport at GEC 7
Demo: http://tcdemo.dyndns.org/
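The demo stored network traces in HDFS and queried them with Pig over Hadoop clusters; this pure-Python sketch (made-up records, illustrative only) mimics the map/shuffle/reduce stages such a query compiles to:

```python
# The TransCloud demo stored network traces in HDFS and queried them with
# Pig over Hadoop clusters. This pure-Python sketch (made-up records) mimics
# the map/shuffle/reduce stages such a query compiles to: total bytes per
# destination across distributed trace files.
from collections import defaultdict

traces = [               # (src, dst, num_bytes), as if read from HDFS
    ("10.0.0.1", "10.0.0.9", 1500),
    ("10.0.0.2", "10.0.0.9", 900),
    ("10.0.0.1", "10.0.0.7", 400),
]

totals = defaultdict(int)
for src, dst, num_bytes in traces:   # map: emit (dst, bytes); reduce: sum
    totals[dst] += num_bytes

for dst, total in sorted(totals.items()):
    print(dst, total)
```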
Digital Media Transcoding Demonstration
• TransCloud: an Advanced Distributed Global Environment that Enables Dynamic Creation of Communication Services, Including Those Based on Rapid Migration of Virtual Network and Cloud Resources
• TransCloud: a Set of Protocols, Standards, and Management Software that Enables Interoperation of Distinct Cloud and Network Resources
• Example: Dynamic Cloud + Dynamic Network for Digital Media Transcoding Using a Single Platform vs. Multiple Infrastructures
[Diagram: video sources feed OpenFlow switches that steer streams among Transcoding Clouds 1, 2, and 3]
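A minimal Python sketch of the match/action model that OpenFlow switches use to steer video flows toward a selected transcoding cloud; the rules and names here are hypothetical, not the demo's actual controller code:

```python
# Minimal sketch of the OpenFlow match/action model used to steer the demo's
# video streams toward a selected transcoding cloud. The rules and names are
# hypothetical, not the demo's actual controller code.
flow_table = [
    # (match fields, action) - first matching rule wins
    ({"ip_dst": "video-sink", "udp_port": 5004}, "forward:transcoding-cloud-1"),
    ({"ip_dst": "video-sink"},                   "forward:transcoding-cloud-2"),
]

def handle_packet(packet):
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "send-to-controller"          # table miss: ask the controller

print(handle_packet({"ip_dst": "video-sink", "udp_port": 5004}))
print(handle_packet({"ip_dst": "video-sink", "udp_port": 9}))
print(handle_packet({"ip_dst": "elsewhere"}))
```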
InstaGENI
• An Innovative Highly Programmable Distributed Environment for Network Innovation
• Multiple Organizational Partners
  – HP Research Labs, iCAIR, Northwestern University, Princeton University, University of Utah, GENI Program Office, New York University, Clemson, Naval Postgraduate School, Virginia Tech, UCSD, UCI, Texas A&M, George Mason University, University of Connecticut, University of Hawaii, Georgia Tech, Vanderbilt, University of Alabama, University of Alaska, Many International Universities, StarLight Consortium, Metropolitan Research and Education Network, etc.
  – 34 Instantiations Around the US
Technology Communication Activities
• Scholarly Publications in Peer Review Journals and Books
• Participation in Technology Standards Organizations
• Organization of Seminars, Workshops, National and International Conferences
• Demonstrations at Major National and International Conferences
• Participation in Cooperative R&D Projects with Government Agencies and Corporations
• Participation in Strategic Planning Projects with Government Agencies and Corporations
• Participation on Review, Evaluation and Advisory Committees
• Participation in Large Scale Communications Infrastructure Policy Initiatives
• General Press, Media Coverage
iCAIR/StarLight/MREN: Continually
Progressing Forward…
www.startap.net/starlight
Thanks to the NSF, DOE, DARPA
Universities, National Labs,
International Partners,
and Other Supporters