ICFASCIC_DDExecReport_V4.2

International Committee for Future Accelerators (ICFA)
Standing Committee on Inter-Regional Connectivity (SCIC)
Chairperson: Professor Harvey Newman, Caltech
ICFA SCIC Digital Divide Executive Report
Prepared by the ICFA SCIC Digital Divide Working Group
Chairperson: Professor Alberto Santoro, UERJ, Rio de Janeiro, Brazil
Members: Heidi Alvarez heidi@fiu.edu;
Julio Ibarra julio@fiu.edu;
Slava Ilyin Ilyin@sinp.msu.ru;
Marcel Kunze Marcel.Kunze@hik.fzk.de;
Yukio Karita yukio.karita@kek.jp;
David O. Williams David.O.Williams@cern.ch;
Victoria White white@fnal.gov;
Harvey B. Newman newman@hep.caltech.edu
February 8, 2003
Table of Contents
I. Introduction and Overview ...................................................................................... 4
II. Methodology .......................................................................................................... 6
III. Conclusions ............................................................................................................ 7
IV. Recommendations ............................................................................................... 12
V. Remaining Actions: Path to a Solution - A Worldwide CyberInfrastructure for Physics .............................................................................................................................. 15
Appendix A – The Americas, with a Focus on South and Central America, and
Caribbean NRN Information ......................................................................................... 16
The Americas – South and Central America, the Caribbean and Mexico ................ 16
Research Networks in Latin America ................................................................... 17
Infrastructure in Latin America............................................................................ 19
Links to Regional High Energy Physics Centers and Networking Information . 25
South and Central America ................................................................................... 25
The Americas – North America ..................................................................................... 26
United States ............................................................................................................ 26
Canada ..................................................................................................................... 28
Appendix B – Questionnaire / Responses ..................................................................... 30
ICFA SCIC Questionnaire – Responses by Russian Sites..................................... 58
Appendix C - Case Studies ............................................................................................. 59
Rio de Janeiro, Brazil – Network Rio ( REDE RIO) ................................................... 59
Beijing, China .................................................................................................................. 61
Table of Tables
TABLE 1: CHARACTERISTICS OF RESEARCH AND EDUCATION NETWORKS IN LATIN AMERICAN
COUNTRIES ..........................................................................................................................................19
TABLE 2: SUBMARINE FIBER-OPTIC CABLES TO CALA REGION WITH TOTAL BANDWIDTH CAPACITY 20
TABLE 3: RESPONDENTS...............................................................................................................................33
TABLE 4: CONNECTION STATUS...................................................................................................................35
TABLE 5: CONNECTION DETAILS .................................................................................................................39
TABLE 6: OTHER NETWORKING NEEDS .......................................................................................................43
TABLE 7: MOST RELEVANT NETWORKING RELATED PROBLEMS................................................................46
TABLE 8: PRESENTED IDEAS FOR PROSPECTIVE SOLUTIONS ......................................................................48
TABLE 9: PRESENT COMPUTING FACILITIES DEDICATED TO HEP,...........................................................50
Table of Figures
FIGURE 1: GLOBAL PHYSICS NETWORK COST MODEL ................................................................................. 9
FIGURE 2: PENETRATION OF COMMUNICATION TECHNOLOGIES VERSUS THE COST AS A % OF PER
CAPITA INCOME............................................................................................................................................10
FIGURE 3: NON-HOMOGENEOUS BANDWIDTH DISTRIBUTION IN SOUTH AMERICA ...................................17
FIGURE 4: SUBMARINE AND TERRESTRIAL OPTICAL CABLE SYSTEMS CONNECTING THE AMERICAS ....20
FIGURE 5: RNP2, BRAZIL'S NATIONAL RESEARCH AND EDUCATION NETWORK .....................................23
FIGURE 6: REDE RIO NETWORK IN BRAZIL, RIO DE JANEIRO ...................................................................24
FIGURE 7: THE ABILENE NETWORK ............................................................................................................27
FIGURE 8: ESNET BACKBONE ......................................................................................................................28
FIGURE 9: CANADA'S CANET4 .....................................................................................................................29
I. Introduction and Overview
In an era of global collaborations and data-intensive Grids, advanced networks are
required to interconnect physics groups seamlessly, enabling them to collaborate
throughout the lifecycle of their work. For the major experiments, high performance
networks are required to make possible Data Grids capable of processing and sharing
massive datasets, rising from the Petabyte to the Exabyte scale within the next decade.
However, as the pace of network advances continues to accelerate, the gap between
the technologically “favored” regions and the rest of the world is, if anything, in
danger of widening. This gap in network capability and the associated gap in access to
communications and Web-based information, e-learning and e-commerce that
separates the wealthy regions of the world from the poorer regions is known as the
Digital Divide. Since networks of sufficient capacity and capability in all regions are
essential for the health of our major scientific programs, as well as our global
collaborations, we must encourage the development and effective use of advanced
networks in all world regions.
It is of particular concern that “Digital Divide” problems will delay, and in some
cases prevent, physicists in less economically favored regions of the world from
participating effectively, as equals, in their collaborations. Physicists in these regions
have the right, as members of the collaboration who are providing their fair-share of
the cost and manpower to build and operate the experiment, to be full partners in the
analysis, and in the process of search and discovery. Achieving these goals requires
the experimental collaborations to make the transition to a new era of worldwide-distributed data analysis, and to a new “culture of collaboration”. As part of the
culture of collaboration, the managements of the laboratory and the experiments
would have to actively foster new working methods that include physicists from
remote regions in the daily ongoing discussions, and knowledge sharing that is
essential for effective data analysis. This same culture is essential if the expert
physicists and engineers from all regions who built detector components and
subsystems are to participate effectively in the commissioning and operation of the
experiment, through “Virtual Control Rooms”. For any of this to happen, high
performance networks are required, in all regions where the physicists and
engineers are located.
The purpose of the Digital Divide sub-committee is to suggest ways to help alleviate
and, where possible, eliminate the problem of inequitable and insufficient broadband connectivity. As
part of this process this report documents the existing worldwide bandwidth
asymmetry in research and education networks.
From the scientists’ point of view, what matters is the end-to-end performance of a
given network path. This path involves the local networks at the laboratories and
universities at the ends of the path; the metropolitan or regional networks within and
between cities; and the national, continental and, in some cases, intercontinental
network backbones along the path1. Compromises in bandwidth or link quality in any
1 There are also issues of configuration and performance of the network interfaces in the computers at the end of the path, and the network switches and routers, which are the concern of the Advanced Technologies Working Group.
of these networks can limit overall performance as seen by other physicists. We can
therefore decompose end-to-end performance problems into three tiers of possible
bandwidth bottlenecks:
 The Long Range (wide area) Connection on national and international backbones
 The “Last Mile”2 Connection across the metropolitan or regional network, and
 The Local Connection across the university or laboratory’s internal local area network
If any one of these components presents a bandwidth bottleneck, or has insufficient
quality leading to poor throughput, it will not be possible to effectively exploit the
next generation of Grid architectures that are being designed and implemented to
enable high-energy physicists in all world regions to work effectively with the data
from experiments such as the LHC.
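The following short calculation is added here for illustration and is not part of the original report. It uses the well-known Mathis et al. approximation for the throughput of a single TCP stream, BW ≈ (MSS/RTT) × 1.22 / sqrt(p), with assumed (hypothetical) values for the round-trip time and the packet loss rate of a long-distance path.

    import math

    def tcp_throughput_mbps(mss_bytes=1460, rtt_ms=150.0, loss_rate=0.001):
        """Mathis et al. approximation: throughput <= (MSS/RTT) * 1.22 / sqrt(p)."""
        rtt_s = rtt_ms / 1000.0
        bytes_per_s = (mss_bytes / rtt_s) * 1.22 / math.sqrt(loss_rate)
        return bytes_per_s * 8 / 1e6  # convert to Mbit/s

    # Assumed example path: 150 ms round-trip time, 0.1% packet loss.
    # Even on a nominally fast link, a single TCP stream is then capped near 3 Mbps.
    print(round(tcp_throughput_mbps(), 1))

Under these assumed values a single stream reaches only about 3 Mbps regardless of the nominal link capacity, which is why link quality at each of the three tiers matters as much as raw bandwidth.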
In addition to the problem of bandwidth of the links themselves, there may be
additional problems preventing the installation or development of adequate networks:
 Lack of consistent policies among organizations (e.g. the research universities themselves) in some world regions that would allow the evolution of bandwidth provisioning to be managed coherently. This can also prevent the establishment of reasonable policies for sharing the bandwidth in cases where it is limited;
 Lack of Competition, due to Government policies (for example monopolies),
or lack of network infrastructure in the region from competing vendors,
leading to very high bandwidth costs;
 Government policies that inhibit or do not encourage international network connections with good performance;
 Lack of cooperation among carriers managing national, regional and
metropolitan networks
 Lack of adequate peering among different academic and research networks
 Lack of arrangements for transit traffic across a national or continental
infrastructure, to link remote collaborative research communities
This report identifies locations where adequate end-to-end network performance is
unavailable for our high-energy physics community, and tries to look into the
underlying reasons.
The work of the Digital Divide sub-committee was motivated, in addition to the need
to solve some of HENP’s major problems, by the realization that solving the end-to-end network problems of HENP will have broad and profound implications for
scientific research. The fields of Astronomy, Medicine, Meteorology, Biological
Sciences, Engineering, and High Energy Physics are recognizing and employing data
intensive applications that heavily and increasingly rely on broadband network
connections. Many scientific and engineering disciplines are building a new world of
2 This connection may also be 10 or 100 miles in practice, depending on the size of the metropolitan or regional network traversed.
computing applications based on Grid architectures that rely, in turn, on increasing
levels of network performance.
The impact of solving the Digital Divide problem extends even beyond the bounds of
the research community. In the twentieth century at least three basic infrastructures
were needed for a modern city to be viable: Pervasive water distribution, drainage
distribution, and electricity distribution through an electrical grid. At the beginning of
the twenty-first century, a modern city also has to be equipped with pervasive optical
network infrastructures that enable advanced technologies in computing and media
communications. These four infrastructures are now a necessary condition
for a society’s advancement in the sciences, arts, education, and business. It is
generally accepted that those countries making concerted, continuing efforts to
provide pervasive access to the latest high speed “broadband” network technologies
and services in residences, schools and the at-large community can look forward to
the greatest progress, economically and culturally, in the 21st century.
The solutions to HENP’s leading-edge problems, providing networks and building
new-generation systems enabling distributed organizations and worldwide
collaboration, if developed and applied in a broad context, could therefore have an
important impact on research, and society.
This Executive Report provides the Digital Divide sub-committee’s main conclusions,
and recommendations. The appendices provide the detailed materials that we have
gathered to support these conclusions and recommendations.
II. Methodology
The Digital Divide sub-committee carried out its work in the following manner:
 Regular phone meetings of the group to chart actions and discuss data
 Produced a questionnaire (see Appendix B) and disseminated it to the management of labs and major collaborations
 Analyzed and tabulated the responses to the questionnaire
 Gathered maps, network topologies and bandwidth measurements from web sources
 Gathered case studies (Appendix C)
o Rio de Janeiro, Brazil
o Beijing, China
o Russia, Digital Divide and Connectivity for HEP in Russia
There exists a small, but well defined and documented, community of research and
education networking organizations throughout the world from which reference
material for this report was drawn. TERENA’s3 recent compendium provides
important information about the current status of Trans-European networks and serves
as a basic input for this evaluation. The US National Science Foundation CISE-ANIR
3 The TERENA (Trans-European Research and Education Networking Association) 2002 Compendium, edited by Bert van Pinxteren, http://www.terena.nl
Division AMPATH Valdivia Group Report4 contains a recent evaluation and
recommendation about Latin American and Caribbean network development from a
collaborative scientific research and educational outreach perspective. In this report
we use data from these sources5 plus information gathered from the questionnaire.
In assembling this report, the sub-committee observed that there is a shortage of readily available documentation. While some material is available on the Internet, it is difficult to locate and present information oriented to the networking needs of physics. The available maps, shown in the Appendix, display the present bandwidth availability at HEP collaboration points per country, and only at the major research centers.
III. Conclusions
1. The bandwidth of the major national and international networks used by the HENP community, as well as of the transoceanic links, is progressing rapidly and has reached the 2.5 – 10 Gbps range. This is encouraged by the continued rapid fall of prices per unit bandwidth for wide area networks, as well as the widespread and increasing affordability of Gigabit Ethernet.
This is important and encouraging because:
a. Academic and Commercial use of, and reliance on, networks is increasing
rapidly as a result of new ways of working.
b. High energy physicists increasingly rely on networks in support of collaborative work with each other and with scientists in other disciplines, such as medicine, astronomy, astrophysics, biology and climate research. Inter-disciplinary interest is stimulated more and more as these domains start to work in international collaborations using the research networks.
c. Universities around the world have been using the Internet (including advanced private Internet networks) for some time now. Students everywhere attend courses across many scientific domains. Many of these courses are now being developed as interactive experiences and lectures delivered at long distance using IP-based research networks or the Internet to varying degrees. Once the infrastructure is in place, it becomes an economical alternative to expensive ISDN dial-up networking for lecture delivery by videoconference.
d. Grids offer an enabling technology that permits the transparent coupling of geographically-dispersed resources (machines, networks, data storage, visualization devices, and scientific instruments) for large-scale distributed applications. They provide several important benefits for users and applications: convenient interfaces to remote resources, resource coupling for resource intensive distributed applications and remote collaboration, and resource sharing. They embody the confluence of high performance parallel computing, distributed computing, and Internet computing, in support of “big science.” They require a high performance and reliable network infrastructure. For LHC experiments they will be essential.
4 http://ampath.fiu.edu/Valdivia_Report.pdf NSF ANI-0220176
5 Additional information that appeared at the time of this report, focused on Latin America, may be found at the AMPATH January 2003 Miami Workshop: “Fostering Collaborations and Next Generation Infrastructure”, http://www.ampath.fiu.edu/Miami2003.htm
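To give a rough sense of the data volumes behind these Grid requirements, the sketch below is an illustration added to this report: the dataset size is an assumed example and the links are idealized as fully dedicated, with no protocol overhead or sharing. It converts a sample size into transfer times at several of the link speeds discussed in this report.

    # Idealized transfer time over a fully dedicated link; sizes and speeds
    # are illustrative assumptions, not figures from the report.
    def transfer_days(size_terabytes, link_gbps):
        bits = size_terabytes * 1e12 * 8
        seconds = bits / (link_gbps * 1e9)
        return seconds / 86400.0

    for gbps in (0.155, 0.622, 2.5, 10.0):
        print(f"100 TB at {gbps:5.3f} Gbps: {transfer_days(100, gbps):6.1f} days")

Moving an assumed 100 TB sample takes roughly two months at 155 Mbps but under a day at 10 Gbps, which is the scale of difference that separates full partners in the analysis from spectators.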
2. A key issue for our field is to close the Digital Divide in HENP, so that
scientists from all regions of the world have access to high performance
networks and associated technologies that will allow them to collaborate as
full partners: in experiment, theory and accelerator development.
This is not the case at present. Through its work, the Digital Divide sub-committee noted several examples where inadequate network infrastructure adversely affects the ability of scientists to participate fully, and identified some of the obstacles that cause this Digital Divide.
a. In current collider experiments, such as the DZero experiment at the
Fermilab Tevatron, potentially valuable computing (and human) resources
that could be used for Monte Carlo data production, or for analysis of data
cannot be harnessed due to insufficient bandwidth to transfer data
between, for example, sites in Brazil and Fermilab.
b. Among the problems pointed out by the community, the most important are:
1. Cost: The high cost of bandwidth makes it prohibitively expensive to upgrade the research networks in several countries6.
The cost issue is complex, as illustrated in Figure 1. A typical
physics group in a “remote region” may find itself faced with
cost problems, as well as performance issues, in the local
university network, last mile connection or metropolitan network,
and national and international wide area network. The typical
case at the present time, in regions such as South America,
Southeast Europe, China, India, Pakistan, Russia and parts of
Southeast Asia is that the total cost (Y in the figure) for an
adequate network connection end-to-end, is prohibitive.
6 This high cost may be the direct result of a lack of competition, and of government policies that do not encourage open competition and/or the installation and deployment of new optical infrastructures. Pricing structures imposed by governments (in Latin America for example) may penalize high bandwidth users to subsidize the cost of low bandwidth connections based on older technology (such as modems connected over telephone lines). See the presentation by C. Casasus at http://www.ampath.fiu.edu/Miami2003.htm
Figure 1: Global Physics Network Cost Model
2. Sharing the research networks with the university user communities: In many cases university networks are not properly engineered to support the high demands of HEP research traffic, or the organization supporting the network is not geared (or inclined) to support a “high performance” sub-community on the campus.
3. New technology penetration is slower in poor regions, such as South and Central America, Mexico, Southeast Europe, and large parts of Asia, than in the richer regions. It is generally accepted that once a technology is perceived as having broad utilitarian value, price as a percentage of per capita income is the main driver of penetration: a lower percentage corresponds to a much greater penetration (Figure 2).
The penetration of telecommunications technologies in low income countries is further inhibited by at least three factors:
 Low income per capita;
 Less competition, resulting in higher prices from monopolies;
 Fewer applications, as new applications are perceived to have no broad utilitarian value.
These factors have contributed to the fact that telecom monopolies, and some independent telecom operators as well, charge even higher prices in low income countries (this is the case in Russia, for example).
Speaking about networks in Latin America, a leader of network7 development in Mexico said:
“Broadband is [currently] about expanding the digital divide, not narrowing it. The ones who need [broadband connectivity] should get it. Let educators, researchers, businesses, hospitals and governments get it. In a second [round of development, the] increased income will drive [further] penetration.”
7 Observations provided by Carlos Casasus, CEO of the Corporacion Universitaria Para el Desarrollo de Internet (CUDI), a non-profit corporation in charge of Mexico's Internet2 Project (http://www.ampath.fiu.edu/miami03_bios.htm#ccasasus), at the AMPATH Miami Workshop: Fostering Collaborations and Next Generation Infrastructure, January 2003, http://www.ampath.fiu.edu/Miami03-Presentations/Ccasasus.ppt
Figure 2: Penetration of Communication Technologies versus the Cost as a % of Per Capita Income
3. The rate of progress in the major networks has been faster than foreseen
(even 1 to 2 years ago). The current generation of network backbones,
representing a typical upgrade by a factor of four in speed, arrived in the last
15 months in the US, Europe and Japan. This rate of improvement is 2 to 3
times Moore’s Law. This rapid rate of progress, confined mostly to the US,
Europe and Japan, threatens to open the Digital Divide further, unless we
take action.
4. There are many end-to-end performance problems, particularly for networks in economically poor regions. For many regions, achieving reliable high end-to-end performance is a great challenge and will take much more work in the following areas.
a. End-to-end monitoring of traffic flows extending to all regions serving our community. We must collect data and in some cases conduct simulations of all branches of the research networks in order to understand traffic flow and end-to-end performance. Tools such as PingER and IEPM have been developed by ICFA/SCIC members in order to provide clear and unambiguous information about where there is packet loss. Results from PingER provide information to study the behavior of network segments and to discover bottlenecks. (A minimal monitoring sketch in this spirit is given after this list.)
b. Dissemination of a network engineering cookbook or “best practices”
guidelines for university networks to properly support HEP research
traffic.
It is incumbent on the HEP community to work with the network engineering
communities to formulate this information and spread it widely, as some
university (and city or provincial) networks are not properly set up to support
HEP research traffic. As an example, the Southeastern University Research
Association (SURA), in the United States, published an Optical Networking
Cookbook8, as a result of an NSF-funded Optical Technologies Workshop, to
provide a practical resource that details the ingredients required for optical
networking and offers "recipes" in the form of case studies that illustrate a
variety of optical networking implementations.
c. Ensuring network architectures for the HEP community take into
account Grid application requirements to prevent network bottlenecks
from occurring. To prevent these bottlenecks, the end-to-end performance of
the network path (taking into account the Local Connection, the Last Mile and
the Long-Range Wide-Area connection) must be considered when designing
the network architecture for HEP Grid applications. It has become essential that network architectures take into account the differing levels of requirements of experimental, research and production uses. Network architectures at Tier centers must be sufficiently scalable and flexible to support a cyber-infrastructure9 for experimental, research and production end-to-end Grid application requirements. A cyber-infrastructure10 encompasses the emerging computational, visualization, data storage, instrumentation and networking technologies that support major science and engineering research facilities, enabling e-Science. The cyber-infrastructure approach recommends an architecture that empowers the researchers (application developers) to have access to essential network and computational resources in a coordinated manner, through middleware. ICFA/SCIC should consider adoption of the cyber-infrastructure architecture to ease the Digital Divide conditions the HENP community is presently experiencing.
d. Systematizing and tracking information on network utilization as a function of time, and using this information together with requirements estimates from several fields to predict the network needs over the next decade. This is being done for the major backbones and international links, but we need to generalize this to cover all regions of the world.
Our questionnaire shows that congestion on research networks is being caused
by the large traffic volume that occurs during the day’s peak usage hours.
Even without using the data from the questionnaire, models can be created to
forecast the network load in a 5 to 10 year window, based on the requirements
and a range of new applications coming into use in the following disciplines:
Medical Projects (there are several), Long Distance Education (Universities
throughout the world are organizing courses using the Internet), Astrophysics
and Astronomy, Biology, Genome Projects, Climate, Videoconferences in
general, and so on. These applications will share the same network as HEP
Grids.
e. Dealing with (many) “Last Mile” connection problems. It is insufficient to obtain terrestrial or submarine cable connections between regions. There must be equally capable broadband connectivity between the Points-of-Presence (PoPs) and the institutions where the Tier-N (N=1, 2, 3) data centers are located. In many regions, it is the local, city or state networks that are the problem. In many cases, the link coming into a country has a much greater bandwidth than links within or between cities in the country.
8 http://www.sura.org/opcook
9 http://www.calit2.net/events/2002/nsf/ExpInfostructure_Irvine_FINAL_110502.pdf
10 http://www.evl.uic.edu/activity/NSF/final.html
f. Dealing with (many) Long Range (wide area) connection problems,
between long-distance regional network segments that interconnect the
HEP centers and universities. These segments often serve to interconnect
aggregation points of regional networks and to move large volumes of traffic
between these points. Network congestion and bottlenecks can easily occur
between segments if the network architecture cannot accommodate network
traffic-volume variations and large traffic bursts.
Another problem is “peering” among different networks, where the routing
policies interconnecting two or more networks, managed by separate
organizations, might be poorly configured, because the optimal end-to-end
path was not considered.
Scalable network architectures and pervasive network monitoring are essential
to be able to prevent network congestion and bottlenecks in these cases.
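As mentioned in item (a) above, lightweight ping-based monitoring in the spirit of PingER can help localize packet loss. The following sketch is illustrative only: it assumes a Unix-like host whose ping command accepts the -c and -q options and prints a “% packet loss” summary line, and the list of target hosts is hypothetical, not PingER's curated site list.

    import re
    import subprocess

    # Hypothetical probe targets; a real deployment would use a curated site list.
    SITES = {"CERN": "cern.ch", "Fermilab": "fnal.gov", "UERJ": "uerj.br"}

    def packet_loss_percent(host, count=10):
        """Run the system ping and parse its summary line (Linux-style output)."""
        try:
            out = subprocess.run(["ping", "-c", str(count), "-q", host],
                                 capture_output=True, text=True, timeout=60).stdout
        except (OSError, subprocess.TimeoutExpired):
            return None
        match = re.search(r"([\d.]+)% packet loss", out)
        return float(match.group(1)) if match else None

    for name, host in SITES.items():
        loss = packet_loss_percent(host)
        print(name, host, "loss:", "n/a" if loss is None else f"{loss}%")

Run periodically from several sites, even a probe this simple reveals which segment of a path is losing packets and when congestion peaks occur.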
IV. Recommendations
The world community will only reap the benefits of global collaborations in research
and education, and of the development of advanced network and Grid systems, if we
work to close the Digital Divide that separates the economically and technologically
most-favored from the less-favored regions of the world. We recommend that some of
the following steps be taken to help to close the divide:
 Identify and work on specific problems, country by country and region by
region, to enable groups in all regions to be full partners in the process of
search and discovery in science. We have seen that networks with adequate
bandwidth tend to be too costly or otherwise hard to obtain in the
economically poorest regions. Particular attention to China, India, Pakistan,
Southeast Asia, Southeast Europe, Russia, South America and Africa is
required.
Performance on existing national, metropolitan and local network
infrastructures also may be limited, due to last mile problems, political
problems, or a lack of coordination among different organizations11.
 Create and encourage inter-regional programs to solve specific regional
problems. Leading examples include the Virtual Silk Highway project
(http://www.nato.int/science/e/silk.htm) led by DESY, the support for links in
Asia by the KEK High Energy Accelerator Organization in Japan
(http://www.kek.jp), and the support of network connections for research and
11 These problems tend to be most prevalent in the poorer regions, but examples of poor performance on existing network infrastructures due to lack of coordination and policy may be found in all regions.
education in South America by the AMPATH “Pathway to the Americas”
(http://www.ampath.fiu.edu ) at Florida International University.
 Make direct contacts, and help educate government officials on the needs
and benefits to society of the development and deployment of advanced
infrastructure and applications: for research, education, industry,
commerce, and society as a whole.
 Share and systematize information on the Digital Divide. The SCIC is
gathering information on these problems and developing a Web Site on the
Digital Divide problems of research groups, universities and laboratories
throughout its worldwide community. This will be coupled to general
information on link bandwidths, quality, utilization and pricing. This will
promote understanding of the nature of the problems: from lack of backbone
bandwidth, to last mile connectivity problems, to policy and pricing issues.
Specific aspects of information sharing that will help develop a general
approach to solving the problem globally include:
o Sharing examples of how the Divide can be bridged, or has been
bridged successfully in a city, country or region. One class of
solutions is the installation of short-distance optical fibers leased or
owned by a university or laboratory, to reach the “point of presence” of
a network provider. Another is the activation of existing national or
metropolitan fiber-optic infrastructures (typically owned by electric or
gas utilities, or railroads) that have remained unused. A third class is
the resolution of technical problems involving antiquated network
equipment, or equipment-configuration, or network software settings,
etc.
o Making comparative pricing information available. Since
international network prices are falling rapidly along the major
Transatlantic and Transpacific routes, sharing this information should
help us set lower pricing targets in the economically poorer regions, by
pressuring multinational network vendors to lower their prices in the
region to bring them in line with their prices in larger markets.
o Identifying common themes in the nature of the problem, whether
technical, political or financial, and the corresponding methods of
solution.
 Create a “new culture of collaboration” in the major experiments and at
the HENP laboratories. (This is discussed further in the main SCIC report to
ICFA).
 Use (lightweight, non-disruptive) network monitoring to identify and
track problems, and keep the research community (and the world
community) informed on the evolving state of the Digital Divide. One
leading example in the HEP community is the Internet End-to-end
Performance Monitoring (IEPM) initiative (http://wwwiepm.slac.stanford.edu)
at SLAC.
It is vital that support for the IEPM activity in particular, which covers 79
countries with 80% of the world population, be continued and strengthened, so
that we can monitor and track progress in network performance in more
countries and more sites within countries, around the globe. This is as
important for the general mission of the SCIC in our community as it is for our
work on the Digital Divide.
 Work with the Internet Educational Equal Access Foundation (IEEAF)
(http://www.ieeaf.org), and other organizations that aim to arrange for
favorable network prices or outright bandwidth donations12, where
possible.
 Prepare for and take part in the World Summit on the Information
Society (WSIS; http://www.itu.int/wsis/). The WSIS will take place in
Geneva in December 2003 and Tunis in 2005. HENP has been involved in
preparatory and regional meetings in Bucharest in November 2002 and in
Tokyo in January 2003. The WSIS process aims to develop a society where
“highly-developed… networks, equitable and ubiquitous access to
information, appropriate content in accessible formats and effective
communication can help people achieve their potential…”.
These aims are clearly synergistic with the aims of our field: for worldwide
collaboration, for effective sharing of the data analysis as well as the operation
of experiments, and for the construction and operation of future accelerators as
“global facilities”.
HENP has been recognized as having relevant experience in effective methods
of initiating and promoting international collaboration, and harnessing or
developing new technologies and applications to achieve these aims. It has
been invited to run a session on The Role of New Technologies in the
Development of an Information Society13, and has been invited14 to take part
in the planning process for the Summit itself. CERN is planning a scientific
event shortly before the Geneva Summit.
This recommendation therefore concerns a call for continuing work, and
involvement by additional ICFA members in these activities to help prepare a
statement to be presented at the WSIS, and other actions that will assist the
WSIS process.
12 The IEEAF successfully arranged a bandwidth donation of a 10 Gbps research link and a 622 Mbps production service in September 2002. It is expected to announce a donation between California and the Asia Pacific region by mid-2003.
13 At the WSIS Pan-European Ministerial meeting in Bucharest in November 2002.
See http://cil.cern.ch:8080/WSIS and the US State Department site http://www.state.gov/e/eb/cip/wsis/
14 By the WSIS Preparatory Committee and the US State Department.
 Formulate or encourage bi-lateral proposals15, through appropriate
funding agency programs. Examples of programs are the US National
Science Foundation’s ITR and International programs, the European Union’s
Sixth Framework and @LIS programs, NATO’s Science for Peace program
and the FASTnet project linking Moscow to StarLight, funded by the NSF and the
Russian Ministry of Industry, Science and Technologies (MoIST).
 Help start and support workshops on networks, Grids, and the associated
advanced applications. These workshops could be associated with helping to
solve Digital Divide problems in a particular country or region, where the
workshop will be hosted. One outcome of such a workshop is to leave behind
a better network, and/or better conditions for the acquisition, development and
deployment of networks.
The SCIC is planning the first such workshop in Rio de Janeiro in February
2004.
An organizational meeting will be held in July 2003. ICFA members should
participate in these meetings and in this process.
 Help form regional support and training groups for network and Grid
system development, operations, monitoring and troubleshooting16.
V. Remaining Actions: Path to a Solution - A Worldwide CyberInfrastructure for Physics
“The US Broadband Problem” by Charles H. Ferguson17 describes the dynamics of
the economy behind fiber optic terrestrial and submarine circuits. This report, which
provides a preliminary survey of the present broadband connectivity status, points to a
clear direction for us: to develop a project proposal with the unifying theme of
developing an integrated worldwide Cyberinfrastructure18, where all HEP research
institutes are connected at standard Gigabit/sec (Gbps) or higher end-to-end speeds,
based on fiber-optic links supporting multiple wavelengths19. Without this assertive
and cohesive direction, current and near-future HEP experiments will be severely
limited in their ability to achieve their goals of global participation in the data analysis
and physics; nor will these experiments be able to make effective use of emergent
Data Grid technologies in support of these goals.
15 A recent example is a proposal to NSF from Florida International University, AMPATH, other universities in Florida, Caltech and UERJ in Brazil for a center for HEP Research, Education and Outreach, which includes partial funding for a network link between North and South America for HENP.
16 One example is the Internet2 HENP Working Group in the US. See http://www.internet2.edu/henp
17 The U.S. Broadband Problem- July 2002 – http://www.brook.edu/comm/policybriefs/pb105.htm
18 One example of such a Cyberinfrastructure is the Global Biomedical Research Exchange, currently under development
by the IEEAF (www.ieeaf.org). The SCIC has discussed, and begun work on a possible Global Research Exchange for
Physics with the IEEAF during 2002.
19 Wavelength(s) is short for wavelength division multiplexing, a type of multiplexing developed for use on optical fiber.
WDM modulates each of several data streams onto a different part of the light spectrum.
Appendix A – The Americas, with a Focus on South
and Central America, and Caribbean NRN Information
Introduction
This appendix describes the research and education networks and infrastructure in the
Americas: North America, South and Central America, the Caribbean and Mexico.
The Americas – South and Central America, the
Caribbean and Mexico
Countries of South and Central America, the Caribbean and Mexico have a large Digital Divide problem. If this problem is not fixed, the small number of countries participating in the LHC experiments will have difficulty contributing to the analysis, and will not be able to take part in physics meetings, online shifts, and so on. These countries will be condemned to a very secondary role in these experiments. There is a sad contradiction: even as digital technologies advance, the broadband problem20 is becoming a major bottleneck for the world economy. The “broadband bottleneck”, exacerbating the Digital Divide problem, will increase the cultural distance between the economically developed countries and the countries that have people scientifically prepared to contribute to the development of science, but cannot as a result of unequal access to Next Generation Infrastructure. From the AMPATH Valdivia Workshop Report21, we can see maps (Figure 3 below) and comments about the region, which show the present status of network bandwidth distribution in the countries of South America.
20 http://www.brook.edu/dybdocroot/comm/policybriefs/pb105.pdf
21 AMPATH Valdivia Workshop Report, http://ampath.fiu.edu/Valdivia_Report.pdf
Figure 3: Non-Homogeneous bandwidth distribution in South America
The challenges associated with the AMPATH service area22 are its immense size, the
total number of secondary schools, colleges and universities and the unknown number
of scientific activities that are going on. Some of the individual countries are well
advanced in their networking evolution, e.g. Chile and Brazil, while others are not so
well connected. It is the opinion of the Valdivia Group that most of the countries in
the service area have problems with their infrastructure. The Figure shows the
countries that have adequate networking infrastructure, those that are evolving to an
adequate infrastructure, and those countries that are not well connected. This issue is what is referred to in the US as the “last mile” problem, where the last mile of copper wire into the home was considered a difficulty. The last mile - and in many cases the “last thousand miles” in many Latin American countries - is a challenge that needs to be addressed. The in-country infrastructure of some Latin American countries is severely underdeveloped.
Research Networks in Latin America
22 South and Central America, Mexico and the Caribbean
Table 1 summarizes the characteristics of the research and education networks in 17 Latin American countries23. There are many low-speed last-mile circuits in the Kbps range, and numerous backbone circuits in the single digit Mbps range – clearly unable to support HENP Grid requirements. There is little to no published information about the university network infrastructures. Nevertheless, one can infer that the on-campus network infrastructures, where HENP researchers are located, are also inadequate.
Country          Organization
Argentina        RETINA
Bolivia          BOLnet
Brazil           RNP
Chile            REUNA
Colombia         RedCETCol
Costa Rica       CRNet
Cuba             RedUniv
Ecuador          FUNDACYT
El Salvador      CONACYT
Guatemala        Not known
Honduras         HONDUnet
Nicaragua        University Network
Panama           PANNET / SENACYT
Peru             CONCYTEC
Uruguay          RAU
Venezuela        REACCIUN
23 CAESAR Report, Review of Developments in Latin America, Cathrin Stöver, DANTE, June 2002
Table 1: Characteristics of Research and Education Networks in Latin American Countries (January 2003). For each country the table reports the networking organization, whether a research and education network exists, national connection speeds (ranging from tens of Kbps up to a 155 Mbps backbone in Chile and up to 622 Mbps in Brazil), external capacity, connection to US Internet2, and the number of connected sites.
There is a striking disparity between the amount of bandwidth the research and education community has to work with in each of the countries and the capacity that is present in the communications infrastructure. Table 1 describes the characteristics of the RENs for each of the countries. In several countries, such as Colombia, Cuba, Guatemala, Honduras and Nicaragua, the state of development of RENs is not known, as was recently documented in the CAESAR24 report. With the exception of Chile and Brazil, the available bandwidth to the universities in the other countries is in the lower single digit Mbps range or even lower, in the Kbps range. With the exception of the countries connected to AMPATH (Argentina, Brazil, Chile and Venezuela), external bandwidth capacity to RENs outside of the country is mostly unknown or nonexistent.
Infrastructure in Latin America
The following map shows the submarine and terrestrial optical fiber systems that
interconnect South and Central America to North America and other regions. The
infrastructure exists and we can encourage upgrades in the HEP institutions.
24 http://www.dante.net/caesar/public/documents/2002/CAE-02-039.pdf
Figure 4: Submarine and Terrestrial Optical Cable Systems connecting the Americas
There are at least nine submarine fiber-optic cable systems that connect the countries
of the CALA region with North America and other continents. Nine of these cables
land on the southeast coast of Florida:
Submarine Fiber-Optic Cable System              Total Bandwidth Capacity (Gbps)
Americas 1                                      0.560
Americas II                                     2.5
Global Crossing’s South American Crossing       1,280
Columbus II                                     0.560
Columbus III                                    2.5
Telefonica’s Emergia                            1,920
ARCOS                                           960
Maya-1                                          60
360 Americas                                    10
Table 2: Submarine Fiber-Optic Cables to CALA Region with Total Bandwidth Capacity
The total aggregate bandwidth capacity into the Latin American region is estimated at 4,236 Gbps. The total bandwidth capacity for the research and education community to access global RENs via AMPATH is 225 Mbps. Approximately 71.2 Mbps is being utilized25.
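As a simple arithmetic check of the aggregate figure quoted above, the capacities listed in Table 2 (read in Gbps) can be summed directly; the snippet below only restates the table and adds no new data.

    # Capacities from Table 2, in Gbps.
    capacities_gbps = {
        "Americas 1": 0.560,
        "Americas II": 2.5,
        "Global Crossing's South American Crossing": 1280,
        "Columbus II": 0.560,
        "Columbus III": 2.5,
        "Telefonica's Emergia": 1920,
        "ARCOS": 960,
        "Maya-1": 60,
        "360 Americas": 10,
    }
    print(f"{sum(capacities_gbps.values()):.2f} Gbps")  # ~4236 Gbps, matching the text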
Other Research Networks in the Latin American Region
The AMPATH Valdivia Group Report26 is a very important source of information about research networks in the Latin American region. AMPATH is certainly a very successful project to foster collaborative efforts in the Latin American region, where the Digital Divide is a serious problem. As we concentrate our focus on HEP institutions, our summary below covers only countries having programs in HEP or potential collaborators. We also
exclude the information about other global e-Science instrumentation and programs,
agreements and networks such as the Gemini and ALMA observatories in Chile,
NASA/INPE27-Brazil, weather, astrobiology, to name but a few. Comments below
derive in part from those found in the Valdivia Group Report, chaired by Robert
Bradford of NASA and written by a group of US research scientists and the
AMPATH staff. The Valdivia Group Report is the result of NSF Award #ANI-
25 http://www.net.fiu.edu/mrtg/ampathgsr.html
26 http://www.ampath.fiu.edu/AMPATH_Valdivia_Report.pdf . AMPATH is a project of Florida International University with support from the National Science Foundation to connect Research & Education Networks and major US e-Science projects in South and Central America, Mexico and the Caribbean to US and Global Research & Education networks.
27 http://www.inpe.br/ Instituto Nacional de Pesquisas Espaciais
0220176 sponsoring the first AMPATH International conference in Valdivia, Chile in
April, 2002.
Mexico - Corporacion Universitaria para el Desarollo del Internet (CUDI)
Internet2 in Mexico. URL: http://www.cudi.edu.mx
HEP groups in Mexico collaborate with FERMILAB and LHC experiments. ALICE
has a Mexican group of physicists with a strong collaboration with Italy. Other
Mexican groups plan to present a proposal to collaborate with CMS.
CUDI is comprised of nearly 50 member institutions.
CONACYT – a council for research and financial support in Mexico – intends to join CUDI. The network is provided by Telmex.
Network services include: Quality of Service (QoS), Multicast, IPv6, H.323, Routing, Topology, Security and a network operations center (NOC).
Advanced applications being run over the CUDI network include: distance education, digital libraries, telemedicine, astronomy, earth sciences, visualization, and robotics. The physics community in Mexico is preparing its participation in the LHC Grid effort. During the XX Mexican School of Particles and Fields (Playa del Carmen, Mexico, October 2002), a number of representative physicists from Latin America discussed the possibility of upgrading the existing links and their future collaboration with CERN and FERMILAB. A number of representatives were informally designated to be the contact persons for physics and networking purposes.
Colombia
ICFES – High Quality Research and Education in Virtual
Environments. 119 universities have Internet access. There is a high energy physics group in Colombia collaborating with Fermilab. We do not have information about their activities in Grid computing or any other project. We have to look for further information from them.
URL: http://www.icfes.gov.co/
Puerto Rico
We have no relevant information for Networks and/or Computing. They have
a proposal for a High Performance Computing facility and the creation of a Virtual
Machine Room for Puerto Rican CISE education and research.
URL: http://www.hpcf.upr.edu
Venezuela
REACCIUN – the Academic Network of Research Centers and National Universities – is the network connected to the Centro Nacional de Tecnologias de Informacion (CNTI), created in March of 2000. CNTI connects 316 institutions. Physicists and the CNTI are making efforts to upgrade their network and to create a HEP group to eventually collaborate with the LHC. (Venezuela has responded to our questionnaire.)
URL: http://www.reacciun.ve
Argentina
RETINA – REd TeleINformática Academica – is a project launched by
Asociacion Civil Ciencia Hoy in 1990. Argentina has HEP groups in experiment and phenomenology. Argentina has great potential to become a strong collaborator in HEP due to its tradition in this area. The past and present economic situation has stopped the development of the groups in high energy physics.
URL: http://www.retina.ar We believe that the physicists in Argentina will have strong participation in HEP. There is a big effort to build the AUGER observatory, which is an international high energy cosmic ray project.
Brazil
RNP – Brazilian National Research Network www.rnp.br
Networks in Brazil started a long time ago with Bitnet. They originated in Rio
and São Paulo, then were slowly implemented in all the 27 states of the country via a
National project supported by CNPq, and later by the MCT. RNP is the organization
that was created to stimulate the creation of networks in the whole country. REDE
RIO was created in Rio de Janeiro as a powerful organization, a mixed institutional
backbone sponsored by TELEMAR, the Telecommunication Company of Rio de
Janeiro. TELEMAR is part of a 155 Mbps ring around the city of Rio de Janeiro. A
considerable part of the national budget for Science and Technology in Brazil is
dedicated to the network upgrades. There are many projects of supercomputing and
networks in São Paulo and Rio de Janeiro. RNP is the center of all activities involving Brazilian networks.
The ideas of Grids and advanced networks first appeared in the High Energy
Physics community. Until recently there was no indication as to whether RNP or
REDE RIO would truly support the needs of HEP in Brazil. In 2002, however, there
has been significant progress. UERJ in Rio, working together with AMPATH and
Caltech, has developed a plan with RNP and Global Crossing (GX) to connect to RNP
with a dark fiber at Global Crossing’s nearby Point of Presence. A plan has been
developed in late 2002 and early 2003 for an STM4 (622 Mbps) link between Rio and
Miami, supported by RNP with contributions from the US National Science
Foundation, and this link is expected to start in 2003. According to this plan, UERJ
can expect to use approximately half of this international link in support of its Tier2
center and Grid developments. We also started discussions with the groups
responsible for networks in São Paulo, along with RNP and GX, to share an STM16
(2.5 Gbps) link in 2004, if that turns out to be affordable.
We now show Brazil’s best research and education network and its
inhomogeneous bandwidth distribution within the country (Figure 3). This is the
topology of the Brazilian National Network for Research and Education. It shows a
widespread presence of the Digital Divide problem. Bringing the network to all
regions of the country is a stimulus to research work, to integration within the
country, and to Brzil’s partnership with the rest of the world. A better homogeneity of
bandwidth distribution has to be pursued, in order to open opportunities to the many
research centers and universities situated far from the more developed southern
region of the country.
Figure 5: RNP2, Brazil's National Research and Education Network
The following map shows the Digital Divide in the city of Rio de Janeiro. The
same problem is found in the city of São Paulo. The other Brazilian states have a worse
situation for their links. Examining only one of the important States of Brazil, Rio de
Janeiro, we see a backbone of 155 Mbps with financial support of FAPERJ –
Financial Support Agency for Research of the State of Rio de Janeiro. UERJ, the
biggest University supported by the government of the State of Rio de Janeiro, is not
connected to this backbone.
Figure 6: Rede Rio Network in Brazil, Rio de Janeiro
As the map above shows, there is a very clear digital divide problem: the differences in last-mile connection speeds are too great, causing a significant imbalance. This has brought the HENP research programs of many institutions to a halt. UERJ has a strong group that is completely stopped by this policy. No one in this state has a greater demand for bandwidth than the UERJ HEP group, which collaborates in DZero and CMS and has substantial needs in computing and videoconferencing. The projects of the Brazilian high energy physics collaborators include participation in the LHC/CERN Grid projects.
HEPGRID
This part of the report does not come from the Valdivia Report. It
comes from Brazilian information about projects for GRID involving Networking.
The HEPGRID project was first presented in Rio in 1998 as an initiative of physicists from seven institutions, as a follow-on to the earlier CLIENT/SERVER project, a farm of PCs used to produce Monte Carlo events for DZero. This farm operated for 3 years and was very useful for DZero students, who could remotely (through the web) submit and control their jobs, choosing the number of nodes according to their needs.
The present project proposes the establishment of 5 Tier3 centers distributed among the following institutions: CBPF, UFRJ, USP, UFRGS and UERJ. The Tier3 to be set up at UERJ is designed to evolve into a Tier1 before 2005/6. The current problem for UERJ is the Last Mile connection. The Institute of Physics is now requesting an upgrade of the internal network from 10/100 Mbps to Gigabit Ethernet. It is clear that the whole network system has to evolve to rates of 2.5 and 10 Gbps to meet future needs.
There is a project already approved by FINEP – the Financial Support Agency for Research and Projects – awaiting funds to be made available. Another project has been approved by FAPERJ – the Financial Support Agency of the State of Rio de Janeiro. HEPGRID has submitted two other projects to MCT/CNPq. The main purpose of the HEPGRID project is to create an opportunity for Brazilian groups to continue to work on the next generation of LHC experiments.
Coordinator: A. Santoro - santoro@uerj.br
Chile
REUNA2 – Connection from Chile to Internet2 via AMPATH
The REUNA National Backbone will be partially upgraded to 2.5 Gbps in 2003 and fully upgraded in 2004. As far as we know, there are no high energy physicists involved in LHC experiments in the country.
Other Latin American countries
During the X Mexican School of Particles and Fields in November 2002 there were discussions of problems related to networking in Latin America. Many Latin American physicists were not informed about the initiatives of AMPATH and the effort of this project to upgrade the networks in Latin America. A group of physicists is organizing Master's courses in physics in Central America and is discussing the possibility of creating a HEP group in the region. This initiative includes Puerto Rico, Guatemala, Honduras, El Salvador and Nicaragua.
Links to Regional High Energy Physics Centers and Networking Information
South and Central America
 Universidad de Panamá (in Panama City)
 Universidad de Costa Rica (in San Jose, Costa Rica)
 Universidad Nacional de Costa Rica (in San Jose, Costa Rica)
 Universidad Autónoma de Nicaragua, Managua (in Managua, Nicaragua)
 Universidad Nacional Autónoma de Honduras (in Tegucigalpa)
 Universidad de El Salvador (in San Salvador, El Salvador)
 Universidad del Valle de Guatemala (in Guatemala City)
 Universidad de San Carlos de Guatemala (in Guatemala City)
The Americas – North America
For North America, this appendix will describe the major research and education
networks in the United States and Canada. The research networks in North America
typically follow a hierarchical network architecture: a national backbone, regional
aggregation at core nodes, sub-regional aggregation typically at the State level, and
local metropolitan aggregation from Universities to the local PoP (Last Mile). In
North America, most, if not all, universities are connected at OC3c (155 Mbps)
speeds to a regional aggregation point. The present trend is that universities are
acquiring dark fiber for their last-mile connection.
United States
The United States has well-established research and education networks. Several research networks in the US are funded and operated by federal agencies (DREN, ESnet, NISN, NREN, the vBNS) or by the academic community (Abilene).
development of advanced Internet applications that utilize protocols that are not
available on the commercial Internet, such as QoS, IPv6 and multicast, as well as
provide testbeds for testing new protocols. This section highlights two of it’s most
successful networks: UCAID’s Abilene28 network and the Department of Energy’s
research network, ESnet29.
28 http://abilene.internet2.edu/
29 http://www.es.net/
Figure 7: The Abilene Network
Figure 8: ESnet Backbone
Canada
Canada's CA*net network is an advanced optical platform for research and education
networks, interconnecting universities, schools and research organizations. Canada was
one of the early adopters of dark fiber and a promoter of the concept of customer-empowered
networks. In Canada, there is a growing trend for many universities, schools,
and large businesses/institutions to acquire their own dark fiber as part of a condominium
or municipal fiber build. As a result, research institutions in Canada have access to a
network infrastructure that can support very high-speed, high-bandwidth connectivity
across the country. Canada is very well connected to the US through Washington State,
Chicago at StarLight, and New York City.
Figure 9: Canada's CA*net 4 (CANARIE architecture: GigaPOPs and CANARIE optical switches linked by ORAN and carrier DWDM, with CA*net 4 nodes from Victoria, Vancouver, Edmonton, Calgary, Saskatoon, Regina and Winnipeg through Thunder Bay, Sudbury, Windsor, Toronto, Ottawa, Montreal and Quebec to Fredericton, Charlottetown, Halifax and St. John's, and cross-border connections at Seattle, Chicago and New York)
Appendix B – Questionnaire / Responses
In order to gather new information from institutions around the globe participating in
HEP experiments, and to elicit their comments and suggestions for possible solutions to
network-related problems, we designed a minimal questionnaire that was distributed to
the leaders of the HENP laboratories and the major experiments, as well as to individual
physics groups in more than 35 countries.
The questionnaire was:
1. Your Name, Institution, Country, E-Mail. What are your areas of responsibility
related to networks, computing or data analysis?
General Questions on Your Network Situation, Plans, Problems and Solutions
2. Where are the bandwidth bottlenecks in the path between your university
or laboratory and the laboratory (or laboratories) where your experiments
are sited (please explain):
• In the local area network at my institution
• In the “last mile” or local loop
• The city or state or regional network (specify which)
• The national network
• International connections
• Is your LAN connected to the WAN by use of a firewall? If yes, indicate
the possible maximum throughput.
• Other
3. For each of the above elements in the network path, please send information about
the present situation, including the bandwidth, sponsoring organization (is it
academic or commercial or both), who pays for the network, etc. If you are using
a shared network infrastructure, mention how much is used, or allowed to be used
by HEP, and whether it is sufficient.
4. Describe the upgrade plans in terms of bandwidth, technology and schedule,
where known, for the:
• Local network (LAN) infrastructure
• Regional networks
• National backbones
• Interconnection between the LAN and the outside.
5. Describe your network needs as they relate to the current and/or planned physics
programs in which you are involved. Compare the needs to the current and
planned networks. Describe any particular needs, apart from sufficient bandwidth,
that you consider relevant.
6. Please describe any particular network-related problems you consider most
relevant.
[Examples: the city/regional/national/international network hierarchy;
high prices; restrictive policies; lack of network service;
technical problems, etc.]
7. Please present your ideas for solving or improving any of the specific problems
you face. Where possible, describe how these approaches to solutions are applied
in other cases.
8. Which organizations or committees (other than the ICFA SCIC) are working on
characterizing and solving the network problems that affect you? Do you work
on any of these committees?
Some Specific Questions About Your Current Computing and Networking Situation
1. Describe the Computing Facility you currently have in your Institution. (If it is a
shared facility, specify the resources dedicated to HEP.)
a. Do you have a Mainframe?
b. Clusters? (How many and what type of CPUs? Storage?
Interconnection Technology?)
c. How are the CPUs connected to the local area network?
2. Describe your local area network. Specify which technologies (e.g. 10 Mbps
Ethernet, 100 Mbps, Gigabit Ethernet, ATM, Gigabit trunking), the configuration
(shared or switched), and physical media used (optical fiber, copper links, radio links,
other).
3. How is your local network connected to the wide area network (WAN)?
Does HEP have any special links to the WAN that are separate from the general
gateway to the WAN at your institution?
4. How many hops (number of internet routers) are there between the local and remote
machines, over the network paths that are most important to your physics programs?
(Provide one or more traceroutes and point to any problem hops if known).
5. Does HEP have to pay for its connection to the WAN, or is it covered in the general
expenditures of your university or laboratory? If there is a specific payment, is it for
unlimited use or for a specified bandwidth?
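Question 4 above asks respondents to report hop counts and to provide traceroutes. As a minimal sketch of how such numbers could be gathered, assuming a Unix-like host with the ordinary traceroute utility installed (the destination hosts below are examples only, not endorsed measurement targets):

# Count router hops to a remote site by parsing standard traceroute output.
# Assumes a Unix-like system with the ordinary `traceroute` utility on the PATH;
# the destination hosts are placeholder examples.
import subprocess

def hop_count(host, max_hops=30):
    """Run traceroute and return the number of hops that were probed."""
    out = subprocess.run(
        ["traceroute", "-m", str(max_hops), host],
        capture_output=True, text=True, check=False,
    ).stdout
    # traceroute prints one numbered line per hop after its header line
    hops = [line for line in out.splitlines()
            if line.strip()[:2].strip().isdigit()]
    return len(hops)

if __name__ == "__main__":
    for site in ["cern.ch", "fnal.gov"]:  # example destinations
        print(site, hop_count(site), "hops")

Saving the full traceroute output alongside the hop count also lets the respondent point out the specific problem hops, as the question requests.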
The responses to these questions by each country/institute/existing local network
will serve as the basis for our further discussions, and will help us greatly in finding the best
practical alternatives.
A summary of the responses received follows. We expect to receive more responses, in
order to increase our database for this subject.
Table 3: Respondents
Name
Institution
e-mail
Maria Teresa Dova
Univ. Nacional de La Plata
Argentina
dova@fisica.unlp.edu.ar
Stefaan Tavernier
AND Rosette
Vandenbroucke
Ghislain Gregoire
HHE(VUB-ULB),
Belgium
Tavernier@hep.iihe.ac.be
vandenbroucke@helios.iihe.ac.be
Ghislain.Gregoire@cern.ch
AND Alain Ninane
Philippe Herquet
AND
Joseph Hanton
Alexandre
Sztajnberg
Eduardo Gregores
Panos Razis
Thomas Muller
AND Guenter Quast
P. V. Deshpande
Lorne Levinson
Paolo Capiluppi,
AND
Paolo Mazzanti
Pierluigi Paolucci,
Heriberto Castilla
Valdez
University of Louvain – Belgium
University of Mons-Hainaut Service PPE –
Belgium
UERJ - Rio de Janeiro State
University,
Brazil
IFT-UNESP –
Brazil
University of Cyprus
Cyprus
Alain.Ninane@slashdev.be
Philippe.Herquet@umh.ac.be
Joseph.Hanton@umh.ac.be
alexszt@uerj.br
razis@ucy.ac.cy
HEP Physicist
mullerth@ekp.physik.uni-karlsruhe.de
Gunter.Quast@cern.ch
Weizmann Institute of Science
Israel
Dept. of Physics and INFN,
Bologna
Italy.
Lorne.Levinson@weizmann.ac.il
Paolo.Capiluppi@bo.infn.it
pvd@tifr.res.in
Group Leader –HEP
AND
Contact Person
Network Administrator
HEP responsible
HEP - Physicist
Paolo.Mazzanti@bo.infn.it
Responsible for
Computing
pierluigi.paolucci@na.infn.it
HEP - Physicist
castilla@fnal.gov
QAU - National Centre for Physics
at Quaid-i-Azam University, Islamabad
Pakistan
Hafeez.Hoorani@cern.ch
Serge D. Belov
Budker Institute of Nuclear
Physics - Novosibirsk –
Russia
Mathematics and Computing
Division –IHEP-Protvino –
Moscow Region
Russia
belov@inp.nsk.su
Petukhov@mx.ihep.su
HEP Physicist
AND
Technical Contact
Contact Person
AND
Technical Contact
Professor Physics
AND
Technical contact
Network support staff
HEP Physicist
Hafeez Hoorani
Vadim Petukhov
HEP physicist
gregores@ift.unesp.br
Institut fuer Experimentelle
Kernphysik – University of
Karlsruhe
Germany
Tata Institute – Bombay –
India
INFN Napoli
Italy
Cinvestav-IPN,
Mexico
Notes
HEP physicist
Professor of Physics.
Group working for CMS
HEP Physicist
Head of Math.and
Computing Division
Table 3: Respondents - Continuation (page 2)
ITEP – Moscow –
Russia
Institution
Victor.Kolosov@itep.ru
Victor Abramovsky
Novgorod State University,
Russia
ava@novsu.ac.ru
Area of responsibility is
data analysis
Yuri Ryabov
Technology Division in S.
Petersburg Nuclear Physics Institute
Russian Academy of Science (PNPI
RAS)
Russia
Institute of Experimental Physics,
Slovak Academy of Sciences,
Kosice
SLOVAKIA
Jozef Stefan Institute,
Slovenia
Instituto de Física de Cantabria –
University of Cantabria –
Spain
Inst. Of Particle Phys. –ETH-Zurich
Switzerland
University of Cukurova, Adana
University of Cukurova, Adana,
Turkey
Middle East Technical University
Turkey
University of California – Santa
Barbara CA
USA
University of California, San Diego
(UCSD)
USA
Physics Department University of
Illinois at Chicago
USA
VINCA Institute of Nuclear
Sciences – Belgrade –
http://www.rokson.nw.ru
ryabov@pnpi.nw.ru
Yuri.Ryabov@cern.ch
Chief of Information Network and computing
,connectivity
Responsibility
Victor Kolosov
Name
Anton Jusko
Andrej Filipcic
Teresa Rodrigo
AND Jesus Marco,
Angel Camacho
Christoph Grab
Gulsen Onengut
Isa Dumanoglu,
Ramazan Sever,
Lap Leung
James Branson
Ian Fisk
Mark Adams
Michael Klawitter
Petar Adzic
AND
Predrag Milenovic
AND
e-mail
Dragoslav Jovanović Faculty of Physics, Belgrade
Yugoslavia
2.
Luis A. Núñez
Centro Nacional de Cálculo Científico
Universidad de Los Andes
(CeCalCULA)
Anamaria Font
Departamento de Fisica, Facultad de
Ciencias, Universidad Central de
Venezuela (UCV) Venezuela
HEP Physicist
Notes
jusko@saske.sk
Head of User Support
Group for Computing
andrej.filipcic@ijs.si
network administrator
Rodrigo@ifca.unican.es
grab@phys.ethz.ch
HEP-Physicist
AND
Contact Person
On behalf of IPP ETHZ
in CMS
onengut@cu.edu.tr
dumanogl@cu.edu.tr
sever@metu.edu.tr
Ramazan.Sever@cern.ch
Lap@hep.ucsb.edu
Computer System
Manager
branson@ucsd.edu
HEP-Physicist
fisk@ucsd.edu
adams@uic.edu
Technical Contact
HEP-Physicist
engineer@uic.edu
Petar.Adzic@cern.ch
AND
Predrag.milenovic@cern.ch
AND
Technical Contact
HEP Yu. Coordinator
AND
Responsible for HEP
Networks (Lab 010 and
CERN) on behalf of The
CMS Belgrade Group
dragan@ff.bg.ac.yu
AND
Faculty of Physics LAN
Administrator
http://www.cecalc.ula.ve High Performance &
nunez@ula.ve
Scientific Computing
Particle Theoretician
afont@fisica.ciens.ucv.ve
Table 4: Connection Status
Institution
Internal (1)
UNLP,
10 Mbps, shared,
copper links
Bottleneck
Regional / National (2) International (3)
HHE(VUB- The LAN will be
ULB),
connected to the
Belgium WAN via a
firewall with a
max. throughput
of 100 Mbps
University
There is a firewall
of Louvain – at the entrance of
the lab which
could do filtering
Belgium
at 100 Mbit/s (at
least).
University
100Mbit,
of MonsUniversity internal
Hainaut –
to router : 1 Gbit ,
Service PPE University router
to international
Belgium
UERJ,
10/100 Mbps
(mostly 10)
Brazil
IFT-UNESP LAN to WAN 10
Mbps and 100
Brazil
Mbps
University
10 Mbps
of Cyprus - connection
between the
Cyprus
Comp. Center and
building
-
Private carriers
have
considerable
resources for
those who can
pay for it
-
-
-
National provider : 2.5
Gbit ,
-
NO firewall
10 Mbps ATM
connection
4 Mbps to Regional
Backbone
45 Mbps via
IMPSAT
-
RedeRIO, our regional network is
supported by the State Government
-
Daily traffic
Slow traffic by
GEANT some
times
The Main Bottleneck is Internal
and/or Last Mile Connection.
TATA
Institute –
India
Weizmann
Inst. of Sci
Israel
Dept. of
Physics and
INFN,
Bologna
Italy
2 Mbps link to
Regional
Backbone
35 Mbps
155 Mbps
155
Mbps/GEANT
Firewall without limitation for
Bandwidths
-
-
Our LANs are connected via a
firewall, indeed only a packet filter
on the routers and therefore the
maximum throughput is at 34 Mbps.
Argentina
100 Mbps
No information
The current
bottleneck is the
“local loop” at 34
Mbps.
Copper link to UNLP
Computing center, then
optical link to Buenos
Aires
Notes
UNLP pays for its link
There are no bottlenecks in the
network connections.
Table 4: Connection Status- (Continuation)
Institution
INFN Napoli
Italy
Cinvestav,
Mexico
QAU,
Pakistan
Internal (1)
1Gbps LAN
with 100 Mbps
connection to
local hosts.
100 Mbps
ethernet
switches
Bottleneck
Regional / National (2)
INFN – Napoli to
GARR link at 64Mbps is
the slowest line along
the path to CERN
GARR, at 155 Mbps
International (3)
GARR to CERN
link, at 1 Gbps
Sponsored under
international
agreements
2 x 2 Mbps, shared
Not mentioned
128 kbps or dial-up
Actual connection and future
upgrades supported by Ministry of
Science and Technology-100 users
Internal Network.
BINP->KEK 0.5 Mbps currently; the most probable bottleneck. Upgrade cost prohibitive.
Paid by KEK, SLAC and BINP. RBNet is the National Networking Infrastructure for Scientific and Educational institutions. It is not clear what bandwidth could be made available for HEP activities.
No Bottleneck
155 Mbps to US
& Europe
Budker Inst.
of Nuc. Phys.
– Novosibirsk
Russia
No Information
All Siberian networks are estimated at 10-12 Mbps, including all kinds of access to European and US networks.
ITEP –
Moscow –
Russia
No Bottleneck No Bottleneck
LAN-100-1000 100-1000 Mbps
Mbps
Regional
2-1000 Mbps National
Local: 100
Poor connection
Mbps LAN to
between Protvino and
WAN: 6 Mbps Moscow
by a
CISCO7500
router
Possibility to go to 1 Gbps. Here is the main bottleneck.
And outside of the University: max of 30 Kbps
Mathematics
and
Computing
Division –
IHE-Protvino
– Moscow
Region
Russia
Novgorod
State
University,
Russia
Tech. Div. S. 10-100 Mbps 256 Kbps
Petersburg
Ethernet
Nuc. Phys.
Inst. Russian
Academy of
Science (PNPI
RAS)
Russia
Notes
Sponsored by INFN-Napoli and
University of Naples – Physics
Dept.
Sponsored by MIUR (Minister for
Research and University)
Cinvestav pays for the link
Low throughput is Fast Ethernet
devoted for Int.
connections for
the scientific and
Federal
Organizations
The LAN is connected to the WAN using a firewall, with a max of 2 Mbps.
No clear information. Collaborates with ATLAS, CMS, LHCb and ALICE.
Table 4: Connection Status- (Continuation)
Institution
Institute of
Experimental
Physics,
Slovak
Academy of
Sciences,
Kosice
Slovakia
JSI,
Slovenia
Inst. de Fís.de
Cantabria –
Univ. of
Cantabria
Spain
Inst. Of
Particle Phys.
–ETH-Zurich
Switzerland
University of
Cukurova,
Adana
University of
Cukurova,
Adana
Turkey
Internal (1)
HEP
community
subnet is
connected to
this LAN
through firewall
(2x100Mb
ethernet cards).
Bottleneck
Regional / National (2)
The national network
(SANET2) is 1Gb/s.
Notes
International (3)
International
connections (see
www.sanet.sk):
GEANT 100Mb/s
GTS 100Mb/s
ACOnet 1Gb/s
Possible bottlenecks between my
institution and CERN. Our LAN
connection is now 10Mbit/sec,
upgrade is planned this year to
100Mbit/s. (see current traffic
statistics
http://stats.tuke.sk/mrtg/kosice/TUKosice-gw.tuke.sk/tu-kosicegw.tuke.sk_3.html )
Fast ethernet,
switched, high
load, no QoS
-
1 Gbps, switched, shared 622 Mbps, shared,
w/ ~1000 PCs
not limited but
not sufficient
Bandwidth: 155 Mbps (expected in
few months) – free access, not
defined for HEP
-Sponsor: RedIris(academic)
100 Mbit/s
within
buildings
1 Gbit/s for the
local loop
(within ETH)
~ 1 Gbit/s for connection
to the regional network ,
i.e. to SWITCH
The national network is
operated by SWITCH
(Swiss Education and
research network: see
www.switch.ch/network
), and provides now up
to 10 Gbit/s connectivity
between Zuerich and
Geneva/CERN.
Our main
problem is low
bandwidth and
there are no
bottlenecks till
the Turkish
Academic
Network
(UlakNet).
We have
connections to
GEANT/Internet2
from the CERN /
Geneva national
peering CIXP
point at 2.5
Gbit/s, and a
further 622 Mbit/s
to GBLX. See http://wwwcs.cern.ch/public/services/cixp/
In addition there
is direct peering
from ZH to the
public Internet
exchange point
ZH, which
provides about
200 Mbit/s.
Part of our physicists are located
within CERN, so they suffer only
from the conditions at CERN itself.
The real bottlenecks between the
desktop in Zuerich and the
Experiment at CERN is at present
the local network within CERN
The Turkish Acad. Net. (UlakNet),
provide access to all Univ. and
research organizations in Turkey.
More than 120 connections for a
total bandwidth of 75 Mbps, to
Internet. This access uses ATM
backbone installed between three
(PoP) in Ankara, Istanbul & İzmir.
The bandwidth of UlakNet, to
Internet, is 10/4 Mbps. Will be
increased to over 34 Mbps in a short
while
Table 4: Connection Status- (Continuation)
Institution
Middle East
Technical
University
Turkey
University of
California –
Santa Barbara
CA USA
University of
California,
San Diego
(UCSD)
USA
Bottleneck
Internal (1)
Regional / National (2)
Local: 10 Mbps The “last mile” or local
loop: 2 Mb
The regional network
Academic network
No Info
No Info.
From our local
area network at
our
site. .
Physics
Department
University of
Illinois at
Chicago
USA
The bandwidth
limitation for
HEP is our
local
connection to
the backbone
Centro Nac.
de Cálculo
Científico
Universidad
de Los Andes
(CeCalCULA)
The main
bottleneck is at
the institution
network. Most
of them are
poorly
administered. It
can be checked
through
http://www.cec
alc.ula.ve/estadi
sticas/conexion/
Bandwidth
bottlenecks
occur in the local
loop, the
national &
international
networks.
No
Departamento
de Fisica,
Facultad de
Ciencias,
Universidad
Central de
Venezuela
(UCV)
Venezuela
Vinca – Inst.
For Nuc, Sc.
Yugoslavia
Notes
International (3)
: International
connections: 8
Gb?
No Firewall
No Info
No Info
The network between
our computing cluster
and the wide area
network is currently
restricted to 100 Base-T,
which is slower than
most of the wide area
links we use
Recently installed
100Base-T Ethernet
connections in our
offices and have access
to a Gigabit switch, that
is optically connected to
the backbone.
A secondary
bottleneck is
congestion on the
relatively high
speed national
and
intercontinental
WAN links.
Our primary bottleneck between
UCSD and the experiments we are
working on, currently Babar and
CMS
. Typically each
institution has 1 to 4
Mbps, and the whole
academic network has 15
to 20 Mbps of
international link for 15
institutions.
The Universidad
de Los Andes has
4 Mbps and
CeCalCULA has
1 Mbps.
UIC has a very good connection to
national backbones. Available
connectivity for HEP upgrades in
the expanding STARLIGHT layout
The academic network is (I should say was) a cooperative governmental organization paid by each institution. Now, because the objectives of this organization have changed, each institution is looking for its own connectivity solution.
Our LAN does not have a firewall.
128Kbps Vinca LAN
Yu. Academic Network
2 x 2 Mbps
Internal Network is Gbps
(1) Internal problems. Obsolete technology, router problems, networking equipment,
intercampus connection
(2) Regional and National problems. Last / first mile problem. Last 1000 miles problem.
Network /POP Hierarchy connection / bandwidth problems
(3) International connection / bandwidth
Table 5: Connection Details
Institution
UNLP,
Argentina
HHE(VUB-ULB),
Belgium
University of
Louvain –
Belgium
University of
Mons-Hainaut
- Service PPE
Belgium
UERJ,
Brazil
IFT-UNESP
Brazil
University of
Cyprus –
CYPRUS
Institut fuer
Experimentell
e Kernphysik
– University
of Karlsruhe –
Germany
TATA,
India
Weizmann
Institute of
Science –
Israel
Providers
Firewall
Regional / National (2)
Commercial optical fiber
International (3)
Not mentioned
(1) LAN: own infrastructure of
the IIHE(VUB-ULB)
(2) Connection to the University
network: 100 Mbps
(3) Connection of the University
network to the National Research
Network: will go up to 2.5 Gbps
(national) – connection and use
sponsored by government
(1) Connection to Géant at 2.5
Gbps (international research)
(2) Connection to Teleglobe at
155 Mbps and to Internet2 at 155
Mbps (international research)
(3) Connection to Opentransit at
622 Mbps (international
commercial) – use paid by
university
Yes
1. Laboratory LAN 100 Mbit/s.
2. Link to the university LAN at
10 Mbit/s
3. University LAN 100 Mbit/s
between routers
4. Link to BELNET at 155
Mbit/s - managed by BELNET.
-
5. Link with GEANT - unknown
speed - managed by GEANT
Yes
RedeRio http://www.rederio.br
RedeRio to IMPSAT
4 Mbps via FAPESP
155 Mbps via TERREMARK
34 Mbps via CYNET
GEANT
-
Networking is funded by the
university; link to the FZK shared
with other Faculties, but supposed
to be sufficient
All our systems are
safeguarded by firewalls (the
GridKa centre, the
university, our work station
cluster in the institute.
GEANT 155 Mbps
Yes but without limitation for
bandwidths
2 Mbps via National Carrier
VSNL
Israel Inter-University
Computing Center
No firewall
For all these links, access is free
(no costs) for the research
community
-
No
CISCO ACLs. No BW
limitations.
NAT and Firewall 10 Mbps
maximum
Table 5: Connection details (Continuation)
Institution
Providers
Regional / National (2)
International (3)
Dept. of Physics and INFN, Bologna, Italy
• LANs are shared between the INFN Section and the Physics Department; HEP can use all resources. There are areas on the LAN where the bandwidth is insufficient. Costs are shared.
• WAN connection is dedicated to HEP (the Physics Dept. has its own WAN connection) and is paid by INFN via the National Academic & Research Network GARR.
• Regional, national and international connections are managed by GARR, and INFN pays for the sharing of bandwidth in proportion to the local connections (34 Mbps for Bologna INFN). Current national backbone capacity is more than 2.5 Gbps, as well as the international (EU and US) connections.
INFN Napoli, Italy
Local: No upgrade planned in the short term. Regional networks: No upgrade planned in the short term. National backbones: Don't know. International connections: Don't know.
Firewall
Yes
No
Cinvestav,
Mexico
QAU,
Pakistan
Not mentioned
Not mentioned
Not mentioned
ITEP Moscow–
Russia
Math. & Comp.
Div. HEProtvino
Moscow
Region
Russia
Novgorod State
University,
Russia
Budker Insitute
.of Nuclear
Physics –
Russia
No Information
ISP-Most via dial-up at 54 kbps, $0.5/hour. Need more bandwidth; problems with funds.
No Information
No information
Tech. Div. S.
Petersburg Nuc.
Phys. Inst.
Russian
Academy of
Science (PNPI
RAS)
Russia
Fast Ethernet
Radio-MSU NET owned by
IHEP
Ministry of Science and
Technology
All elements are paid for by the University
National network, which is
provided by 34M trunk
operated by Transtelecom.
CANET (city network),
SANET2 (national network). I
am in touch with CANET, but I
don't work in any SANET2committee.
-
No -
Yes
Yes
Connection to the National Network is a 34M link Novosibirsk-Moscow, shared by numerous organizations of the Siberian Branch of the Russian Academy of Science and universities; 60-75% used, saturation expected soon.
-
External channel is covered
by institution’s budget and
RAS foundation for
telecommunication
development
Table 5: Connection details (Continuation)
Institution
Providers
Firewall
Regional / National (2)
International (3)
Institute of
SANET2 is academic network, International connectivity
YES
Experimental
paid by government. For now,
seems to be sufficient (see
Physics, Slovak the HEP community shares the http://mrtg.cvt.stuba.sk/index.
Academy of
network with academic and
html - Our traffic is routed
Sciences,
governmental institutions, there through the GEANT
Kosice
is no special agreement about
connection).
the bandwidth for HEP
SLOVAKIA
community.
JSI,
Not mentioned
Not mentioned
Yes, 1Gbps max throughput
Slovenia
Instituto de
In steps up to 2.5 Gbps .
Física de
Schedule unknown
Cantabria –
University of
Cantabria
Spain
University of
Turkish Academic Network
(Regional) + London Level 3 Yes
Cukurova,
(UlakNet):
(Satellite Connection)
Adana
Local network has single mode
fiber optics connection through
University of
10Mbps/100Mbps and
Cukurova,
155Mbps ATMs. 2 Mbps
Adana,
ULAKNET (National) + 1Mbps
Adana-AnkaraERE
Turkey
Middle
East Local network (LAN)
Interconnection between the Yes
Technical
infrastructure: To 100 Mb in
LAN and the outside. To 140
University
near feature
Gb
Regional networks: To 4 Mb
Turkey
National backbones
University of California – Santa Barbara CA USA
Providers: LAN not connected to WAN; we may upgrade part of the LAN to gigabit soon.
Regional / National: UCSB HEP has unlimited usage of the academic/research network funded by both state and federal government. We have sufficient bandwidth now.
Firewall: NO
University of California, San Diego (UCSD) USA
Providers: The local area network is primarily Fast Ethernet supported by the University of California. The “last mile” is Gigabit Ethernet over fiber, also supported by the UC. This is shared with the rest of the University, but usage is usually less than 30% and is sufficient for our current needs. Our LAN is not connected through a firewall and we have been able to achieve the full performance of the available links.
Regional / National: For our connections to CMS and Babar we are routed through the California-supported state network called Calren2, which is primarily 622 Mbps ATM. This network is shared between all the California universities and is often very congested. It is difficult, even with network tuning and multiple streams, to achieve better than 250 Mbps.
International: We use two national networks for connections out of state, ESnet and Abilene, both supported by the NSF. ESnet is currently 155 Mbps ATM and heavily used between Fermilab and UCSD. We have been able to successfully achieve 100 Mbps for long periods of time. The Abilene network is shared by a large number of universities and is frequently extremely congested; even late at night 70% usage is not uncommon. The Abilene network connects UCSD with Chicago and the trans-Atlantic link to CERN.
Firewall: NO
Table 5: Connection details (Continuation)
Institution
Physics
Department
University of
Illinois at
Chicago
USA
Centro Nac. de
Cálc. Científico
Universidad de
Los Andes
(CeCalCULA)
Providers
Regional / National (2)
International (3)
All networks are supported by UIC Our connections are sufficient No Info
(academic). HEP independently
for our current usage to
paid for and installed cable to
FNAL. We have local linux
connect our PCs to a local Gigabit boxes and windows pcs at
switch via 100Base-T.
desks (total around 12). We
do not have a large analysis
cluster at UIC. We do not use
the UIC mainframe.
We are planning to have 1 - 2
Universidad de Los Andes
No
Mbps link between Caracas and has 4 Mbps and CeCalCULA
Merida. This link will be used
has 1 Mbps. We are looking
to have IP telephony and direct to have 8 Mbps next year
logging from this institute and
our center.
LAN : 100 MBps (sponsored by
UCV) National Network: 512 +
E1 (sponsored by Reacciun which
is part of the Ministry of Science
and Technology)
Dep. de Fisica,
Fac. de
Ciencias, Univ.
Central de
Venezuela(UC
V)
Venezuela
Vinca – Inst. of Vinca LAN(RCUB) Yu.
Nuc. ScienceAcad.Network
Yugoslavia
Beotel (Mbps) link + GRnetGeant (2Mbps)
Firewall
No Info.
Table 6: Other networking needs
Institution
UNLP,
Argentina
HHE(VUB,ULB),
Belgium
Computing / networking needs related to HEP
a) LAN upgrade to 100 Mbps
b) LAN-to-WAN upgrade to 4 Mbps
Network needs do not seem to be a problem for the
participation in LHC (CMS). The WAN speed is Ok and the
LAN access to data servers can easily be upgraded.
University of Louvain – Belgium
1. Network needs are fulfilled for our current activities but will be very insufficient for our future tasks: Grids, CMS analysis.
2. Apart from bandwidth, we need reliable 24x7x365 availability.
3. We have been suffering network breakdowns (hours, days) without help or a technical desk available.
University of MonsHainaut – Service
PPE Belgium
UERJ,
HEPGRID PROJECT presented for financial support to work
on CMS
Brazil
IFT-UNESP
Brazil
University of
Cyprus CYPRUS
Will maintain a farm for Monte Carlo studies and a Tier 3 Grid
node
The HEP group intends to take on responsibilities in Monte Carlo production and build a Grid unit of type T2 or T3. Need to upgrade the network to Gigabit. In principle there are no limits on use of the network, but the daily traffic is the real limitation.
Institut fuer
Experimentelle
Kernphysik –
University of
Karlsruhe –
Germany
TATA,
India
Weizmann Institute
of Science –
Israel
Dept. of Physics and
INFN, Bologna
Italy
The most important upgrade is the increase in bandwidth to the
FZK from 155Mbit to 1 Gbit/sec.
Needs: good connection to the German Tier1 must always be
ensured; good connections of the German Tier1 to the other
LHC Tier1 centers and to the Tevatron are important
Other
-
-
-
Waiting for delivery of the approved budget to build a T3 and, later (2005/6), a T1
The Bandwidth of 34 Mbps is
sponsored by Cyprus
Telecommunications Agency
via a research Program and
GEANT. The University
pays for the Network
-
Will have a Tier 3 Grid node
-
No needs for now. Don't know how the present network will behave when they need to transfer more data
-
• LAN upgrades are already occurring, moving to a LAN “backbone” of Gbit Ethernet and bringing Gbit to key farms and servers.
• Regional and National networks are upgrading to GARR-G
(Italian Gbit academic-research infrastructure)
• Interconnection between the LANs and the outside is going to
be upgraded to 100 Mbps and we’ll have an option to “grow”
up to Gbit.
Table 6: Other networking needs (Continuation)
Institution
Computing / networking needs related to HEP
INFN Napoli
Italy
Cinvestav,
Mexico
QAU,
Pakistan
Budker Insitute .of
Nuclear Physics –
Russia
ITEP – Moscow –
Russia
Mathematics and
Computing Division –
IHE-Protvino –
Moscow Region
Russia
Novgorod State
University,
Russia
Institute of
Experimental Physics,
Slovak Academy of
Sciences, Kosice
SLOVAKIA
No info.
Other
No Info.
Dedicated 2 Mbps link for HEP group
-
In terms of hardware, they declared that they have what they really need. In terms of bandwidth, an upgrade is needed, but there is no last mile connection problem.
Upgrade everything. Resources are scarce and the situation is the same for other communities; it is a very difficult situation for upgrades or for getting a dedicated network.
Routing equipment for LAN, and to increase the capacity of
external connections.
Local area network: our plan is to reach 1 Gbps within the next 3 years. The link to Moscow will grow to 30 Mbps by next year and up to 600 Mbps by 2007-2008 (optical link).
-
First half of 2003 increase existing external channel throughput
to no less than 150 Mbps. Reconstruct local area network to
Gigabit Ethernet.
Upgrade schedule:
-LAN: 10Mb and 100Mbit (majority of computers are
connected with 100Mb)
-LAN-City network connection: 10Mb, upgrade to 100Mb this
year, upgrade to 1Gb planned in 2003
-National backbone: 1Gb, I don't think it will be more in 2003. International connection: 1Gb + 2x100Mb (see above); I don't know what is planned for the near future.
Additional bandwidth should be reserved for HEP
SB RAS with the funds
reserved by the Ministry of
Science and Technology
Connection to WAN: 600
Mbps to be supported by
the budget for Institute
Network-related problems
(i.e. what I expect to be the
bottleneck in next years):
manpower, bandwidth
Other: the lack of raw CPU
power and disk space
JSI,
Slovenia
Instituto de Física de
CMS/CERN – MC production center
Cantabria –University CDF/FERMILAB – Data analysis
of Cantabria
GRID Computing
Spain
Inst. Of Particle Phys. – ETH-Zurich, Switzerland
As a member of the LCG-GDB and in the process of setting up a national Tier-2 regional centre at SCSC in Manno (www.scsc.ch); as such, Switzerland is participating in the LCG phase-1 data challenges for the various experiments, and we plan to use our local RC for that purpose. For this, the required networking infrastructure between the RC (in Manno), the experiment at CERN (CMS) and the home institute (ETHZ) has to be of order 1 GByte/s in 2003, with a gradual increase to reach about 10 GByte/s by end 2004. This bandwidth is anticipated to be available between Zuerich and CERN, but it may be a bit more difficult to obtain that bandwidth between Zuerich and Manno. By the time of LHC startup (2007) we'll need a bandwidth well beyond 10 GByte/s. The present planning of the Swiss national network is expected to roughly match our needs, with the possible exception of the above-mentioned ETH Zurich - Manno connection.
Table 6 : Other networking needs (Continuation)
University of
Cukurova, Adana
University of
Cukurova, Adana,
Turkey
Middle East Technical
University
Turkey
Sponsoring organizations are ULAKBIM and the University of Çukurova (both academic). There is no special allocation for HEP. We have one Sun Sparc station and 3 Intel Pentium PCs. Our infrastructure is not sufficient.
University of
California – Santa
Barbara CA
USA
University of
California, San Diego
(UCSD)
USA
We have sufficient bandwidth for now.
Restructuring local area network is necessary. Switch based
infrastructure is also necessary.
Present bandwidth is not enough at the user end.
-
UCSD has become a Monte Carlo production center for CMS
and is expected to join the large scale simulation effort in Babar
as well. We are also attempting to become more involved in
remote analysis. We estimate that, to meet our production and analysis goals, we need to routinely achieve 100 Mbps between UCSD and the experiments we participate in.
No specific need
-
Physics Department
University of Illinois at
Chicago
USA
Centro Nacional de Cálculo Científico Universidad de Los Andes (CeCalCULA)
We are not aware of any upgrade plans. Our network needs better security.
Dep. de Fisica, Fac. de
Ciencias, Univ. Central
de Venezuela(UCV)
Venezuela
Vinca – Inst. of Nuc.
Higher Bandwidth to outer links. Both Regional and
ScienceInternational are low bandwidth.
Yugoslavia
-
Table 7: Most relevant networking related problems
Institution
UNLP,
Argentina
HHE(VUB-ULB),
Belgium
University of Louvain –
Belgium
University of MonsHainaut - Service PPE Belgium
UERJ,
Brazil
IFT-UNESP
Brazil
University of Cyprus –
CYPRUS
Institut fuer
Experimentelle
Kernphysik – University
of Karlsruhe –
Germany
TATA,
India
Weizmann Institute of
Science –
Israel
Dept. of Physics and
INFN, Bologna
Italy
Most relevant networking related problems
2 Mbps optical link, shared with well over 1000 people, is not sufficient
Technical manpower assistance
UERJ is outside Rede Rio's 155 Mbps ATM ring, connected by a 10 Mbps link. Rede Rio has a private international 45 Mbps connection, which is saturated. RNP, the national backbone research network, has a 155+42 Mbps international link that could be used by our institution if the proper configuration were put in place.
(2/3) speed and bandwidth
Last mile connections and heavy traffic on the shared network limit the present work. Need to buy more nodes for CMS Monte Carlo work. Lack of people trained in network technologies.
-
High prices
No info
We are involved in the LHC Program as well as other HEP experiments (CDF, HERA-B, etc.). Network upgrade plans mainly follow the LHC projects and experiments, so that our local “Tier2s” can be part of the LHC Computing Systems. Another driving activity for the network upgrades is our participation in Grid Projects.
Apart from bandwidth, we consider it important to have a “global” HENP policy for “security”. Firewalls, at least, must be coordinated.
INFN Napoli
Italy
Cinvestav,
Mexico
QAU,
Pakistan
Budker Insitute .of
Nuclear PhysicsRussia
ITEP – Moscow
Russia
2 x 2Mbps links for the whole community is not sufficient
(2/3) Internet connection bandwidth. In the process of deploying a 64 Kbps leased line (by October) - limited by funding
Current: 0.5M dedicated channel BINP-KEK; 100M connection to regional network; 34M shared connection to National backbone (up to 10% really available); 10M shared connection to Global Internet (up to 10% really available).
Future needs: 2M dedicated channel BINP-KEK + (2-3) Gbps connection to regional network + (4-5)x155M shared connection to National backbone or 155M dedicated connection to National backbone + 155M dedicated connection to Global Internet.
No information
Table 7: Most relevant networking related problems(Continuation)
Institution
Mathematics and
Computing Division –
IHE-Protvino – Moscow
Region
Russia
Novgorod State
University,
Russia
Tech. Div. S. Petersburg
Nuc. Phys. Inst. Russian
Academy of Science
(PNPI RAS
Russia
Institute of Experimental
Physics, Slovak Academy
of Sciences, Kosice
SLOVAKIA
JSI, Slovenia
Instituto de Física de
Cantabria –University of
Cantabria
Spain
Inst. Of Particle Phys. –
ETH-Zurich
Switzerland
University of Cukurova,
Adana
Turkey
Middle East Technical
University
Turkey
University of California –
Santa Barbara
CA
USA
Physics Dept University
of Illinois at Chicago
USA
Centro Nacional de
Cálculo Científico
Universidad de Los
Andes (CeCalCULA)
Dep. de Fisica, Fac. de
Ciencias, Univ. Central
de Venezuela(UCV
Venezuela
Vinca – Inst. of Nuc.
ScienceYugoslavia
Most relevant networking related problems
International connections: low bandwidth caused by high prices
We have only financial, rather than technical problems
Next requirements are for better international networks
--
LAN-WAN interconnection: high load, no QoS, new infrastructure needed
Network national hierarchy is the main problem now.
-
-
-
No info
No Info
High prices and the lack of national policies have forced us to look for individual connectivity solutions instead of the cooperative organization that we had planned.
Perhaps the biggest problem we face is the lack of support and improvement policies, both at
the faculty and university level
A local firewall to improve security could be useful.
To our knowledge there are no committees trying to solve our problems.
No Funds to do upgrades
Table 8: Presented ideas for prospective solutions
Institution
UNLP,
Argentina
HHE(VUB,ULB),
Belgium
University of Louvain –
Belgium
University of MonsHainaut - Service PPE –
Belgium
UERJ,
Brazil
IFT-UNESP
Brazil
University of Cyprus –
CYPRUS
Institut fuer
Experimentelle
Kernphysik – University
of Karlsruhe –
Germany
TATA,
India
Weizmann Institute of
Science –
Israel
Dept. of Physics and
INFN, Bologna
Italy
INFN Napoli
Italy
Cinvestav,
Mexico
QAU,
Pakistan
Budker Insitute .of
Nuclear Physics –
Russia
ITEP – Moscow –
Russia
Mathematics and
Computing Division –
IHE-Protvino – Moscow
Region
Russia
Presented ideas toward a solution for the detected problems
Better economical support
A solution will be to have a dedicated line and high bandwidth from the laboratory
directly to the BELNET research network.
-
(1) Upgrade the internal network as soon as the approved budget is delivered.
(2) Upgrade the last mile link with an “almost”-blind fiber and up to 155 Mbps ATM.
(3) Upgrade Rede Rio's international link, or reconfigure our routing tables to forward packets through RNP's international link (a Rede Rio and RNP agreement is necessary). Most of our solutions require funding that is not immediately available.
University should provide funds to increase speed and bandwidth
More effort is needed to include the smaller institutes and laboratories in the Grid projects, and not only the well-established research centers. Grids should concentrate on HEP programs and not waste limited resources.
-
Proposals for progressive upgrades
Next upgrade to 45 Mbps. Implement QoS (Quality of Service) classes to get a discount on bulk traffic.
There is an increasing need for “QoS” and it seems difficult to get it from the network providers. Real “QoS” services can only be obtained if the “quality” is maintained all along the path between end-nodes.
No Info.
More money would allow them to buy additional links 2 Mbps each.
(2/3) Upgrade Internet connection to 512 Kbps
No specific technical ideas/inventions - all technologies are well proven and the only
thing necessary in order to implement them is additional funding.
No specific ideas
In many cases we need access only to some particular scientific centers (CERN, FNAL, DESY, …) and not to all Internet resources. So maybe there are possibilities to organize tunnels to such points in the Internet at a lower tariff. This would allow the HEP community to exchange petabytes of data at reasonable cost.
Table 8: Presented ideas for prospective solutions (Continuation)
Institution
Novgorod State
University,
Russia
Tech. Div. S. Petersburg
Nuc. Phys. Inst. Russian
Academy of Science
(PNPI RAS)
Russia
Institute of Experimental
Physics,Slovak Academy
of Sciences, Kosice
SLOVAKIA
JSI,
Slovenia
Instituto de Física de
Cantabria –University of
Cantabria
Spain
Inst. Of Particle Phys. –
ETH-Zurich
Switzerland
University of Cukurova,
Adana
Turkey
Middle East Technical
University
Turkey
University of California –
Santa Barbara CA
USA
University of California,
San Diego (UCSD)
USA
Presented ideas toward a solution for the detected problems
We have just begun; thus the only problem we have met is a lack of facilities, due to the lack of funds allocated to this program.
Physics Department
University of Illinois at
Chicago
USA
Centro Nacional de
Cálculo Científico
Universidad de Los
Andes (CeCalCULA)
No Info
No information
Better infrastructure: 1 Gbps Giga-Ethernet for LAN and WAN, 10 Gbps SDH for
national network and 1 Gbps SDH for international links. Reserved bandwidth for HEP
-
-
--
-
No Info
No Info
Establish a cooperative organization that provides services (IP telephony, digital libraries, distributed computing, storage, and training) to the institutions
Dep. de Fisica, Fac. de
Ciencias, Univ. Central
de Venezuela(UCV
Venezuela
Vinca – Inst. of Nuc.
Dedicated Line to HEP
ScienceYugoslavia
Table 9: Present Computing Facilities Dedicated to HEP
A-Mainframe, Clusters, LAN, Networked CPUs
Institution
UNLP,
Argentina
HHE(VUB,ULB)
Belgium
Mainframe
No
No
1 Beowulf cluster, CPU
type P4, 480 Mbytes
storage, interconnection
via fast Ethernet
Via fast Ethernet
University of
Louvain –
Belgium
No
.
No clusters.
We have 5 servers:
1 - management
3 - computing (1 HEP, 2
shared with other
research projects)
1
storage (HEP)
4. We have ± 15 of
desktop PCs for HEP.
Connection is a mix of
100/1000 Mbit/s
Ethernet between
servers. Offices are
connected at 10 Mbit/s.
University of Mons- Hainaut - Service
PPE –
Belgium
UERJ,
Brazil
IFT-UNESP
Brazil
University of
Cyprus
CYPRUS
Institut fuer
Experimentelle
Kernphysik –
University of
Karlsruhe Germany
TATA,
India
Weizmann Institute
of Science –
Israel
QAU,
Pakistan
Clusters
Networked CPUs
Isolated PCs in a LAN
No
No
Dedicated cluster of 4 dual-processor Intel PIII 1.2 GHz nodes, each with 1 GByte of memory, and 2 single-processor PIV nodes with 512 MBytes of memory, all running Linux RedHat 7.x, with a 2.5 TB RAID-5 disk array to be deployed.
Networked via 1 GBit Ethernet.
35
No
No
No
Linux Clusters
“many” nodes
-
with “few” computers
Workstations
LAN Technology
10 Mbps, shared copper
links
LAN: switched Ethernet
at 100 Mbps, essentially
copper links
Our LAN is connected
to the WAN though the
University network. For
the moment, there is no
dedicated
network/gateway
for
HEP community
Outside connection is
via 100MBit Ethernet
network
copper cable
10/10
hub/switches
100 Mbps
and
-
-
-
No
No
32
100 Mbps
No
64 Pentium-farm
No Info.
100 Mbps
No
6 PC Linux
10
Ethernet
A-Mainframe, Clusters, LAN, Networked CPUs (Continued)
Institution
Mainframe
Dept. of Physics and
INFN, Bologna
No
Italy
Clusters
Yes we have Clusters and
Farms, about 10. Most of
them are Linux. They
range from a couple of
nodes to 30 dual-CPU
boxes rack mounted.
Typical CPUs are PIII at
1GHz (but we are rapidly
changing to the new
CPUs).
Networked CPUs
Storage is served by
“disk servers” (some TB
on a few servers) via
NAS or SAN (Gbit
Ethernet and/or FC).
INFN Napoli
Italy
No
local cluster for CMS
6 CPU of 2.4GHz
360 GB storage,
connected at 100 Mbps
planned to be upgraded
at 1Gbps
1100 Mbps Ethernet link Naples is a new CMS
group, working on the
construction of a Farm
in order to begin to work
on the CMS core
software development.
Cinvestav, Mexico
No
25 2-CPUs + File Server
Budker Insitute .of
Nuclear Physics –
Russia
No
(4-5) clusters working
for different projects,
each of 10-15 middlerange CPU (P3/500MHz,
1GB Memory,
20GB/CPU storage).
Intracluster
interconnections are
based on switched
100M/1G Ethernet.
LAN Technology
The LANs are all
100Mbps Ethernet,
interconnected via Gbit
Ethernet trunking. Some
portion of LANs are
already full Gbit
Ethernet also for end
nodes. All the nodes
(with very few
exceptions) are
switched.
Physical media is
twisted pairs (cat 5 and
above) copper. Gbit
Ethernet for trunking is
on fiber, local Gbit
connections are on
copper. We have a
backup connection
between our two major
buildings via radio link
(35 Mbps) HEP has a
special link (INFN
Section of Bologna) to
WAN (GARR) currently
via ATM at 34 Mbps.
The Dept. of Physics of
the University of
Bologna has a
connection via ATM at
622 Mbps to University
MAN which is in turn
connected to GARR at
100 Mbps (fast
Ethernet).
100 Mbps NICs w/
twisted pair
Usually 100M
connection via Ethernet
switch.
A-Mainframe, Clusters, LAN, Networked CPUs (Continued)
Institution
ITEP Moscow
Russia
Mainframe
No
Clusters
50 CPUs 2 TB Storage
Mathematics and
Computing Division
–IHE-Protvino –
Moscow Region
Russia
AlphaServers
: (1)8200
(6CPUs)
5/300 MHz;
1GB RAM;
(2) 8200
(6CPUs)5/44
0 MHz;1GB
RAM;
(3) 200
(4CPUs)
5/625 MHz;
1GB RAM.
No
(1)10 CPUs Pentium III
833 MHz;
(2)30 CPUs Pentium III
930 MHz;
(3)2 TB discs;
(4)Fast Ethernet.
No
One CPU connected to
the LAN, it is dedicated
to HEP
Tech. Div. S.
Petersburg
Nuc.Phys. Inst.
Russian Academy
of Science (PNPI
RAS)
Russia
No
Our LAN consists of more than 500 PCs (~200 PCs dedicated to HEP).
Cluster dedicated to HEP:
6 dual PC (Intel p-III-700
Mhz, RAM-256 Mb,
HDD 30 GB).
The segments of the LAN belong to different departments and are built on 10 Mbps Ethernet and Fast Ethernet.
Institute of
Experimental
Physics,Slovak
Academy of
Sciences, Kosice
SLOVAKIA
No
mainframe
We have a Linux PC cluster (10 nodes; 1 node is based on an SMP motherboard with 2 processors. An average node consists of: 2x733MHz Pentium III processors, 512MB RAM, 40GB HD).
Nodes are
interconnected through
100Mbit ethernet
switch, which is
connected to LAN.
Common RAID-5 disk
array (140GB) is used
by all the nodes, this
year to be upgraded to
1TB RAID-5 disk array.
1c. 100Mb (through
ethernet switch)
Novgorod State
University,
Russia
JSI,
Slovenia
No
Inst. de Física –
Univ. of Cantabria
Spain
No
~40 CPUs, ~25 Athlon
1500XP equiv + Alpha
server, 1 TB disk
80 Linux 1.26 GHz 640
Mb memory, storage: 7
Tb disk, 7Tb tapes
Networked CPUs
LAN Technology
100-1000 Mbps Ethernet local connections: 100
Mbps Ethernet, and 1
GBit Ethernet backbone
optical link
Fast Ethernet;
10 Mbps Ethernet;
FDDI;
100 Mbps Ethernet;
Ethernet
FDDI
We have only one
computer connected to
the University’s switch
using Fast Ethernet (100
Mbps).
Technologies use copper twisted pairs (Cat 5) and communication equipment (hubs and switches from Cisco, 3COM, Intel and others). The segments are combined into the Institution's LAN using optical fiber (length ~5 km).
Computing facility is
shared, (but now it is
used mainly by HEP
group).
100 Mbps ethernet,
switched copper links
Through the univ.
network
A-Mainframe, Clusters, LAN, Networked CPUs (Continuation)
Institution
Inst. Of
Particle Phys.
–ETH-Zurich
Switzerland
Mainframe
No
University of
Cukurova,
Adana
Turkey
NO
Clusters
Networked CPUs
For CMS computing
ETH provided :
CPU :
a) 25% of a Beowulf cluster (ASGARD), corresponding to 128 CPUs (Intel 500 MHz each), located in Zuerich. This is an HEP facility shared with others (theory, etc.); HEP receives 25% of an overall 512 CPUs.
b) about 5 PC (Intel
500MHz each) , located at
Zuerich
c) about 10 PC (Intel 1 GHz
each) , located at CERN
All Linux OS;
Disk :
a) about 1 TB, located at
Zuerich, attached to
ASGARD
b) about 1 TB, located
at CERN, available to
the PCs
c) about 0.5 TB, located at
Zuerich, available to the PCs
NO
Through ethernet cards
LAN Technology
Network connection:
all connections are with
Ethernet; Mostly 100
Mbit/s, apart from the
fileservers on the
ASGARD cluster, which
is 1 Gbit/s.
Most of the LAN and
WAN questions have
already been answered
above.
Ethernet everywhere.
Optical fibers for the
long range links (Zuerich
-> CERN)
We have 10/100 Mbps
Ethernet and ATMs. We
have copper links
connect to main router
with single mode fiber
optic.
Local WAN has single mode optical fiber. HEP does not have any special link.
Middle East
Technical
University
Turkey
University of
California –
Santa Barbara
CA
USA
Yes
NO
No for HEP Mostly
Pentium IV
Most CPUs use
100BaseT switched
network
NO
100 Mbps Ethernet
LAN->WAN: Gigabit
Ethernet.
A-Mainframe, Clusters, LAN, Networked CPUS (Continuation)
Institution
University of
California,
San Diego
(UCSD)
USA
Physics
Department
University of
Illinois at
Chicago USA
Centro
Nacional de
Cálculo
Científico
Universidad
de Los Andes
(CeCalCULA)
Dep. de
Fisica, Fac.
deCiencias,
Univ. Central
de
Venezuela(UC
V)
Venezuela
Vinca – Inst.of
Nuc.ScienceYugoslavia
Mainframe
NO
Clusters
We currently have a computing cluster of 20 dual-CPU P3 800 MHz nodes. These are connected to 4.5 TB of IDE and SCSI RAID arrays. These systems will be connected with gigabit. We hope to upgrade to gigabit Ethernet over fiber soon.
Networked CPUs
The data servers are connected with gigabit Ethernet and the computational nodes have 100 Mbps fast Ethernet. We are in the process of procuring an additional 20 computational nodes with 2.4 GHz dual P4 Xeons.
Mainframe: Not at all; we have several multiprocessor servers (SUN, IBM and SGI); see http://www.cecalc.ula.ve/recursos/hardware/
Clusters: ~60 dual processors, ~900 MHz, 1 GB RAM per 2 processors, ~3 terabytes storage capacity
Networked CPUs: Gigabit Ethernet
We just have several CPUs connected to the LAN with TCP/IP.
The LAN uses 100 Mbps, switched, on copper links.
No
No
400
LAN Technology
Our local area network
is 100Mbps Ethernet
over copper We are
currently connect to the
wide area network over
100Mbps copper
Ethernet.
Gigabit trunking through fiber optics; 100 Mbps local area through twisted pair and radio links (Broadband Delivery System and Spread Spectrum, 2.4 GHz, 11 Mbps).
Router CISCO 7500 with two links (512 KB/2 GB).
100 Mbps
B-Firewall, WAN, Hops, Financial Support
Institution
Firewall
UNLP,
No
Argentina
HHE(VUB-ULB), Belgium: Yes
University of
Louvain –
Belgium
Yes
University of
Mons-Hainaut
- Service PPE
Belgium
UERJ,
Brazil
IFT-UNESP
Brazil
University of
Cyprus –
CYPRUS
No
No
Yes
Institut fuer
Experimentelle
Kernphysik –
University of
Karlsruhe –
Germany
TATA,
India
Weizmann
Yes
Institute of
Science –
Israel
Dept. of
Yes
Physics and
INFN,
Bologna
Italy
WAN connection
Commercial optical
fiber
University LAN is
then connected to the
WAN at 2.5 Gbps.
No separate HEP
connections to the
WAN
Our LAN is connected
to the WAN through
the University
network. There is no
dedicated
network/gateway for
HEP community.
We are using the
internal 100 MBit
network of our
university
University's 10 Mbps ATM over fiber
NO Info
Hops
23 to CERN 22
to Fermilab
About 10 hops
Local Network to a
WAN by CYNET
(National Level) and
GEANT for
international)
-
HEP does not have to pay for the WAN
connection
HEP does not pay for network infrastructure.
One
No
20
Rio de Janeiro State Government
20 to FNAL
University
Only 1 internet
Router. The
Second Line is
128Kbps
University of Cyprus
-
-
ATM/Gb internal
15 to CERN
backbone/ 35 Mbps to
Regional Network
over fiber.
See Above Table A
10.
Who pays for the connection ?
UNLP and/or Conicet
Weizmann Institute
HEP has its own WAN connection and
therefore pays for that. There is
agreement between HEP (INFN) and the
Department (University) for use of the
reciprocal WAN connections in case of
failures (backups).
B-Firewall, WAN, Hops, Financial Support (Continuation)
Institution
INFN Napoli
Italy
Cinvestav,
Mexico
QAU,
Pakistan
Budker Inst. of
Nuc.Phys
Russia
ITEP –
Moscow Russia
Mathematics
& Comp. Div.
–IHE-Protvino
– Moscow
Region
Russia
Novgorod
State
University,
Russia
Tech. Div. S.
Petersburg
Nuc. Phys.
Inst. Russian
Academy of
Science (PNPI
RAS)
Russia
Firewall
No
WAN connection
1 Gbps switched LAN (no trunking, no ATM) with connections to single hosts at 100 Mbps (copper Ethernet cabling); 64 Mbps connection to GARR dedicated to INFN and the University of Naples Physics Department.
Hops
Who pays for the connection ?
WAN connection is covered in the
general expenditures by INFN
Not mentioned
22 to Fermilab
Cinvestav and/or Conacyt
No
-
15
The Institute
No info.
No Info.
Regional 5-10
Internet. 10-20
No info.
No Info.
No Info
10
IHEP
Yes.
No info.
30
The University
No Info.
Our LAN is connected to the commercial WAN using optical fiber. The HEP community of our institute has no special lines to the WAN. The bandwidth PNPI-Gatchina is ~256 Kbps, Gatchina-Saint Petersburg 2048 Kbps (current status). The PNPI-Gatchina bandwidth is restricted for financial reasons.
Hops: No Info.
Russian Academy of Science
B-Firewall, WAN, Hops, Financial Support (Continuation)
Institution
Firewall
Institute of
YES
Experimental
Physics,Slovak
Academy of
Sciences, Kosi
ce
SLOVAKIA
JSI,
Yes
Slovenia
Inst. de Física No
de Cantabria –
Univ.
Cantabria
Spain
Inst. of
No
Particle Phys.–
ETHZurich
Switzerland
University of
Cukurova,
Adana &
University of
Cukurova,
Adana,
Turkey
Middle
East
Technical
University
Turkey
University of
No
California –
Santa Barbara
CA
USA
University of
No
California, San
Diego (UCSD)
USA
Physics
Department
University of
Illinois at
Chicago
USA
No
WAN connection
Ethernet 100Mbit is
used in LAN
Hops
13 hops
Who pays for the connection ?
LAN is connected by 10Mb/s fibre, to be upgraded to 100Mbit this year.
There is no specific payment for
bandwidth paid by HEP community.
Internet connection is shared, no limit
(for now) is known to me.
12 to CERN
Not mentioned
CERN – 4
FERMILAB – 6
Cover by the University
-
30
ETH
2 Gbps through a switch; HEP has no special link to the WAN. The main entry is ATM and a star structure is used for local distribution.
3
TUBITAK and our University
Gigabit Ethernet.
30
Every IP host at UCSB pays $11 per IP
address per year for the IP network usage
-
Between 10 and
15 hops
The network connection is covered by the
general expenditures of the university
-
7
Connection to WAN is paid for by
university
1 Gbps optical fiber,
no special links for
HEP
B-Firewall, WAN, Hops, Financial Support (Continuation)
Institution
Firewall
Centro
No Info.
Nacional
de
Cálculo
Científico
Universidad de
Los
Andes
(CeCalCULA)
Venezuela
Vinca – Inst.
of Nuc.
Science-
No info.
WAN connection
Gb internal backbone
– 128Kbps serial line
to Regional Network
Hops
For physics programs involving national institutions; and 5 or 6 to international institutions in the USA.
20 to CERN
Who pays for the connection ?
Our university
Serbian Government
Yugoslavia
ICFA SCIC Questionnaire – Responses by Russian Sites
These are contained in the accompanying report on the Digital Divide in Russia.
Appendix C - Case Studies
Rio de Janeiro, Brazil – Network Rio ( REDE RIO)
We have shown several maps of the networking situation in Brazil, in Appendix
A. Brazil is part of South America, a region experiencing a strong Digital Divide
condition. Of the countries in the region, Brazil and Chile have the best
infrastructure to support Grid-based networks for HEP, followed by
Mexico. We will examine the status of the Rio de Janeiro State Network (REDE-RIO)
that is part of a larger national network - Rede Nacional de Ensino e Pesquisa (RNP).
The States of Rio de Janeiro and Sao Paulo host the most important science and
technology state funding agencies: FAPERJ and FAPESP, respectively. Both have as
a fundamental goal the support and funding of state universities and state research centers
(there are also federal funding agencies, whose main goal is supporting federal
institutions). FAPERJ and FAPESP each run a regional network connecting primarily
the universities in their states. As a result, the policies of these networks are defined by these
agencies.
It is common knowledge that Grid projects need very high-speed networking
infrastructures. The people responsible for the development and operation of the Brazilian
research networks have been systematically warned of the urgency and necessity to upgrade
the network infrastructures to properly support advanced Grid-based physics projects.
Unfortunately, there has been little to no reaction to our pleas for help.
Recently, RNP, Brazil’s National Research Network operator, received approval for a
project to deploy a Gigabit-network infrastructure30 connecting the research networks in Rio
de Janeiro and Sao Paulo. RNP is aware of the fact that the Brazilian High Energy Physics
groups have been waiting for an adequate network infrastructure to begin running experiments
using a Grid-based architecture, supporting high-volume data exchanges. We are optimistic
that RNP’s GIGA Project will provide significant improvements in bandwidth availability and
access to network resources to advance Brazil’s HENP community’s Grid-physics
requirements.
However, it is uncertain whether Brazil's national funding agencies will recognize our Grid-based research and collaboration as a science application. Ultimately, Brazil's scientific
community must succeed in establishing a national policy that contains a long-term strategy
to provide funding for the advancement of network and computational technologies in support of
our scientific disciplines. In Brazil, the scientific community has traditionally not paid for
network infrastructure, which is generally supported by the agencies mentioned above.
[30] What will Optical Networking for the Americas look like?, Michael Stanton, Rede Nacional de Ensino e Pesquisa do Brasil – RNP, AMPATH Workshop: Fostering Collaboration and Next-Generation Infrastructure, FIU, Miami, FL, USA, January 2003, http://www.ampath.fiu.edu/miami03_agenda.htm
There are many Brazilian institutions collaborating with all of the LHC experiments: UERJ, UFRJ, UFBA, USP, UNESP and CBPF. If we extend this to the experiments at Fermilab and to Auger, the number of institutions increases (UNICAMP, for example). If we also consider the engineers, phenomenologists and computer science researchers working in HEP experiments, the number of institutions increases further (UFRGS, for example). Counting the LHC collaborators distributed among the four experiments, we have about 60 researchers (physicists, engineers and computer scientists) from several universities and research centers (UERJ, UFRJ, UFBA, UFRGS, UNICAMP, CBPF, USP, and UNESP).
The academic and scientific networks mentioned above each have technical committees; unfortunately, however, the major science users are not represented on them. This lack of representation causes many difficulties for the science and research users of these networks, and it has greatly contributed to the Digital Divide conditions faced by research communities such as that of the State University of Rio de Janeiro (UERJ).
In the State of Rio de Janeiro, the main backbone used for research networking was established by an agreement between FAPERJ, Rede-RIO and Telemar (a telecommunications provider). The initial agreement favored five institutions that would be connected to the backbone without fees. The criteria for selecting the five institutions were not clearly defined, and as a result UERJ was not considered in this initial agreement. Brazil's HENP community, led by UERJ, informed the management of RNP and Rede-Rio that the selection criteria had left its universities and laboratories unconnected to Rede-Rio. Regrettably, no change in policy occurred in this initial offering, and the HEP community at these universities did not receive any improvement in its network connectivity.
Four years ago, a second agreement with Telemar was established to upgrade the Rede-Rio backbone to 155 Mbps. Again, important research universities, such as UERJ, the most important university funded by the State Government of Rio de Janeiro, were not included in the network upgrade plan. Until 2001, the connection between UERJ and Rede-RIO was 2 Mbps. After a great deal of effort, UERJ's connection was finally upgraded to a meager 8 Mbps. This rate still does not allow UERJ's HEP group to conduct important work with its international collaborators. Moreover, the link is now saturated with the university's general network traffic, even before counting the traffic generated by our HEP experiments. Many of the other universities listed above face the same conditions as UERJ. Clearly, if this situation does not improve, the consequences for science in Brazil, and in particular for High Energy Physics, will be disastrous. Beyond the HEP projects at UERJ and at other Brazilian universities, many projects in other domains are halted due to inadequate access to network resources.
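To put these link speeds in perspective, the following back-of-the-envelope sketch estimates how long a single dataset transfer would take at the rates mentioned in this case study. The 100 GB dataset size is an illustrative assumption, not a figure from this report, and the estimate ignores protocol overhead and the competing general-purpose traffic described above.

#!/usr/bin/env python3
# Back-of-the-envelope sketch (not from the report): time to move a single
# dataset at the link speeds mentioned in this case study. The 100 GB
# dataset size is an illustrative assumption, and the estimate ignores
# protocol overhead and competing general-purpose traffic.

DATASET_GB = 100  # assumed dataset size, in (decimal) gigabytes

LINKS_MBPS = {
    "UERJ link before 2001 (2 Mbps)": 2,
    "UERJ link after the upgrade (8 Mbps)": 8,
    "Rede-Rio backbone (155 Mbps)": 155,
    "GIGA-class link (1 Gbps)": 1000,
}

for name, rate_mbps in LINKS_MBPS.items():
    megabits = DATASET_GB * 8 * 1000   # 1 GB = 8,000 megabits
    seconds = megabits / rate_mbps     # ideal, full-rate transfer
    hours = seconds / 3600
    print(f"{name}: {hours:8.1f} hours ({hours / 24:5.1f} days)")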
Beijing, China
The first data links to China were established in the mid-1980s with the assistance of Caltech (with DOE support) and CERN, in association with the L3 experiment at LEP. These links used the "X.25" protocol, which was then widely used in Europe and the US alongside the TCP/IP links that make up today's Internet.
The first "always on" Internet link to mainland China became operational in 1994, between SLAC and IHEP/Beijing, to meet the needs of the IHEP/SLAC Beijing Electron Spectrometer (BES) collaboration. Wolfgang Panofsky of SLAC and T. D. Lee of Columbia were instrumental in starting the project. Part of the funding for the link was provided through SLAC by the DOE, and SLAC contributed much of the expertise and project leadership, including visits by Les Cottrell to Beijing to provide assistance and training.