DAS-1
DAS-2
DAS-3
Henri Bal
Vrije Universiteit Amsterdam
Faculty of Sciences
Outline
• Introduction
• DAS-1, DAS-2, DAS-3 systems
• Impact of DAS
• Collaborations with
  - VL-e
  - SURFnet
  - Grid’5000
• New users
Distributed ASCI Supercomputer
• Joint infrastructure of ASCI research school
• Clusters integrated in a single distributed testbed
• Long history (10 years) and continuity
DAS is a Computer Science grid
• Motivation: CS needs its own infrastructure for
- Systems research and experimentation
- Distributed experiments
- Doing many small, interactive experiments
• DAS is simpler and more homogeneous than
other grids
- Single operating system
- “A simple grid that works”
Typical heterogeneous Grid
[Illustration of a typical heterogeneous grid, for contrast with DAS]

DAS-1 (1997-2002)
Configuration:
• 200 MHz Pentium Pro
• 64-128 MB memory
• 2.5 GB local disk
• Myrinet interconnect
• BSDI => Redhat Linux
Sites: VU (128), Amsterdam (24), Leiden (24), Delft (24)
Wide-area: 6 Mb/s ATM
DAS-2 (2002-2007)
Configuration:
• two 1 GHz Pentium-3s
• >= 1 GB memory
• 20-80 GB disk
• Myrinet interconnect
• Redhat Enterprise Linux
• Globus 3.2
• PBS => Sun Grid Engine
Sites: VU (72), Amsterdam (32), Leiden (32), Utrecht (32), Delft (32)
Wide-area: SURFnet, 1 Gb/s
DAS-3 Procedure
• NWO proposal (ASCI, VL-e, MultimediaN), Sep’04
• Funding from NWO/NCF, Apr’05
• Steering group + implementation group
• European tender (with Stratix and TUD/GIS)
• Selected ClusterVision, Apr’06
DAS-3
• Optical wide-area network by SURFnet (GigaPort-NG project)
Configuration:
• dual AMD Opterons
• 4 GB memory
• 250-1500 GB disk
• More heterogeneous: 2.2-2.6 GHz, single/dual core nodes
• Myrinet-10G (exc. Delft)
• Gigabit Ethernet
• Scientific Linux 4
• Globus, SGE
Sites: VU (85), UvA/MultimediaN (46), UvA/VL-e (40), Leiden (32), TU Delft (68)
Wide-area: SURFnet6, 10 Gb/s lambdas
Performance

                             DAS-1   DAS-2   DAS-3
# CPU cores                    200     400     792
SPEC CPU2000 INT (1 core)     78.5     454    1445
SPEC CPU2000 FP (1 core)      69.0     329    1858
1-way latency MPI (µs)        21.7    11.2     2.7
Max. throughput (MB/s)          75     160     950
Wide-area bandwidth (Mb/s)       6    1000   40000
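The latency and throughput rows in tables like this one are conventionally measured with a ping-pong microbenchmark: bounce a message between two endpoints, time many round trips, and take half the round-trip time as the one-way latency (small messages) or divide message size by transfer time (large messages). The sketch below illustrates the technique only; it uses plain TCP over loopback in Python, not the MPI-over-Myrinet benchmark actually run on DAS, so the numbers it prints will differ from the table by orders of magnitude.

```python
# Ping-pong microbenchmark sketch: one thread echoes messages back,
# the main thread times round trips. Illustrative only (TCP loopback,
# not MPI over Myrinet as used for the DAS measurements).
import socket
import threading
import time

def pingpong(msg_size, iters):
    """Return the estimated one-way time (seconds) per message."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))   # OS picks a free port
    server.listen(1)
    port = server.getsockname()[1]

    def echo():
        conn, _ = server.accept()
        with conn:
            for _ in range(iters):
                buf = b""
                while len(buf) < msg_size:        # receive full message
                    buf += conn.recv(msg_size - len(buf))
                conn.sendall(buf)                 # bounce it back

    t = threading.Thread(target=echo)
    t.start()

    client = socket.socket()
    client.connect(("127.0.0.1", port))
    payload = b"x" * msg_size
    start = time.perf_counter()
    for _ in range(iters):
        client.sendall(payload)
        buf = b""
        while len(buf) < msg_size:
            buf += client.recv(msg_size - len(buf))
    elapsed = time.perf_counter() - start

    client.close()
    t.join()
    server.close()
    # One-way latency = half the average round-trip time.
    return elapsed / iters / 2

lat = pingpong(1, 1000)          # tiny message -> latency estimate
one_way = pingpong(65536, 100)   # large message -> throughput estimate
print(f"one-way latency ~ {lat * 1e6:.1f} us")
print(f"throughput ~ {65536 / one_way / 1e6:.0f} MB/s")
```

The same two-regime trick explains why the table reports both a latency (small-message) and a maximum-throughput (large-message) figure per system.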
Outline
• Introduction
• DAS-1, DAS-2, DAS-3 systems
• Impact of DAS
• Collaborations with
  - VL-e
  - SURFnet
  - Grid’5000
• New users
Usage of DAS
• ~200 users, 34 Ph.D. theses
• Clear shift of interest:
  Cluster computing =>
  Distributed computing =>
  Grids & peer-to-peer computing =>
  Virtual laboratories for e-Science
Opening workshops
• DAS-1 opening (1998):
  - only 1 talk (VU) about distributed computing
• DAS-2 opening (2002):
  - only 1 talk (VU) about local computing
• DAS-3 opening (NOW):
  - talks about grids, e-Science, optical networks, international embedding,
    distributed multimedia computing, P2P/gossiping, co-allocating Grid Scheduler
Impact of DAS
• Major incentive for VL-e => 20 M€ BSIK funding
- Virtual Laboratory for e-Science
• Collaboration SURFnet on DAS-3
- SURFnet provides multiple 10 Gb/s light paths
• Collaboration with French Grid’5000
- Towards a European scale CS grid?
Grid’5000
VL-e: Virtual Laboratory for e-Science
Some VL-e results
• VLAM-G workflow system
• KOALA scheduler (see Mohamed’s talk)
• Zorilla peer-to-peer system (see Drost’s talk)
• Java-centric grid computing (see demo)
  - Ibis: Grid communication library
  - Satin: divide&conquer on grids
  - JavaGAT: Gridlab Application Toolkit
• Applications of Ibis, Satin, JavaGAT:
  - Automatic grammar learning (UvA)
  - Protein identification (AMOLF)
  - Brain image analysis (VUmc)
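Satin, listed above, lets a programmer write a grid application as plain divide-and-conquer: recursive calls are marked as spawnable, a sync waits for their results, and the runtime distributes the spawned tasks across clusters by work stealing. The sketch below is a rough single-process Python analogue of that model, not Satin’s actual Java spawn/sync API; a thread pool stands in for the grid-wide runtime.

```python
# Rough single-process analogue of Satin's divide-and-conquer model.
# In real Satin, "spawn" hands a recursive call to a grid-wide runtime
# (with work stealing between clusters) and "sync" blocks until all
# spawned children have finished. Here a local thread pool plays that
# role, and we only spawn near the root: with a naive thread pool,
# spawning at every level would fill the pool with blocked workers.
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=4)

def fib(n, depth=0):
    if n < 2:                       # sequential base case
        return n
    if depth < 2:                   # "spawn" only near the top of the tree
        left = pool.submit(fib, n - 1, depth + 1)
        right = pool.submit(fib, n - 2, depth + 1)
        return left.result() + right.result()   # "sync"
    # Below the cutoff: ordinary sequential recursion.
    return fib(n - 1, depth) + fib(n - 2, depth)

print(fib(20))  # -> 6765
```

The point of the model is that the application code stays a textbook recursive algorithm; all distribution decisions (where each spawned task runs) are left to the runtime.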
SURFnet6
[Map of the SURFnet6 national network: five colored subnetworks linking nodes in Groningen, Dwingeloo, Zwolle, Amsterdam, Leiden, Hilversum, Enschede, Den Haag, Nijmegen, Utrecht, Delft, Rotterdam, Dordrecht, Breda, Den Bosch, Eindhoven, Tilburg, and Maastricht]
• 4 DAS-3 sites happen to be on Subnetwork 1 (olive green)
• Only VU needs extra connection
• Source: Erik-Jan Bos
Adding 10G waves for DAS-3
[Diagram of the photonic nodes (Amsterdam1/2/3, Leiden, Den Haag, Delft, Utrecht, Hilversum): WSS split/select stages adding band-5 (G5) waves toward UvA/MultimediaN, VU, Leiden, and Delft, with a hardwired “spur” for the VU connection]
• Band 5 added at all participating nodes
• WSS added for reconfigurability
• “Spur” to connect VU
• Full photonic mesh possible
• Source: Erik-Jan Bos
• Also see De Laat’s talk
International grid experiments
• Distributed supercomputing on GridLab
  - Real speedups on a very heterogeneous testbed
• Crossgrid experiments
  - N-body simulations (astronomy)
  - Grid-based interactive visualization of medical images
Grid’5000
[Map of Grid’5000 sites in France — Orsay (1000 CPUs) plus Rennes, Lyon, Sophia, Grenoble, Bordeaux and others (500 CPUs each)]
Connecting DAS-3 and Grid’5000
• Optical 10 Gb/s link between Paris and Amsterdam, funded by VL-e
• See Cappello’s talk (and keynote @ CCGrid2007)
Users of DAS-3
• ASCI, VL-e
• MultimediaN
  - See MultimediaN talk + demo
• NWO projects
  - StarPlane, JADE-MM, GUARD-G, VEARD, GRAPE Grid, SCARIe,
    AstroStream, CellMath, MesoScale, …
• NCF projects (off-peak hours)
New users
• DAS is intended primarily for computer
scientists from ASCI, VL-e, MultimediaN
• How do we define “Computer Scientist”?
• Need a light-weight admission test …
Proposed test
There are only 10 types of people in the world:
those who understand binary and those who don’t.
=> Pass as Computer Scientist
Source:
Acknowledgements
• Lex Wolters
• Dick Epema
• Cees de Laat
• Frank Seinstra
• Erik-Jan Bos
• Henk van der Vorst
• Andy Tanenbaum
• Bob Hertzberger
• Henk Sips
• Aad van der Steen
• Many others
• NWO
• NCF
• SURFnet
• VL-e
• MultimediaN
• VU, UvA, TUD, Leiden
• ASCI
• ClusterVision
• TUD/GIS
• Stratix
Special acknowledgement ….
• Reasons why DAS is “a simple grid that works”:
  1) simple, homogeneous design
  2) Kees Verstoep