European DataGrid Project
Leanne Guy
WP2 – IT/DB Group – CERN
25/02/2002
Leanne.Guy@cern.ch
http://cern.ch/leanne
The LHC
• Counter-circulating beams of protons in the same beampipe.
• Centre-of-mass collision energy of 14 TeV.
• 1000 superconducting bending magnets, each 13 metres long, with a field of 8.4 Tesla.
• Super-fluid helium cooled to 1.9 K.
The world’s largest superconducting structure.
Online system
• Multi-level trigger
  • Filter out background
  • Reduce data volume
  • Online reduction of 10^7
• Trigger menus
  • Select interesting events
  • Filter out less interesting ones
[Figure: trigger/DAQ cascade – 40 MHz (40 TB/sec) detector output → level 1, special hardware → 75 kHz (75 GB/sec) → level 2, embedded processors → 5 kHz (5 GB/sec) → level 3, PC farm → 100 Hz (100 MB/sec) → data recording & offline analysis; the rates are worked through in the sketch below]
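As a rough illustration of the reductions implied by the figures in the diagram, the following sketch tabulates the per-stage rejection factors and output bandwidths. It uses only the numbers quoted above, plus an event size of ~1 MB, which is the assumption implied by 40 MHz corresponding to 40 TB/sec; the stage names and the script itself are illustrative, not part of the original slide.

```python
# Sketch: per-stage rate and bandwidth reduction of the multi-level trigger,
# using the figures quoted in the diagram above.

stages = [
    ("detector output",               40_000_000),  # Hz
    ("level 1 (special hardware)",        75_000),
    ("level 2 (embedded processors)",      5_000),
    ("level 3 (PC farm)",                    100),
]

EVENT_SIZE_MB = 1.0  # assumed: 40 MHz <-> 40 TB/sec implies ~1 MB per event

for (name_in, rate_in), (name_out, rate_out) in zip(stages, stages[1:]):
    print(f"{name_out:32s} {rate_in / rate_out:7.0f}x rejection, "
          f"output {rate_out * EVENT_SIZE_MB / 1000:.1f} GB/sec")
```

Run as-is, this reproduces the 75 GB/sec, 5 GB/sec and 0.1 GB/sec (100 MB/sec) output bandwidths quoted in the diagram.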
EDG partners
Managing partners
UK: PPARC
Italy: INFN
France: CNRS
Netherlands: NIKHEF
ESA/ESRIN
CERN
Industry
IBM (UK), Compagnie des Signaux (F), Datamat (I)
Associate partners
Istituto Trentino di Cultura, Helsinki Institute of Physics, Swedish Science
Research Council, Zuse Institut Berlin, University of Heidelberg, CEA/DAPNIA
(F), IFAE Barcelona, CNR (I), CESNET (CZ), KNMI (NL), SARA (NL), SZTAKI
(HU)
Other sciences
KNMI (NL), Biology, Medicine
A formal collaboration with the USA is being established
Programme of work
Middleware
WP 1 Grid Workload Management – INFN
WP 2 Grid Data Management – P. Kunszt/CERN
WP 3 Grid Monitoring Services – S. Fischer/PPARC
WP 4 Fabric Management – O. Barring/CERN
WP 5 Mass Storage Management – J. Gordon/PPARC
Grid fabric – testbed
WP 6 Integration Testbed – F. Etienne/CNRS
WP 7 Network Services – C. Michau/CNRS
Scientific applications
WP 8 HEP Applications – F. Carminati/CERN
WP 9 EO Science Applications – L. Fusco/ESA
WP 10 Biology Applications – C. Michau/CNRS
Management
WP 11 Dissemination – G. Mascari/CNR
WP 12 Project Management – F. Gagliardi/CERN
EDG requirements
Local fabric
• Management of giant computing fabrics
  • auto-installation, configuration management, resilience, self-healing
• Mass storage management
  • multi-PetaByte data storage, “real-time” data recording requirement
  • active tape layer – 1,000s of users, uniform mass storage interface
  • exchange of data and metadata between mass storage systems
Wide-area
• Workload management
  • no central status, local access policies
• Data management (a minimal replica-catalogue sketch follows this slide)
  • caching, replication, synchronisation, object database model
• Application monitoring
Note: build on existing components such as the Globus middleware of Foster (Argonne) and Kesselman (University of Southern California)
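To make the data-management requirement more concrete, here is a minimal sketch of the idea behind a replica catalogue: a mapping from a logical file name to the physical copies held at different sites, from which a job picks a nearby replica. The class, method, and site names are hypothetical and are not the EDG WP2 or Globus replica-catalogue API; the sketch only illustrates the lookup-and-select pattern behind the caching and replication bullets above.

```python
# Minimal sketch of a replica catalogue: logical file name -> physical replicas.
# Hypothetical names; not the EDG WP2 or Globus replica-catalogue API.

class ReplicaCatalogue:
    def __init__(self):
        self._replicas: dict[str, list[str]] = {}  # LFN -> list of PFNs

    def register(self, lfn: str, pfn: str) -> None:
        """Record that a physical copy of the logical file exists at `pfn`."""
        self._replicas.setdefault(lfn, []).append(pfn)

    def select(self, lfn: str, preferred_site: str) -> str:
        """Return a replica at the preferred site if one exists, else any replica."""
        pfns = self._replicas.get(lfn)
        if not pfns:
            raise KeyError(f"no replica registered for {lfn}")
        for pfn in pfns:
            if preferred_site in pfn:
                return pfn
        return pfns[0]

# Usage: a job running at CERN resolves a logical name to a nearby copy.
catalogue = ReplicaCatalogue()
catalogue.register("lfn:higgs-candidates.root",
                   "gsiftp://se01.cern.ch/data/higgs-candidates.root")
catalogue.register("lfn:higgs-candidates.root",
                   "gsiftp://se.nikhef.nl/data/higgs-candidates.root")
print(catalogue.select("lfn:higgs-candidates.root", preferred_site="cern.ch"))
```

Replication and synchronisation then amount to keeping this mapping consistent as copies are added, cached, or removed across sites.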