ATLAS and GridPP
GridPP Collaboration Meeting, Edinburgh, 5 November 2001
RWL Jones, Lancaster University
ATLAS Needs
• Long term, ATLAS needs a fully Grid-enabled reconstruction, analysis and simulation environment
• Short term, the first ATLAS priority is a Monte Carlo production system, building towards the full system
• ATLAS has an agreed programme of Data Challenges (based on MC data) to develop and test the computing model
Data Challenge 0
• Runs from October to December 2001
• Continuity test of the MC code chain
• Only modest samples (~10^5 events), essentially all in flat-file format
• All the Data Challenges will be run on Linux systems; compilers are distributed with the code if the correct version is not already installed locally
Data Challenge 1
• Runs in the first half of 2002
• Several sets of 10^7 events (high-level trigger studies, physics analysis)
• Intend to generate and store 8 TB in the UK, with 1-2 TB in Objectivity
• Will use the M9 DataGrid deliverables and as many other Grid tools as time permits
• Tests of distributed reconstruction and analysis
• Tests of database technologies
Data Challenge 2
• Runs in the first half of 2003
• Will generate several samples of 10^8 events
• Mainly in OO databases
• Full use of Testbed 1 and Grid tools
• Complexity and scalability tests of the distributed computing system
• Large-scale distributed physics analysis using Grid tools, plus calibration and alignment
LHC Computing Model (Cloud)
[Diagram: the LHC computing "cloud". The CERN LHC Computing Centre is linked to Tier-1 regional centres (UK, France, Italy, NL, Germany, USA: FermiLab and Brookhaven), which in turn serve Tier-2 centres (labs and universities) and physics-department desktops.]
Implications of Cloud Model
• Internal: need cost sharing between global regions within the collaboration
• External (on Grid services): need authentication/accounting/priority on the basis of experiment/region/team/local region/user (see the sketch below)
• Note: the NW believes this is a good model for Tier-2 resources as well
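As a rough illustration of what such a policy could look like in practice, the sketch below models quota and priority records keyed by experiment, region and team; all class names, fields and numbers are invented for illustration and are not part of any DataGrid interface.

```python
# Hypothetical sketch only: the kind of policy record a Grid service would
# need to authenticate, account and prioritise work by experiment, region,
# team and user.  Names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class JobCredential:
    experiment: str      # e.g. "ATLAS"
    region: str          # e.g. "UK"
    team: str            # e.g. "HLT-studies"
    local_region: str    # e.g. "NorthWest"
    user: str            # certificate subject of the submitter

@dataclass
class QuotaPolicy:
    cpu_hours: float     # remaining CPU allocation for this scope
    storage_gb: float    # remaining storage allocation
    priority: int        # scheduling priority (higher = sooner)

# Policies are resolved from the most specific scope to the least specific.
policies = {
    ("ATLAS", "UK", "HLT-studies"): QuotaPolicy(50_000, 2_000, priority=8),
    ("ATLAS", "UK"):                QuotaPolicy(200_000, 8_000, priority=5),
    ("ATLAS",):                     QuotaPolicy(1_000_000, 40_000, priority=3),
}

def resolve_policy(cred: JobCredential) -> QuotaPolicy:
    """Return the most specific quota/priority policy matching the credential."""
    for scope in [(cred.experiment, cred.region, cred.team),
                  (cred.experiment, cred.region),
                  (cred.experiment,)]:
        if scope in policies:
            return policies[scope]
    raise PermissionError("no matching allocation for this credential")

print(resolve_policy(JobCredential("ATLAS", "UK", "HLT-studies", "NorthWest", "rwl.jones")))
```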
ATLAS Software
• Late in moving to OO, as the physics TDR etc. were given a high priority
• Generation and reconstruction are now done in C++/OO within the Athena framework
• Detector simulation is still in transition to OO/C++/Geant4; DC1 will still use G3
• The Athena framework is common with LHCb's Gaudi
Simulation software for DC1
[Diagram: the DC1 simulation chain]
• Particle-level simulation (ATHENA GeneratorModules, C++, Linux): Pythia6 plus code dedicated to B-physics; PYJETS converted to HepMC; EvtGen (BaBar package) later. Output: HepMC events.
• Fast detector simulation (ATHENA): Atlfast++ reads the HepMC events and produces ntuples.
• Full detector simulation: Dice (slug + Geant3, Fortran), producing GENZ+KINE banks in ZEBRA.
• Reconstruction (C++): reads GENZ+KINE, converts to HepMC and produces ntuples.
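Purely as a reading aid, the sketch below mirrors the data flow of this chain in schematic Python; the function names are placeholders and do not correspond to the real Athena, Dice or Atlfast++ entry points.

```python
# Schematic sketch of the DC1 chain above; all names are stand-ins.

def generate_events(n_events):
    """Particle-level simulation: Pythia6 (+ B-physics code), PYJETS -> HepMC."""
    return [f"HepMC event {i}" for i in range(n_events)]

def fast_simulation(hepmc_events):
    """Atlfast++ reads HepMC and produces ntuples."""
    return [("atlfast_ntuple", ev) for ev in hepmc_events]

def full_simulation(hepmc_events):
    """Dice (slug + Geant3) produces GENZ+KINE banks in ZEBRA format."""
    return [("genz_kine_bank", ev) for ev in hepmc_events]

def reconstruction(zebra_banks):
    """C++ reconstruction reads GENZ+KINE, converts to HepMC, produces ntuples."""
    return [("reco_ntuple", bank) for bank in zebra_banks]

if __name__ == "__main__":
    events = generate_events(10)
    fast_ntuples = fast_simulation(events)                    # fast path
    reco_ntuples = reconstruction(full_simulation(events))    # full path
    print(len(fast_ntuples), len(reco_ntuples))
```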
Requirement Capture
• Extensive use-case studies: "ATLAS Grid Use Cases and Requirements", 15/X/01
• Many more could be developed, especially in the monitoring areas
• Short-term use cases centred on immediate MC production needs
• Obvious overlaps with LHCb – joint projects
• Three main projects defined: "Proposed ATLAS UK Grid Projects", 26/X/01
Grid User interface for Athena
• Completely common project with LHCb (the sequence is sketched in code below):
  - Obtains resource estimates and applies quota and security policies
  - Queries the installation tools → correct software installed? Install if not
  - Job submission guided by the resource broker
  - Run-time monitoring and job deletion
  - Output to MSS and bookkeeping update
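A minimal sketch of that submission sequence, with trivial stand-in implementations; none of the names below correspond to the real Ganga or EU DataGrid interfaces.

```python
# Hypothetical sketch of the Grid submission sequence for an Athena job.
# Every method is a placeholder so the script runs end to end.

class GridSession:
    def estimate_resources(self, job):            return {"cpu_hours": 10, "disk_gb": 2}
    def check_quota_and_security(self, user, est): pass          # would enforce policies
    def choose_site(self, est):                   return "ral.ac.uk"  # via resource broker
    def release_installed(self, site, rel):       return False        # query installation tools
    def install_release(self, site, rel):         print(f"installing {rel} at {site}")
    def submit(self, site, job):                  return "job-0001"
    def monitor(self, job_id):                    return "done"       # run-time monitoring
    def delete_job(self, job_id):                 print(f"deleting {job_id}")
    def copy_to_mss(self, job_id):                print(f"{job_id}: output -> MSS")
    def update_bookkeeping(self, job_id):         print(f"{job_id}: bookkeeping updated")

def submit_athena_job(session, job, user, release="ATLAS-DC1"):
    est = session.estimate_resources(job)                 # 1. resource estimate
    session.check_quota_and_security(user, est)           #    quota + security policy
    site = session.choose_site(est)                       # 2. resource broker picks a site
    if not session.release_installed(site, release):      # 3. correct software installed?
        session.install_release(site, release)            #    install if not
    job_id = session.submit(site, job)                    # 4. submit the job
    if session.monitor(job_id) == "failed":               # 5. monitor; delete on failure
        session.delete_job(job_id)
        return None
    session.copy_to_mss(job_id)                           # 6. output to MSS
    session.update_bookkeeping(job_id)                    #    and bookkeeping update
    return job_id

print(submit_athena_job(GridSession(), job={"events": 1000}, user="rwl.jones"))
```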
Installation Tools
• Tools to automatically generate installation kits, deploy them using Grid tools and install them at remote sites via a Grid job (sketched below)
• Should be integrated with a remote autodetection service for installed software
• Initial versions should cope with pre-built libraries and executables
• Should later deploy the development environment
• ATLAS and LHCb build environments are converging on CMT – some commonality here
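A sketch of the install-on-demand behaviour such tools imply is given below; the kit URL, tag file and directory layout are invented for illustration, and the real kits would be built from the CMT-managed release.

```python
# Hypothetical sketch of installing a pre-built kit at a remote site when the
# autodetection step finds nothing.  Paths and the URL are placeholders.
import os
import subprocess
import urllib.request

KIT_URL_TEMPLATE = "http://example.org/atlas-kits/{release}.tar.gz"   # placeholder URL
INSTALL_ROOT = os.path.expanduser("~/atlas-sw")

def release_installed(release: str) -> bool:
    """Autodetection stand-in: look for a tag file left by a previous install."""
    return os.path.exists(os.path.join(INSTALL_ROOT, release, ".installed"))

def install_release(release: str) -> None:
    """Fetch a pre-built kit (libraries and executables) and unpack it."""
    target = os.path.join(INSTALL_ROOT, release)
    os.makedirs(target, exist_ok=True)
    kit = os.path.join(target, "kit.tar.gz")
    urllib.request.urlretrieve(KIT_URL_TEMPLATE.format(release=release), kit)
    subprocess.run(["tar", "xzf", kit, "-C", target], check=True)
    open(os.path.join(target, ".installed"), "w").close()   # mark for autodetection

def ensure_release(release: str) -> None:
    """What a Grid install job would do on arrival at a remote site."""
    if not release_installed(release):
        install_release(release)
```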
MC Production System
• For DC1, will use the existing MC production system (G3), integrated with the M9 tools
• (Aside: M9/WP8 validation and DC kit development proceed in parallel)
• Decomposition of the MC system into components: Monte Carlo job submission, bookkeeping services, metadata catalogue services, monitoring and quality-control tools (see the component sketch below)
• Bookkeeping and data-management projects are already ongoing – will work in close collaboration; good link with US projects
• Close link with the Ganga developments
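To make the decomposition concrete, the sketch below writes the components as minimal Python interfaces; the method names and signatures are illustrative only, not the interfaces of Magda or any other existing tool.

```python
# Hypothetical component interfaces for the MC production system decomposition.
from abc import ABC, abstractmethod

class JobSubmission(ABC):
    @abstractmethod
    def submit(self, job_script: str, site: str) -> str: ...          # returns a job id

class BookkeepingService(ABC):
    @abstractmethod
    def record(self, job_id: str, status: str, outputs: list) -> None: ...

class MetadataCatalogue(ABC):
    @abstractmethod
    def register(self, logical_name: str, attributes: dict) -> None: ...
    @abstractmethod
    def query(self, selection: dict) -> list: ...

class Monitoring(ABC):
    @abstractmethod
    def status(self, job_id: str) -> str: ...

class QualityControl(ABC):
    @abstractmethod
    def validate(self, output_file: str) -> bool: ...                 # pass/fail checks
```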
The intended production workflow (sketched in code below):
• Allow regional management of large productions
• Job script and steering are generated
• Remote installation as required
• Production site chosen by the resource broker
• Generate events and store them locally
• Write the log to the web
• Copy data to the local/regional store through the interface with Magda (data management)
• Copy data from local storage to the remote MSS
• Update the bookkeeping database
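A self-contained sketch of this step sequence, with every service call replaced by a trivial stand-in, is given below; Magda, the resource broker and the bookkeeping database would of course be real services.

```python
# Hypothetical end-to-end sketch of the production steps listed above.

def run_production(region: str, n_events: int, release: str = "ATLAS-DC1") -> None:
    site = "lancs.gridpp.ac.uk"                          # chosen by the resource broker
    print(f"[{region}] job script and steering generated for {n_events} events")
    print(f"[{site}] remote installation of {release} if required")

    # generate events and store them locally (placeholder file names)
    local_files = [f"{site}:/data/evts.{i:03d}.zebra" for i in range(3)]
    print(f"[{site}] log written to the production web page")

    for f in local_files:
        print(f"Magda copy: {f} -> regional store")      # local/regional store via Magda
        print(f"copy to MSS: {f} -> remote mass store")
    print("bookkeeping database updated")

run_production(region="UK", n_events=100000)
```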
Work Area    PMB Allocation (FTE)   Previously Allocated (FTE)   Total Allocation (FTE)
ATLAS/LHCb   2.0                    0.0                          2.0
ATLAS        1.0                    1.5                          2.5
LHCb         1.0                    1.0                          2.0

This will just allow us to cover the three projects.
Additional manpower must be found for monitoring tasks, testing the computing model in DC2, and the simple running of the Data Challenges.
WP8 M9 Validation
• WP8 M9 validation is now beginning
• Glasgow and Lancaster (and RAL?) are involved in the ATLAS M9 validation
• The validation exercises the tools using the ATLAS kit
  - The software used is behind the current version
  - This is likely to be the case in all future tests (it decouples software changes from tool tests)
• A previous test of MC production using Grid tools was a success
• DC1 validation (essentially of the ATLAS code): Glasgow and Lancaster, with Cambridge to contribute (Lancaster is working on tests of standard generation and reconstruction quantities to be deployed as part of the kit; see the sketch below)
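As an illustration of the kind of check such a kit-level test might perform, the sketch below compares a few generation/reconstruction quantities against reference values; the quantities, numbers and tolerances are invented placeholders, not the actual test suite.

```python
# Hypothetical validation check: compare standard generation/reconstruction
# quantities from a test run against reference values shipped with the kit.

REFERENCE = {
    # quantity            (reference value, relative tolerance)
    "mean_charged_mult":   (55.0, 0.05),
    "mean_jet_pt_gev":     (42.0, 0.05),
    "reco_track_eff":      (0.93, 0.02),
}

def validate(measured: dict) -> bool:
    """Return True if every measured quantity agrees with its reference."""
    ok = True
    for name, (ref, tol) in REFERENCE.items():
        value = measured.get(name)
        if value is None or abs(value - ref) > tol * abs(ref):
            print(f"FAIL {name}: got {value}, expected {ref} +/- {tol*100:.0f}%")
            ok = False
        else:
            print(f"OK   {name}: {value}")
    return ok

# Example: values as they might come out of a test generation/reconstruction run
validate({"mean_charged_mult": 54.2, "mean_jet_pt_gev": 41.5, "reco_track_eff": 0.94})
```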