The ATLAS Collaboration
A.J. Lankford
ATLAS Deputy Spokesperson
University of California, Irvine
Outline:
• Collaboration, organization, membership
• LHC and ATLAS schedule
• Focus of current activities
Many slides are drawn from recent ATLAS Plenary
presentations, particularly by F. Gianotti & S. Myers.
SLUO LHC Workshop, July 16, 2009
ATLAS Collaboration (Status April 2009)
37 Countries
169 Institutions
2815 Scientific participants total (1873 with a PhD, for M&O share)
Albany, Alberta, NIKHEF Amsterdam, Ankara, LAPP Annecy, Argonne NL, Arizona, UT Arlington, Athens, NTU Athens, Baku,
IFAE Barcelona, Belgrade, Bergen, Berkeley LBL and UC, HU Berlin, Bern, Birmingham, UAN Bogota, Bologna, Bonn, Boston,
Brandeis, Brasil Cluster, Bratislava/SAS Kosice, Brookhaven NL, Buenos Aires, Bucharest, Cambridge, Carleton, CERN, Chinese
Cluster, Chicago, Chile, Clermont-Ferrand, Columbia, NBI Copenhagen, Cosenza, AGH UST Cracow, IFJ PAN Cracow,
UT Dallas, DESY, Dortmund, TU Dresden, JINR Dubna, Duke, Frascati, Freiburg, Geneva, Genoa, Giessen, Glasgow, Göttingen,
LPSC Grenoble, Technion Haifa, Hampton, Harvard, Heidelberg, Hiroshima, Hiroshima IT, Indiana, Innsbruck, Iowa SU, Irvine UC,
Istanbul Bogazici, KEK, Kobe, Kyoto, Kyoto UE, Lancaster, UN La Plata, Lecce, Lisbon LIP, Liverpool, Ljubljana, QMW London,
RHBNC London, UC London, Lund, UA Madrid, Mainz, Manchester, CPPM Marseille, Massachusetts, MIT, Melbourne, Michigan,
Michigan SU, Milano, Minsk NAS, Minsk NCPHEP, Montreal, McGill Montreal, RUPHE Morocco, FIAN Moscow, ITEP Moscow,
MEPhI Moscow, MSU Moscow, Munich LMU, MPI Munich, Nagasaki IAS, Nagoya, Naples, New Mexico, New York, Nijmegen, BINP
Novosibirsk, Ohio SU, Okayama, Oklahoma, Oklahoma SU, Olomouc, Oregon, LAL Orsay, Osaka, Oslo, Oxford, Paris VI and VII,
Pavia, Pennsylvania, Pisa, Pittsburgh, CAS Prague, CU Prague, TU Prague, IHEP Protvino, Regina, Ritsumeikan, Rome I, Rome II,
Rome III, Rutherford Appleton Laboratory, DAPNIA Saclay, Santa Cruz UC, Sheffield, Shinshu, Siegen, Simon Fraser Burnaby,
SLAC, Southern Methodist Dallas, NPI Petersburg, Stockholm, KTH Stockholm, Stony Brook, Sydney, AS Taipei, Tbilisi, Tel Aviv,
Thessaloniki, Tokyo ICEPP, Tokyo MU, Toronto, TRIUMF, Tsukuba, Tufts, Udine/ICTP, Uppsala, Urbana UI, Valencia, UBC
Vancouver, Victoria, Washington, Weizmann Rehovot, FH Wiener Neustadt, Wisconsin, Wuppertal, Würzburg, Yale, Yerevan
Fabiola Gianotti, RRB, 28-04-2009
ATLAS Projects & Activities
• Activities of ATLAS members from the U.S. are embedded in the projects and activities of ATLAS.
• 5 ATLAS Detector Projects:
  – Inner Detector (Pixels, SCT, TRT)
  – Liquid Argon Calorimeter
  – Tile Calorimeter
  – Muon Instrumentation (RPC, TGC, MDT, CSC)
  – Trigger & Data Acquisition
• 5 ATLAS “horizontal” Activities:
  – Detector Operation
  – Trigger
  – Software & Computing
  – Data Preparation
  – Physics
• Upgrade
• U.S. contributions are well integrated into ATLAS.
  – US ATLAS Management works closely with ATLAS Management to set priorities and to make U.S. contributions maximally effective in the areas of detector M&O, software and computing, and now upgrades. This cooperation is greatly appreciated by ATLAS.
Note: upgrade activities and organization are not shown here.
• CB (Collaboration Board), with Publications Committee and Speakers Committee
• ATLAS management: collaboration management, experiment execution, strategy, publications, resources, upgrades, etc.
• Technical Coordination
• Executive Board
• Detector Operation (Run Coordinator): detector operation during data taking, online data quality, …
• Trigger (Trigger Coordinator): trigger performance, menu tables, new triggers, …
• Computing (Computing Coordinator): software infrastructure, world-wide computing operation
• Data Preparation (Data Preparation Coordinator): offline data quality, calibration, alignment, …
• Physics (Physics Coordinator): optimization of algorithms for physics objects, physics channels
• Detector systems: Inner Detector, Liquid-Argon, Tiles, Muons, TDAQ
5 detector systems, 5 “horizontal” experiment-wide activities, plus upgrade (not shown here)
 Detector Project Leaders and Activity Coordinators: 2-year term. Each year a Deputy Activity Coordinator is appointed, who becomes Coordinator one year later for one year (this staggering mechanism ensures continuity).
 Experiment’s execution reviewed monthly in the Executive Board: 1.5-day meeting (one day open to the full Collaboration followed by a half day closed).
Fabiola Gianotti, ATLAS RRB, 28-04-2009

ATLAS Organization (July 2009)
• Collaboration Board (Chair: K. Jon-And, Deputy: G. Herten), with CB Chair Advisory Group
• ATLAS Plenary Meeting
• Resources Review Board
• Spokesperson: F. Gianotti (Deputies: A.J. Lankford and D. Charlton)
• Technical Coordinator: M. Nessi
• Resources Coordinator: M. Nordberg
• Executive Board; additional members: T. Kobayashi, M. Tuts, A. Zaitsev; P. Jenni ex-officio for 6 months as former Spokesperson
• Detector projects: Inner Detector (P. Wells), LAr Calorimeter (I. Wingerter-Seez), Tile Calorimeter (A. Henriques), Muon Instrumentation (L. Pontecorvo), Trigger/DAQ (C. Bee)
• Commissioning/Run Coordinator (C. Clément, dep. B. Gorini)
• Trigger Coordination (N. Ellis, dep. X. Wu; next T. Wengler, dep. S. Rajagopalan)
• Computing Coordination (D. Barberis, dep. K. Bos)
• Data Prep. Coordination (C. Guyot, dep. A. Hoecker; next B. Heinemann)
• Physics Coordination (T. LeCompte, dep. A. Nisati)
• Upgrade SG Coordinator (N. Hessey)
• PubComm Coordinator (J. Pilcher)
ATLAS Individual Membership (1/2)
For an individual at an existing ATLAS institution
• ATLAS membership is open to members of ATLAS institutions.
– Requires approval by your institution's ATLAS "team leader".
– See ATLAS home page for CERN / ATLAS registration information.
• ATLAS authorship requires qualification (recently revised).
– Authorship privileges require an institutional commitment to:
• Sharing in shifts and other operations tasks,
• Sharing of maintenance & operations expenses.
– Authorship privileges require individual qualification:
• Obtaining qualification:
– ATLAS membership for at least one year,
– Not to be an author of another major LHC collaboration,
– At least 80 days and at least 50% research time during qualification year on
ATLAS technical activities.
• Continuing qualification:
– Continued ATLAS membership,
– At least 50% research time on ATLAS,
– Not to be an author of another major LHC collaboration.
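The individual qualification criteria above can be read as a simple yes/no check. The sketch below is only an illustrative paraphrase (the field names are assumptions, not ATLAS software); the Authorship Policy document cited on the next slide remains the authoritative statement of the rules.

```python
# Illustrative paraphrase of the individual qualification criteria listed above.
# Field and function names are assumptions; the Authorship Policy is authoritative.

from dataclasses import dataclass


@dataclass
class Member:
    months_in_atlas: int            # duration of ATLAS membership
    author_on_other_lhc_expt: bool  # author of another major LHC collaboration?
    qualification_days: int         # days spent on ATLAS technical activities
    atlas_research_fraction: float  # fraction of research time spent on ATLAS


def qualifies_for_authorship(m: Member) -> bool:
    """True if the member meets the (paraphrased) qualification criteria."""
    return (m.months_in_atlas >= 12
            and not m.author_on_other_lhc_expt
            and m.qualification_days >= 80
            and m.atlas_research_fraction >= 0.50)


# Example: member for 14 months, 95 qualification days, 60% research time on ATLAS
print(qualifies_for_authorship(Member(14, False, 95, 0.60)))  # True
```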
ATLAS Individual Membership (2/2)
For an individual at an existing ATLAS institution (cont’d.)
– Policies exist concerning authorship of former ATLAS members and other exceptional circumstances.
• See ATLAS Authorship Policy:
http://atlas.web.cern.ch/Atlas/private/ATLAS_CB/CB_Approved_Documents/
A60_AUTHOR_policy_7%201.pdf
– ATLAS technical work is described in the appendix of the above
document.
– Lists of high priority qualification tasks are maintained on Twiki:
https://twiki.cern.ch/twiki/bin/view/AtlasProtected/AuthorShipCommittee
• Currently contains priority tasks in Activities. Projects to be added soon.
• One may need help from an ATLAS member due to web protection.
• In summary: Contact your ATLAS team leader.
• Individuals not at existing ATLAS institutions become ATLAS members by
affiliating through a member institution.
• See subsequent slides regarding ATLAS institutional membership.
ATLAS Institutional Membership (1/2)
• ATLAS welcomes new institutions that are interested in and
capable of substantive contributions to the ATLAS research
program.
• Procedural overview:
– An expression of interest is prepared in consultation with the
Spokesperson.
– The expression of interest is presented by the Spokesperson to the
Collaboration Board at a CB meeting.
– Membership is decided by Collaboration Board vote at a subsequent CB
meeting.
– This process typically takes 0.5-1 year preceded by a period of contacts
and initial informal involvement.
• See following slide for alternative procedures.
ATLAS Institutional Membership (2/2)
• Two alternatives to the typical procedure:
  1. Association with an existing ATLAS institution.
     • A new or small institute may join ATLAS in association with an existing ATLAS institution.
       – Procedure is rather informal, and fully under the responsibility of the hosting ATLAS institution.
     • Such an association may be permanent or temporary
       – (e.g. while ramping up and preparing an EoI for full membership)
     • Recent examples:
       – UT Dallas: was associated with BNL; now UTD is an institution
       – U of Iowa: now associated with SLAC; EoI presented to the CB at the most recent meeting; decision in October
  2. Clusters of institutes: small institutes may join ATLAS as a cluster.
     – Together they form an ATLAS "institution".
• In summary: Contact both:
  – ATLAS Spokesperson, Fabiola Gianotti
  – US ATLAS Program Manager, M. Tuts, & Institute Board Chair, A. Goshaw
ATLAS Operation Task Sharing
• ATLAS operation, from detector to data preparation and world-wide computing, requires 600-750 FTE (of ~2800 scientists).
  – Fairly shared across the Collaboration (see the sketch below)
    • Proportional to the number of authors
    • Students are weighted 0.75
    • New institutions contribute more in the first two years (x 1.50, x 1.25)
    • ~60% of these FTE are needed at CERN
      – Effort to reduce this fraction with time.
    • In 2009, ~12% (21,000) are shifts in the Control Room or on-call expert shifts
      – Effort to increase remote monitoring with time
• Allocations are made separately for shifts and other expert tasks. Institutions are expected to cover both categories of activity.
• Required FTE and contributions are updated & reviewed yearly.
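To make the sharing rule concrete, here is a minimal sketch of how one institution's operation-task share could be computed under the weights quoted above. The function names, the 700 FTE normalization (a mid-range value within the quoted 600-750), and the example numbers are assumptions for illustration only, not an official ATLAS accounting tool.

```python
# Minimal sketch of the operation-task sharing rule described above:
# shares proportional to weighted author count, students weighted 0.75,
# new institutions weighted x1.50 in their first year and x1.25 in their
# second. Names and example numbers are illustrative assumptions.

def institution_weight(n_staff_authors: int, n_student_authors: int,
                       years_in_atlas: int) -> float:
    """Weighted author count of one institution."""
    weight = n_staff_authors + 0.75 * n_student_authors
    if years_in_atlas <= 1:
        weight *= 1.50   # first-year newcomer factor
    elif years_in_atlas == 2:
        weight *= 1.25   # second-year newcomer factor
    return weight


def required_fte(weight: float, collaboration_weight: float,
                 total_required_fte: float = 700.0) -> float:
    """Operation-task FTE owed by an institution (700 FTE assumed,
    a mid-range value within the 600-750 FTE quoted above)."""
    return total_required_fte * weight / collaboration_weight


# Example: a hypothetical 3rd-year institution with 10 staff and 6 student
# authors, in a collaboration whose summed weight is assumed to be 2300.
w = institution_weight(10, 6, years_in_atlas=3)
print(f"weight = {w:.2f}, share ~ {required_fte(w, 2300.0):.1f} FTE")
```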
LHC and ATLAS Schedule
• Repair of the damaged portions of the LHC has gone well.
• Steps have been taken to avoid such catastrophic events in
the future.
– Including a new quench protection system (QPS) completed and
recently tested.
• Work now focuses on problems discovered in splices in the
copper bus bar in which the superconductor is embedded.
Summary (from S. Myers at ATLAS Plenary, 6 Jul 09)
• The enhanced quality assurance introduced during the sector 3-4 repair has revealed new facts concerning the copper bus bar in which the superconductor is embedded.
• Tests have demonstrated that the process of soldering the superconductor in the interconnecting high-current splices can cause discontinuity of the copper part of the busbars and voids which prevent contact between the superconducting cable and the copper.
• Danger in case of a quench.
• Studies are now going on to allow:
  • finding a safe limit for the measured joint resistance as a function of the current in the magnet circuits (max energy in the machine)
  • faster discharge of the energy from the circuits
Strategy for Start-Up
• ~3 weeks delay with respect to baseline due to:
  • R-long and R-16 measurements
  • Splice repairs
  • Delay in cool-down of S12 and repairs of splices
  • (Re-warming of S45)
• BUT the story of the copper stabilizers goes on
• Need to measure the remaining sectors (S23, S78, and S81) at 80 K
• Need to understand the extrapolation of measurements at 80 K to 300 K
  – Measurement of the variation of RRR with temperature
• Need to gain confidence in the simulations for safe current
  – Compare different simulation models/codes
from S. Myers at ATLAS Plenary, 6 Jul 09
Strategy
• Measure S45 at 300 K (DONE)
  – will be redone in W28 (better temperature stability)
• Measure the remaining 3 sectors (at 80 K); the last one (S81) presently foreseen at the beginning of August
• Measure the variation of RRR with temperature during cool-down
• Update simulations (3 simulation models) of safe current vs resistance of splices
  – Decay times of RB/RQ circuits following a quench (quench all RQs?)
• Determine which splices would need to be repaired as a function of safe current (beam energy)
• Evaluate the time needed to heat up to 300 K and repair these splices
• Prepare scenarios of safe operating energy vs date of first beams
• Discuss with the Directorate and experiments and decide on the best scenario.
  – Preferred scenario: highest possible energy associated with the earliest date
    • (what is the maximum energy with no repairs needed?)
• At start-up, confirm all splice resistance measurements at cold using the new QPS
from S. Myers at ATLAS Plenary, 6 Jul 09
Simulations: Maximum safe currents vs copper joint resistance
[Figure: simulated maximum safe current (A) versus additional joint resistance R_additional (microohm, measured warm at 300 K), for the RB and RQ circuits with normal and fast energy discharge (RB: tau = 100 s / 68 s; RQ: tau = 30 s / 15 s). Conditions: adiabatic, without QPS delay, RRR = 240, cable without bonding at one bus extremity, no contact between bus stabiliser and joint stabiliser. Reference lines mark the currents corresponding to 4 TeV and 5 TeV beam energy.]
Arjan Verweij, TE-MPE, 9 June 2009
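For orientation only, a maximum safe current can be translated into an approximate beam energy using the roughly linear relation between main-dipole current and beam energy. The sketch below assumes a nominal figure of about 11850 A at 7 TeV per beam; this conversion is an illustrative assumption and is not part of the simulation quoted above.

```python
# Minimal sketch: convert a maximum safe current into an approximate beam
# energy, assuming the roughly linear relation between LHC main-dipole
# current and beam energy (~11850 A at 7 TeV per beam, an assumed nominal
# figure, not taken from the simulation above).

NOMINAL_CURRENT_A = 11850.0   # assumed main-dipole current at 7 TeV per beam
NOMINAL_ENERGY_TEV = 7.0


def approx_beam_energy_tev(max_safe_current_a: float) -> float:
    """Approximate beam energy (TeV) reachable at a given safe current (A)."""
    return NOMINAL_ENERGY_TEV * max_safe_current_a / NOMINAL_CURRENT_A


# Example: currents in the range shown on the plot above
for current in (6000.0, 8000.0, 10000.0):
    print(f"{current:7.0f} A  ->  ~{approx_beam_energy_tev(current):.1f} TeV per beam")
```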
Latest News – LHC Vacuum Leaks
• Vacuum leaks found in two “cold” sectors (~80 K)
– During preparation for electrical tests of Cu bus bar splices
– Both at one end of sector, where electrical feedbox joins final magnet.
• Leak from helium circuit to insulating vacuum
– No impact on beam pipe vacuum
• Repair requires partial warm-up
– Warm up of end sub-sector to room temperature
– Adjacent sub-sector “floats” in temperature
– Remainder of sector kept at 80 K
• “It is now foreseen that the LHC will be closed up and ready
for beam injection by mid-November.”
– This delay does not affect the overall schedule strategy.
ATLAS position concerning running scenario (1)
 As discussed at the June Open and Closed EB
 Since the LHC schedule is still uncertain, this is our present position
The following 3 scenarios were considered to be possible options (see S. Myers’s talk for updates) and were discussed in the EB:
❶ The machine can run safely at 2 x 4-5 TeV  start data taking in 2009, aiming at a long (~11 months) run (“Chamonix scenario”).
❷ The machine can run safely at 2 x ? TeV (where ? < 4 TeV)  start a “short” (few months?) run in 2009, then shut down sometime in 2010 to prepare all sectors for 5 TeV beam operation. Data taking at 2 x 5 TeV could start in the second half of 2010.
❸ Fix all bad splices to achieve 2 x 5 TeV operation, i.e. delay the LHC start-up to Feb/March 2010.
Scenario 1 is still the current plan and everybody hopes that this will become reality. Scenarios 2 and 3 are alternatives in case bad splices are found in the remaining three (cold) sectors. The exact energy in Scenario 2 will depend on the resistivity of the worst splices. Scenario 3 would likely give collisions at 2 x 5 TeV earlier than in Scenario 2 but would imply no collisions in 2009.
F.Gianotti, ATLAS week, 6/7/2009
ATLAS position concerning running scenario (2)
The EB reached (unanimously) the following conclusion:
 ATLAS would like to run as soon as possible at the highest possible energy afforded by safe operation of the machine.
 If operation at ≥ 4 TeV per beam is not possible in 2009 (because it would require warming up one or more of the cold sectors), we would like nonetheless to take data in 2009 at a lower collision energy (the highest E for safe LHC operation). In other words, ATLAS prefers Scenario 2 over Scenario 3.
 The duration of the first run should (mainly) depend on the beam energy. Indicatively, if the beam energy is below ~3 TeV, the first run should be relatively short (a couple of months?) and mainly be used to commission the experiment with physics data. Indeed, studies performed for the Chamonix workshop indicate that the LHC would not be competitive with the Tevatron in this case, and therefore it would be more convenient, after a short run, to shut down and reach the highest possible collision energy (10 TeV). If the energy is higher than ~3 TeV per beam, a longer run should be envisaged, as originally planned. Note: these are only indications; it’s premature to decide the run duration today, as it will depend on many other parameters than the beam energy (machine performance, detector status, etc.)
Main motivation: we need collision data as soon as possible to commission the experiment (detector, trigger, software, analysis procedures, …) and to perform first long-awaited physics measurements. We also have a lot of students (~800!) who need data to complete their theses.
F.Gianotti, ATLAS week, 6/7/2009
A useful resource for newcomers: Preparation for physics
December 2008: “CSC book” (CSC = Computing and Software Commissioning) released
Most recent evaluation of expected detector performance and physics potential based on present software (the Physics TDR used old Fortran software)
Huge effort of the community: ~2000 pages, a collection of ~80 notes
Very useful reference for studies with LHC data
Also exercised the internal review procedure in preparation for future physics papers
arXiv:0901.0512
Focus on Readiness
As ATLAS awaits first LHC colliding beams, its activity is focused on readiness.
– Detector readiness, including trigger, data quality, etc.
  • Shutdown activities, combined running with cosmics, etc.
– Data processing readiness, incl. computing infrastructure, software, calibration & alignment, etc.
– Physics readiness, including object definition
Some current, key activities concerning readiness:
– Cosmics analysis
– Analysis Model for the First Year Task Force
– Distributed analysis tests
– End-to-end planning & walk-throughs for first physics analyses
Detector status and shut-down activities
End of October 2008: ATLAS opened for shut-down activities
Examples of repairs, consolidation work, and achievements:
 Yearly maintenance of infrastructure (cryogenics, gas, ventilation, electrical power, etc.)
 Inner Detector: refurbishment of cooling system (compressors, distribution racks);
now running with 203 cooling loops out of 204.
 LAr: 58 LVPS refurbished; 1 dead HEC LVPS repaired
Dead FEB opto-transmitters (OTX) replaced; 6 died since (128 x 6 channels)
 Tilecal: 30 LVPS replaced (1 died since), 81 Super Drawers opened and refurbished
 Muon system: new rad-hard fibers in MDT wheels; gas leaks fixed; CSC new ROD
firmware being debugged; RPC gas, HV, LVL1 firmware, timing; some EE chambers installed
 Magnets: consolidation (control, vacuum,..); all operated at full current 25/6-30/6
Schedule:
 Cosmics slice weeks started mid April
 Two weeks of global cosmics running completed this morning: ~100M events recorded
 July-August: continue shut-down/debugging activities (EE installation, shielding,
complete ID cooling racks and install opto-transmitters, TDAQ tests, etc.)
 Start of global cosmics data taking delayed by 3 weeks to ~ 20 September
(after discussion with CERN/LHC Management)
F.Gianotti, ATLAS week, 6/7/2009
Cosmics (and single-beam) analysis:
 Effort ramping up
 Almost 300M events from 2008 run reprocessed twice and analyzed
 O(200) plots approved for talks at conferences
 Many notes in the pipeline, ~ 10 could become publications
(but more people needed so we can complete studies before first beams)
 Cosmics Analysis Coordinator (Christian Schmitt)
appointed to pull together efforts from various groups in a
coherent way (e.g. simulation strategy)
 Achieved level of detector understanding is far better
than expectations in many cases
Cosmics analysis is proving to be an effective step in commissioning ATLAS.
F.Gianotti, ATLAS week, 6/7/2009
Computing
 4 main operations according to the Computing Model:
  – First-pass processing of detector data at the CERN Tier-0 and data export to Tier-1s/Tier-2s
  – Data re-processing at Tier-1s using updated calibration constants
  – Production of Monte Carlo samples at Tier-1s and Tier-2s
  – (Distributed) physics analysis at Tier-2s and at more local facilities (Tier-3s)
 The actual Computing Model (CM) is much more complex: it includes data organization, placement and deletion strategy, disk space organization, database replication, bookkeeping, etc.
[Diagram: Computing Model data flow, showing first-pass reconstruction of raw data at Tier-0 and export, reprocessing at Tier-1, Event Summary Data and analysis objects (extracted by physics topic), simulation at Tier-1/2, analysis at Tier-2, and interactive physics analysis.]
 The CM and the above operations have been exercised and refined over the last years through functional tests and data challenges of increasing functionality, realism and size, including the recent STEP09 challenge (involving grid operations of ATLAS and the other LHC expts).
 Concern: distributed analysis not thoroughly tested yet (started a few months ago with robotic Hammer Cloud tests); now need “chaotic” massive access by real users.
The U.S. has been playing a lead role in tests of data access for analysis.
F.Gianotti, ATLAS week, 6/7/2009
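To summarize the data flow just described, the sketch below encodes the four main Computing Model operations as a simple lookup table; the dictionary layout and wording are illustrative assumptions, not ATLAS software or an official specification.

```python
# Minimal sketch encoding the four main Computing Model operations described
# above as a lookup table. Layout and field names are illustrative assumptions.

COMPUTING_MODEL_OPERATIONS = {
    "first-pass processing": {
        "site": "CERN Tier-0",
        "note": "process detector data and export results to Tier-1s/Tier-2s",
    },
    "reprocessing": {
        "site": "Tier-1s",
        "note": "re-process data using updated calibration constants",
    },
    "Monte Carlo production": {
        "site": "Tier-1s and Tier-2s",
        "note": "produce simulated samples",
    },
    "distributed analysis": {
        "site": "Tier-2s and local facilities (Tier-3s)",
        "note": "user analysis; the least thoroughly tested operation so far",
    },
}

for name, info in COMPUTING_MODEL_OPERATIONS.items():
    print(f"{name:24s} @ {info['site']}: {info['note']}")
```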
from J. Cochran
Status & Plans for Readiness Testing
Robotic test jobs have been running on some analysis queues since Fall 2008 (on all analysis queues since April 2009); these tests incorporate a handful of analysis job types from actual users.
Such robotic tests were a major component of the STEP09 tests in early June
  – combined computing challenge for the LHC experiments
  – ATLAS: reprocessing, data-dist, sim-prod, robotic user
These tests provide important benchmarks (and define our limitations)
  – for the most part the US cloud did well (efficiency ~94%, highest in ATLAS), thanks to extensive pretesting by Nurcan
Also need to test the system with actual users under battlefield conditions (will be much more chaotic than the robot tests).
In addition to analysis queue infrastructure, need to test user ability to configure jobs, run jobs, and transfer job output to a local work area.
Evolving Plan
from J. Cochran
Assume the Early Analysis Model: physics (sub)groups produce D3PDs on T2s; users pull D3PDs to their preferred interactive location (T3). (May need to adjust when the AMFY plan is released.)
What we need to test:
  – D3PD production on T2s [primarily (sub)group designates, also some users]
  – transfer of D3PDs from T2 to local work space [eventually for all users!!!]
In an ideal world, we should have properly mixed samples corresponding to the expected streams in both size & composition. Such samples do not exist; as an alternative, generate a large sample and make multiple copies.
Expect 1B events total for run 1; aiming for a 500M event test.
Generated a (fast sim) 100M event multijet filtered sample which contains appropriate amounts of tt, W/Z, & prompt γ (500M such events ~ 48 pb-1); a very challenging generation, many problems overcome.
Have made 5 copies (containers) of this sample; 2 copies sent to each US Tier-2. Users will be pointed to sets of 3 containers (roughly approximating a stream).
from J. Cochran
ATLAS has proposed a phased approach to such tests:
- study submission patterns of power users (learning where their jobs won’t run & why)
- then ask them to target specific T2s on specific days (sending multiple copies of their jobs)
- gradually opening up to more and more users (exercising more queues & more user disk space)
- (there is great sensitivity to not add significantly to user workload)
Start of such tests has not been uniform over the clouds (efforts primarily in the US, UK, & Germany)
In the US, as part of STEP09, asked 30 experienced users to run over the top-mixing sample (~1M evts) which had been replicated to all US T2s
  – running their own jobs
  – transferring their output to their local work area
  (the 100M evt container & pre-test container were not yet available)
Used pandamonitor to determine the most active (experienced) users; only 14 users were able to participate.
Job success efficiency (%):  AGLT2: 59*   MWT2: 80   NET2: 74   SLAC: 84   SWT2: 75
* one user made many attempts here before overcoming a user config issue
Info obtained by hand from the pandamonitor db
Much more metric info available – scripts are being developed to extract it (almost ready)
– work needed to more easily extract T2 → T3 transfer rates
from J. Cochran
Schedule
– 1st expert user is testing the 100M event container now
– once ok, we will pass it on to the US expert team (later this week)
  - they will run their own jobs on the 300M sample and transfer output to their local area
  - we will use new scripts to tabulate metrics
– expand to a much larger US group (late July or early August)
  - more users running T2 jobs and all users doing T2 → local area transfers
  - in particular, need to include ESD (pDPD), cosmic, and db-intensive jobs
  - possibly focus the effort into a 1-day blitz (late August)
– with guidance/blessing of physics coordination, expand to all ATLAS clouds
  - will need to transfer 2 copies of the 100M event container to each participating T2
  - will allow studies of cross-cloud analysis jobs (what we expect with real data)
  - panda monitoring will provide metrics for ganga users? (saved by panda-ganga integration?)
  - transfer tests will likely need to be tailored to each cloud (local area may be T2 or T1)
– likely need/want to repeat the full ATLAS exercise a few times
Analysis Model for the First Year – Task Force Mandate
From T. Wengler, ATLAS Open EB, June 2009
As listed on: https://twiki.cern.ch/twiki/bin/view/AtlasProtected/AnalysisModelFirstYear
• The Analysis Model for the First Year (AMFY) task is to condense the current analysis model, building on existing work and including any necessary updates identified through the work of the TF, into concise recipes on how to do commissioning/performance/physics analysis in the first year. In particular it will:
• Identify the data samples needed for the first year, and how they derive from each other:
  – How much raw data access is needed (centrally provided/sub-system solutions)
  – How many different outputs, and of what type, will the Tier-0 produce
  – Expected re-generation cycles for ESD/AOD/DPDs
  – Types of processing to take place (ESD->PerfDPD, ESD->AOD, AOD->AOD, AOD->DPD, etc.)
• Related to the items above, address the following points:
  – Are the Performance DPDs sufficient for all detector and trigger commissioning tasks (are changes to ESD needed)?
  – What is the procedure for physics analysis in the first year in terms of data samples and common tools used (down to and including PerfDnPDs and common D3PD generation tools), including both required and recommended items?
  – How much physics will be done based on Performance DPDs?
  – How will tag information be used?
  – Every part of our processing chain needs validation; how will it be done?
• Scrutinise our current ability to do distributed analysis as defined in the computing model
• Match the items above to available resources (CPU/disk space/Tier-0/1/2 capabilities, etc.)
Physics
Recent proposal: “Analysis readiness walk-throughs”
Goal: be ready to analyse first collision data fast and efficiently (lots of pressure from the scientific community, “competition” with CMS, …)
The process will be driven by Physics Coordination with support of and help from the EB.
 Consider the basic list of analyses to be performed with first data:
  -- minimum-bias
  -- jets
  -- inclusive leptons
  -- di-leptons
  -- etc.
 For each analysis, building on the huge amount of existing work:
  -- prioritize goals for the Winter 2010 conferences: define the results we want to produce minimally, while leaving the door open to more ambitious goals/ideas if time and people allow
  -- review the sequence of steps from detector, to calibration, trigger, data quality, reconstruction, MC simulation, … needed to achieve the planned physics results
  -- make sure all steps are addressed in the right order, are covered by enough people, and that links and interfaces between steps are in place (“vertical integration”)
The above information should be prepared, for each analysis, by a team of ~5 people (from systems, trigger, Data Preparation, Combined Performance and Physics WGs), with input from the whole community, and presented to dedicated open meetings (0.5-1 day per analysis).
A “review” group (including representatives from the EB, Physics Coordination and the community at large) will make recommendations and suggestions for improvements.
Time scale: start at the end of August with 1-2 “guinea pig” analyses.
F.Gianotti, ATLAS week, 6/7/2009
Some aspects requiring special efforts and attention in the coming months (a non-exhaustive list …)
 LHC schedule and our position for running scenarios (discussion with CERN and machine Management mid-August)
 Global cosmics runs: aim at smooth and “routine” data-taking of the full detector with the highest possible efficiency
 Detector evolution with time: plan for and develop back-up solutions for delicate components (LAr OTX, Tile LVPS, … etc.) for replacement during the next/future shut-down
 Cosmics analysis: learn as much as possible, finalize notes (and possibly publications) before beams
 Software: release consolidation, validation, robustness; technical performance (memory!)
 Finalize the Analysis Model from detector commissioning to physics plots
 Computing: stress-test distributed analysis at Tier-2s with massive participation of users
 Complete the simulation strategy: e.g. trigger; how to include corrections from first data (at all levels)
 Analysis readiness walk-throughs
 Upgrade (IBL, Phase 1 and 2): develop and consolidate strategies, plan for an Interim MoU
F.Gianotti, ATLAS week, 10/7/2009
Summary
• ATLAS is well prepared for first LHC colliding beams.
– We can efficiently record and process data now.
• Much can be done to prepare further for fast, efficient physics
analysis & results.
• Many areas exist for individual and institutional contributions.
• Best wishes to all the participants for a successful workshop.