Status of PRS/Muon Activities
D. Acosta
University of Florida
Work toward the “June” HLT milestones
US analysis environment
HLT Milestones
The June HLT milestones are:
Complete HLT selection for high-lumi scenario
 HLT results on B physics
 CPU analysis for high lumi selection
 Repeat on-line selection for the low-lumi scenario

Must have results in the DAQ TDR by September!
We don’t have these results yet, but the current status
and L1 results were reported at the June CMS Week
HLT Muon code had severe crashes, infinite loops, and
memory leaks that prevented collecting any statistics on our
HLT algorithms
 After a monumental debugging effort, the crashes were traced to
incorrect use of “ReferenceCounted” objects
 The user must never delete one, even after calling new!
(see the sketch after this list)
 In the L1 results, a rate spike appeared at η = 1.6
 New holes in the geometry?
 Problems were “fixed” for the ORCA_6_2_0 release
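As an aside, a minimal sketch of the pattern involved (a generic
intrusive reference count; this is not the actual ORCA/CommonDet class)
shows why an explicit delete is fatal:

    #include <iostream>

    // Objects manage their own lifetime through a reference count.
    class ReferenceCounted {
    public:
      ReferenceCounted() : count_(0) {}
      void addReference()    { ++count_; }
      void removeReference() { if (--count_ == 0) delete this; }
    protected:
      virtual ~ReferenceCounted() {}  // not meant to be deleted directly
    private:
      int count_;
    };

    class Surface : public ReferenceCounted {
    public:
      void use() const { std::cout << "surface in use\n"; }
    };

    int main() {
      Surface* s = new Surface;  // created with new by the user...
      s->addReference();         // ...but owned by the reference count
      s->use();
      s->removeReference();      // correct: the object deletes itself
      // delete s;               // WRONG: bypasses the count; combined
                                 //        with removeReference() this
                                 //        double-deletes and crashes
      return 0;
    }

In real code the add/removeReference calls are usually hidden inside a
smart-pointer handle, which is why an innocent-looking explicit delete
is so easy to get wrong.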

L1 CSC Rate Spike


Contributes ~1 kHz to the L1 rate
 Spike occurs in both η and φ!
 Region of the crack between barrel and endcap
 Traced to an ambiguity in the pT assignment
for low-pT muons (or punch-through)
 Fixed in the CSC Track-Finder (but it is not clear
why this is a problem only now)

Not out of the woods…
Tony reports that the Muon RootTreeMaker has a massive memory
leak (200–500 kB/event)

Analysis stopped at CERN (batch nodes were dying)
 But the muon HLT code alone was shown to have a leak of “only”
16 kB/event when released
 So is it because events have more occupancy with pile-up, or is it
because of the jet/tau/pixel code?
 Still under investigation
 At FNAL, I find a leak of 800 kB/event for Z, and it is luminosity
dependent (600 kB/event at 2×10^33)
 Nicola Amapane promises some results (a fix?) this evening
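For illustration only (this is not the actual RootTreeMaker code), the
classic per-event leak pattern would also explain the luminosity
dependence, since the leaked allocation grows with the event occupancy:

    #include <vector>

    struct Hit { float x, y, z; };

    // WRONG: a scratch buffer allocated every event and never freed.
    // The leak per event scales with the number of hits, i.e. with
    // pile-up, so it gets worse at higher luminosity.
    void processEvent(const std::vector<Hit>& hits) {
      std::vector<Hit>* scratch = new std::vector<Hit>(hits);
      // ... fill the Root tree from *scratch ...
      // missing: delete scratch;
    }

    // Fixed: a stack object ties the lifetime to the event scope, so
    // the memory is returned after every event.
    void processEventFixed(const std::vector<Hit>& hits) {
      std::vector<Hit> scratch(hits);
      // ... fill the Root tree from scratch ...
    }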
Moreover, the DT reconstruction is known to have some
deficiencies
So we have a two-fold plan:

Continue with the current Root Tree production at remote sites to get us
a set of baseline results for the HLT milestone
 We already had Pt1, Pt10, and Pt4 (low lumi) done before CERN
shut down
 Can Fermilab help? Run ~1000 short jobs on Pt4 if the leak is not fixed
 Push hard to get new HLT reconstruction code written to solve
remaining problems in time for September deadline
Status of New Reconstruction Code
Stefano Lacaprara has authored a new version of the DT
reconstruction code
Corrects some naïve handling of the DT hits, incorrect pulls,
new code organization, …
 It turns out the DT reconstruction must know the drift velocity to ~1%
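As a rough illustration of why 1% matters (assuming the nominal DT
drift velocity of about 55 μm/ns and a half-cell of about 21 mm), a 1%
velocity error integrated over a full drift gives

    δx ≈ (δv/v) × x_drift ≈ 0.01 × 21 mm ≈ 210 μm

which is already comparable to the single-hit resolution.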

This code has been examined (some bugs fixed) and
cleaned up by Bart Van de Vyver (and also by Norbert
and Nicola)
Aiming to get new results in August, hopefully with new
reconstruction code
Muon Analysis at Fermilab
Request made to get the Muon Federations copied from
CERN
Pt4 single-muon sample has the highest priority
 Shafqat predicts the copy will be done by Monday
 Pt1, Pt10, W, Z, and t-tbar to follow
 Z (on-peak and above) already available

Root Trees will be copied as well, when available
US users thus have at least one local choice for an
analysis center, in addition to CERN
The mechanism to obtain FNAL visitor IDs and computer
accounts remotely works well (thanks, Hans…)
When the Pt4 sample is ready, the PRS/Muon group is interested in
running a large number of RootTreeMaker jobs at FNAL

INFN is still trying to copy the Pt4 tracker digis (0.7 TB)
Florida Prototype Tier-2 Center
We currently host the Z samples in Florida,
but only for local accounts, I think, at the moment. Eventually they
should be accessible world-wide.

Limited by available disk space

Several TB of RAID ordered
Problematic analysis environment
Although the Production environment is working quite well
with DAR distributions, the analysis environment (where
users can compile and run jobs) is a little unstable
 Some difficulties building code in ORCA6, perhaps traced
to using RH6.2 vs. RH6.1 (loader version)
 We need a more standard way to set up the analysis environment
 I think INFN also had some initial difficulties getting
ORCA_6_2_0 installed and working

Should be solved once key people come back from
vacation
Side Note…
For better or worse, we are working with a complex set of
software for CMS

Definitely not easy for newcomers to contribute to development or
to debugging (or to create a DB)
 Case in point: how can a summer student plug a new L2 module
into ORCA?
 Many layers to ORCA software, difficult to navigate, little
documentation of “common” classes
 Sometimes counterintuitive rules must be followed
 Complexity probably partly intrinsic to ORCA/COBRA, and partly
due to inexperienced physicists working in this environment
That being the case, we MUST have professional tools for
development and debugging

Must be able to debug and profile the code, check for memory
leaks, corruption, etc.
This is standard practice for CDF, where the reliability of production
code has increased dramatically
 Requires analysis workstations with enough memory to handle
these tools
We should start defining a set of validation plots to show problems
early in production (one possible starting point is sketched below)
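As one possible starting point (the names, binning, and quantities are
illustrative only, not an agreed PRS/Muon standard), a small ROOT macro
along these lines could be run on every production sample and compared
release to release:

    #include "TFile.h"
    #include "TH1F.h"

    void muonValidation()
    {
      TFile out("muon_validation.root", "RECREATE");
      TH1F hHits("hHits", "DT rec hits per event", 100, 0., 500.);
      TH1F hPt  ("hPt",   "L2 muon pT (GeV)",      100, 0., 100.);
      // ... event loop: for each reconstructed event ...
      //   hHits.Fill(nDTRecHits);
      //   hPt.Fill(muonPt);
      out.Write();  // shifts in these plots flag problems early
      out.Close();
    }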