TileCalorimeter Software, Jet Reconstruction, and Jet Calibration in ATLAS

Frank Merritt
Ambreesh Gupta
Amir Farbin
Ed Frank
Adam Aurisano
Zhifong Wu
Rob Gardner
Richard Teuscher (UC)
Mark Oreglia (UC)
Matt Woods (UC)
Peter Loch (Arizona)
Sasha Solodkov (Protvino)
Version 3.0
F. Merritt
NSF Review, 14-Jan-2005
Outline
• TileCal:
– Digitization of Tile signals
– Offline Optimal Filtering
– Calorimeter Objects: coordination with LAr
• JetEtMiss work:
– First jet-finding algorithms
– Ringberg Calorimeter workshop
– Navigation of calorimeter objects
– Calibration using samples: comparisons
– Current status and work in progress
– Test-beam work using Athena
• U.S.-Atlas Grid Activity
– Rob Gardner: Grid3
– Tier-2 Proposal
• Towards Physics Analysis of Atlas
– MidWest Physics Analysis Group
– Susy work
– North American Physics Workshop
Early Involvement in ATLAS/Athena
• Roles in Athena development and ATLAS reconstruction
– T. LeCompte (ANL): ATLAS-wide Tile data-base coordinator
– F. Merritt (U.C.): ATLAS-wide Tile reconstruction coordinator
• Tile software biweekly telephone conferences:
– Wednesday 10:00 am CST, every other week (organized by
Chicago/Argonne)
• Major Chicago/ANL Tile involvement in JetEtMiss group.
Biweekly telephone conferences (M. Bosman, convener):
– Wednesday 10:00 am CST, every other week.
– Minutes and agenda on web (JetEtMiss web page).
• Good working relationship with colleagues in Atlas
– Primarily in Tile (esp. ANL) and JetEtMiss
– Also with BNL (LAr Calorimeter), Arizona (HEC,FD),
and with colleagues in Spain, Italy, and Russia
Tile Cells and L1 Trigger Towers
(total of 9856 signals in 4x64 modules)
Chicago Contributions to Tile Reconstruction Software

• Development of new data classes corresponding to the flow of data through the electronics (EF, FM, AS)
  – Includes objects corresponding to PMTs, cells, towers
  – Also container objects, data structures, mapping
  – Also essential for providing mapping, data structures, resolution effects, and finally reconstructed cell and tower energies in the Atlas environment
• Development of Optimal Filtering code for the high-rate Atlas environment (RT, FM, AA)
  – Starting with code developed by R. Teuscher for the Tile test-beam and electronics
  – Uses the bunch structure of the beam to extract the energy deposition in each beam crossing
  – Only the in-time deposition is passed on for inclusion in cell energies
• Calorimeter Navigation package (EF, AG)
  – Allows decomposition of a Jet into cells, towers, clusters
  – Allows access to characteristics of constituents, e.g. cell layer, type, status (for Tile and LAr): allows reweighting for calibration studies
  – Separates the navigable structure (representational domain) from behavior (OO domain)
• Interface to Conditions DataBase (EF, TLC, FM)
  – TileInfo class provides access to constants through a single interface (many accessor methods); see the sketch below
  – Constants are set at initialization and stored in the Transient Detector Store (TDS)
  – Parameters will be automatically updated when the time interval expires
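As a rough illustration of the single-interface idea behind TileInfo, the sketch below shows how reconstruction code might pull calibration constants from one accessor object held in the transient store. The class and method names are illustrative only, not the actual Athena/TileInfo API.

    #include <map>

    // Illustrative stand-in for a conditions-access class in the spirit of TileInfo:
    // constants are loaded once at initialization and then read only through accessors.
    class TileCalibInfo {
    public:
        void initialize() {                  // in Athena this would be filled from the conditions database
            adc2mev_[0]  = 0.05;             // placeholder ADC-to-MeV constant for channel 0
            pedestal_[0] = 50.0;             // placeholder pedestal level (the nominal value 50 is quoted later in the deck)
        }
        double adcToMeV(int channelId) const { return adc2mev_.at(channelId); }
        double pedestal(int channelId) const { return pedestal_.at(channelId); }
    private:
        std::map<int, double> adc2mev_;
        std::map<int, double> pedestal_;
    };

    // A reconstruction step then depends only on the one interface object:
    double calibratedEnergy(int channelId, double fittedAmplitude, const TileCalibInfo& info) {
        return fittedAmplitude * info.adcToMeV(channelId);
    }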
Tile Data Objects and Tile Algorithms

TileDeposits (local energy deposition in scintillator)
   → TileOpticalSimAlg →
TileHit (signal seen by PMT)
   → TileElectronicsSimAlg →
TileDigits (with time structure and noise)
   → TileOptimalFilter →
TileRawChannel (after optimal filtering)
   → TileCellMaker →
TileCell (calibrated cell energy)
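A minimal sketch of this simulation/reconstruction chain, written as free functions over stub data types; the real Athena algorithms and classes have richer interfaces, so everything here is illustrative only.

    #include <vector>

    // Stub data types mirroring the objects named above.
    struct TileDeposit    { double energy; };            // local energy deposition in scintillator
    struct TileHit        { double signal; };            // signal seen by a PMT
    struct TileDigits     { std::vector<double> adc; };  // samples with time structure and noise
    struct TileRawChannel { double amplitude; };         // amplitude after optimal filtering
    struct TileCell       { double energy; };            // calibrated cell energy

    // One function per algorithm box, making the data flow explicit (bodies are trivial stubs).
    std::vector<TileHit> opticalSim(const std::vector<TileDeposit>& d)
        { return std::vector<TileHit>(d.size()); }
    std::vector<TileDigits> electronicsSim(const std::vector<TileHit>& h)
        { return std::vector<TileDigits>(h.size()); }
    std::vector<TileRawChannel> optimalFilter(const std::vector<TileDigits>& d)
        { return std::vector<TileRawChannel>(d.size()); }
    std::vector<TileCell> cellMaker(const std::vector<TileRawChannel>& r)
        { return std::vector<TileCell>(r.size()); }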
F. Merritt
NSF Review
14-Jan-2005
6
Tile Shaping Function

[Plot: the Tile pulse shaping function, normalized amplitude (roughly -0.2 to 1.2) versus time (roughly -50 to 250), with points labeled A and B.]
Example of Optimal Filtering reconstruction of
in-time signal with two pile-up background events
Optimal Filter Algorithm #3

This is a variation of Algo #2, in which the very first step is a 10-parameter fit to all 9 crossing amplitudes as well as the pedestal. In order to do this we need to add a constraint term to the chi-square; the term used is (P0 - PC)^2 / sigma^2, where P0 is the first parameter (the pedestal level), PC is the nominal pedestal level (= 50), and sigma is taken to be about 10 (6 times bigger than the digits noise). This very loose constraint is enough to allow the program to calculate amplitudes for all 9 crossings.

1. Start with a crossing configuration of all Ndig amplitudes plus the pedestal (Ndig + 1 parameters).
2. Carry out a 10-parameter fit to the pedestal plus 9 crossings, with a Gaussian constraint on the pedestal. Go to step 4.
3. Apply the S matrix of this configuration to the digits vector to obtain a vector of fitted amplitudes and the errors on each of these.
4. Find the amplitude with the lowest significance (A/sigma = minimum).
5. If the significance of this amplitude is less than a cut value, drop this amplitude and go to step 3.

The algorithm continues until all spurious amplitudes have been rejected, and the remaining ones all have significance greater than the cut value.
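In symbols, the constrained fit of step 2 minimizes a chi-square of roughly the following form (a sketch based on the description above; the exact per-sample weighting used in the code is not spelled out here):

    \chi^2 = \sum_{k=1}^{N_{\rm dig}} \frac{\left( S_k - P - \sum_{i=1}^{9} A_i \, g(t_k - t_i) \right)^2}{\sigma_{\rm dig}^2}
             + \frac{(P - P_C)^2}{\sigma_P^2},
    \qquad P_C = 50, \quad \sigma_P \approx 10,

where S_k are the digitized samples, g is the pulse shaping function, A_i are the nine crossing amplitudes, and P is the fitted pedestal.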
Table 4: Reconstructed TileRawChannel amplitudes as a function of Npileup for Algo #3

  # pileup    <D>      Drms     σ       % mis-conf
  0           0.02     1.74     1.64    1.16%
  1           0.02     1.87     1.77    1.04%
  2           0.024    2.07     1.98    0.93%
  3           0.026    2.34     2.24    0.78%
  4           0.023    2.79     2.66    0.67%
  5           0.004    3.54     3.34    0.65%
  6          -0.092    4.80     4.36    1.08%

This uses version 1.0 of the filter code. It is the same as Algo #2 except for step 2; here we start with a constrained fit with 10 parameters (pedestal plus 9 crossings). The results are far better than the earlier ones.
Hadron Calibration Strategies for Atlas
from the Ringberg Castle Workshop, July 22-23, 2002

Frank Merritt, University of Chicago
(with Peter Loch, University of Arizona)
September 17, 2002
Lessons from the Ringberg workshop
(from the “other detector” talks)

• H1: LAr/lead and LAr/steel, non-compensating: 50%/√E + 1.6%
• ZEUS: coarser subsystems, but compensating: 35%/√E + 1%
• Extensive test-beam studies are a great advantage, especially in studying response near cracks or other difficult regions of the detector.
• Careful monitoring of the detector is essential. This includes monitoring with sources, studying aging effects (including gas purity), and continual monitoring of energy profiles, track vs. cluster comparisons, etc.
• But this does not determine the overall energy scale (note D0 in particular). It is absolutely essential to base this on clear in-situ physics measurements: e.g. “double-angle” methods at HERA, W decays or Z-jet events in D0.
• Energy-flow corrections can give an enormous improvement in resolution (on the order of 20% in the experiments presenting talks). This depends critically on the detector, and especially on calorimeter granularity.
• Noise-reduction techniques in the calorimeter were important in all experiments.
• Getting the best final resolution takes an enormous effort, and many years.
• There were no great surprises here, but the reviews of the problems that others have faced and solved were stimulating, encouraging, and very useful.
Recent and Ongoing Chicago Projects in ATLAS Calorimetry (2003-5)

• Development of JetRec package (A. Gupta)
  – Development of new jet-finding algorithms for Atlas (a generic cone-finder sketch follows this list)
    • Cone algorithm, kt, seedless cone
    • Associated structures and tools for split-merge, etc.
• Reconstruction Task Force recommendations for changes in Athena structure
  – A series of meetings with calorimeter colleagues to reconsider the design: meetings in Tucson, BNL, Barcelona
  – Common CaloCell objects with the same interface for all calorimeters
  – Significant changes in the Jet structure, with all jet objects inheriting from P4Mom and INavigable (extends the navigation interface to essentially all objects that have energy and position)
• Work in the Atlas JetEtMiss Working Group
  – F. Merritt and A. Gupta became co-conveners of the group (with D. Cavalli, Milano)
  – Organize bi-weekly phone conferences with participation from many Atlas colleagues in the U.S. and Europe
  – Plan Combined Performance sessions for Atlas Software weeks (4 per year)
  – Close contact with BNL, Pisa, many others
• Work on hadron energy calibration and determination of the hadron energy scale
  – Different calibration schemes developed: BNL, Chicago, Pisa
  – Creation of Jet Calibration package (AG) for comparing different calibration approaches
• Extensive development of Atlas analysis capabilities [Atlas-wide]
  – Data Challenge 1
  – Data Challenge 2 (2004-5)
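To make the jet-finding entry concrete, here is a minimal, generic seeded-cone sketch (fixed radius R in eta-phi, highest-Et seeds, no iteration of the cone axis and no split-merge); it is an illustration of the kind of algorithm listed above, not the JetRec implementation.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Tower { double et, eta, phi; };                           // calorimeter tower or cluster input
    struct Jet   { double et, eta, phi; std::vector<int> members; };

    static double deltaPhi(double a, double b) {
        const double kPi = 3.14159265358979323846;
        double d = std::fabs(a - b);
        return d > kPi ? 2.0 * kPi - d : d;
    }

    // Take the highest-Et unused tower as a seed, sum all towers within dR < R of it, repeat.
    std::vector<Jet> simpleConeJets(std::vector<Tower> towers, double R = 0.7, double seedEt = 2.0) {
        std::sort(towers.begin(), towers.end(),
                  [](const Tower& a, const Tower& b) { return a.et > b.et; });
        std::vector<bool> used(towers.size(), false);
        std::vector<Jet> jets;
        for (std::size_t s = 0; s < towers.size(); ++s) {
            if (used[s] || towers[s].et < seedEt) continue;
            Jet j{0.0, towers[s].eta, towers[s].phi, {}};
            for (std::size_t i = 0; i < towers.size(); ++i) {
                if (used[i]) continue;
                double dEta = towers[i].eta - j.eta;
                double dPhi = deltaPhi(towers[i].phi, j.phi);
                if (std::sqrt(dEta * dEta + dPhi * dPhi) < R) {
                    j.et += towers[i].et;
                    j.members.push_back(static_cast<int>(i));
                    used[i] = true;
                }
            }
            jets.push_back(j);
        }
        return jets;
    }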
Hadron Calorimeter Calibration
Three Weighting Schemes Being Studied
• “Pseudo-H1 weighting” [Frank Paige (BNL)]
– Estimates weight for each CaloCell depending on energy density in
cell. Independent of Jet energy.
• Weight by Sampling Layer [Ambreesh Gupta (U.C.)]
– Estimates weight for each sampling layer in the calorimeter
depending on Jet energy (but not on cell energy).
• Pisa weights [C. Roda, I. Vivarelli (Pisa)]
– Estimates weight for each CaloCell depending on both cell energy
and jet energy (and parameterized in terms of Et rather than E).
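All three schemes share the same basic structure, a weighted sum over calorimeter cells (or sampling layers); schematically, and only as a summary of the bullets above:

    E_{\rm jet}^{\rm calib} = \sum_{\rm cells} w \cdot E_{\rm cell},
    \qquad
    w = \begin{cases}
      w(\rho_{\rm cell}) & \text{pseudo-H1: cell energy density only} \\
      w_{\rm layer}(E_{\rm jet}) & \text{sampling layer: one weight per layer, jet energy only} \\
      w(E_{\rm cell},\, E_{\rm T,jet}) & \text{Pisa: both cell and jet energy, parameterized in } E_{\rm T}
    \end{cases}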
Main problem areas
• Calorimetry effects:
– Non-compensation of Atlas calorimeters
– Cracks and dead material
– Boundaries between calorimeters
• Definition of “truth”
– Can apply reco algorithms to MC particle list to obtain MC “jets”.
But is this truth? Clustering is different, propagation is different.
– Can sum all MC particles in cone around reco jet.
• Noise
  – Want to reject cells with no real energy, but also need to avoid bias: rejecting E < 0 cells gives roughly a +300 GeV bias per event, since the noise is symmetric and only its positive fluctuations would be kept.
  – => Use a cluster-finding algorithm to reduce noise.
“Sampling Weights”
(Ambreesh Gupta)
Sampling layers:
  EM Cal → LAr calorimeter
  HAD Cal → Tile + HCAL + FCAL

• No noise added
• Calibration weights derived in four eta regions: 0.0-0.7, 0.7-1.5, 1.5-2.5, 2.5-3.2
• The weights have reasonable behavior in all eta regions.
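The slide does not spell out the fit; sampling weights of this kind are typically obtained by minimizing the deviation of the weighted calorimeter sum from the true jet energy over a calibration sample, separately in each eta region. As a sketch of that ansatz (an assumption about the procedure, not a statement from the slide):

    E_{\rm jet}^{\rm calib} = \sum_{i \in \rm samplings} w_i \, E_i,
    \qquad
    \{ w_i \} = \arg\min_{\{w_i\}} \sum_{\rm jets} \left( \sum_i w_i \, E_i - E_{\rm truth} \right)^{2}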
Scale & Resolution: Sampling Weights

σ/E = (68%/√E) ⊕ 3%
σ/E = (97%/√E) ⊕ 4%
σ/E = (127%/√E) ⊕ 0%
σ/E = (114%/√E) ⊕ 8%
Scale & Resolution: H1 Style Weights

σ/E = (75%/√E) ⊕ 1%
σ/E = (138%/√E) ⊕ 0%
σ/E = (115%/√E) ⊕ 3%
σ/E = (271%/√E) ⊕ 0%

(Different definition of truth, compared to those used in deriving the weights.)
Improving Sampling Weights (A. Gupta)

• Using a sampling weight for each calorimeter layer is not very useful: there are large fluctuations in a single layer.
• But the fractions of energy deposited in the EM and HAD calorimeters carry useful information on how the jet develops.
• To make weights, use the energy-fraction information in the EM and HAD calorimeters.

[Plots: fraction of jet energy in the EM and HAD calorimeters for 25 GeV, 100 GeV, 400 GeV, and 1000 GeV jets.]
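Schematically, and as an illustration rather than the exact parameterization used here, calibrating on the EM/HAD energy fractions amounts to something like:

    f_{\rm EM} = \frac{E_{\rm EM}}{E_{\rm EM} + E_{\rm HAD}},
    \qquad
    E_{\rm jet}^{\rm calib} = w_{\rm EM}(f_{\rm EM}, E)\, E_{\rm EM} + w_{\rm HAD}(f_{\rm EM}, E)\, E_{\rm HAD}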
Ongoing work and plans for the next two months
(in preparation for the Rome Physics Workshop)

1. Pisa weights are in the process of being put into JetRec for comparison to H1 and sampling weighting.
2. Will introduce a top-level calibration-selector tool in JetRec that can be switched through jobOptions.
3. Will carry out comparisons in January with the goal of establishing a benchmark calibration by early February.
4. Produce new DC2 weights by mid-February (already in progress; F.P. and S.P.).
5. Extend the calibration to different cone sizes (R=0.4 and R=0.7).
6. Plan to write a few standard jet selections to ESD (e.g., R=0.7 cone, R=0.4 cone, Kt).
7. Investigate other improvements in jet-finding and jet calibration if time permits:
   • improved definition of truth
   • improved noise-suppression techniques
   • more extensive studies of jet-finding with topological clusters
   • additional parameters in sample weighting
Comparison with jet-finding applied to topological clusters:
Study variations in calibration for
different physics processes (F.P.)
Formation of U.S. Atlas Midwest Physics Group

• Spearheaded and organized by A. Gupta (U.C.) and Jimmy Proudfoot (ANL)
  – Emphasis on physics analysis rather than software development
  – Provides mutual support and a common focus for midwest U.S. institutions
  – Monthly meetings, useful website
• Tutorials on Athena reconstruction (given by Ambreesh)
  – Computing environment, job setup, data access, histograms
  – How to modify the code
  – Jet reconstruction, event analysis, ntuple production
• Physics topics include:
  – Susy (Chicago group)
  – Higgs (Wisconsin)
  – Z+jets (ANL)
  – Top
  – Jet cross-sections
  – Di-boson production
  – Triggering and fast tracker
US Atlas Mid-West Physics Group
(http://hep.uchicago.edu/atlas/usatlasmidwest/)

Web page sections:
• Interested Individuals
• Meetings, Agenda, and Minutes
• Tutorials on Running Athena Reconstruction
• Analysis with Root
• Useful Data Sets
• Identified Analyses
• Links

Page maintained by Ambreesh Gupta (agupta@hep.uchicago.edu) and Jimmy Proudfoot (proudfoot@anl.gov). Last update: 13 December 2004.
Plans for 2005 ….. and Beyond

• High level of current activity:
  – North American Atlas Physics Workshop (Dec 21-22, 2004); 4 Chicago talks:
    • “Jet Calibration and Performance” – F. Merritt
    • “Calorimeter Response to Hadrons from CTB” – M. Hurwitz
    • “Early Commissioning of the Atlas Detector” – J. Pilcher
    • “SUSY Studies in DC2” – A. Farbin
  – Workshop on calorimetry at BNL: Feb 2, 2005
• Development of Chicago-based data processing
  – Further development of grid-based computing tools
  – Can have a significant impact on Chicago physics capabilities
    • Need extensive background studies for many searches
    • Need high-statistics analysis for many calibration studies
  – Potentially very important for the U.S. Atlas role and for grid development
  – Tutorial organized by Amir Farbin for the next Midwest Physics meeting (February 2005)
• Preparations for the Physics Workshop in Rome, June 2005
  – Need to produce/choose the best hadron energy calibration constants by mid-February
• And …..
Calorimetry in the Atlas 2004 Combined Test Beam (M. Hurwitz)

• Data-taking May-October 2004
• Pixel, SCT, TRT, LAr, TileCal, MDT, RPC integrated (not all at once)
• Integrated triggers, e.g. the full calo trigger chain used for the first time
• Mostly beam with no RF structure, except a few runs with a 25 ns bunched beam
• Electron and pion beams contaminated with muons
• Mostly 20-350 GeV, some Very Low Energy runs at 1-9 GeV
First correlation plot
150 GeV pion beam contaminated with electrons and muons

[Correlation plot with the electron, pion, and muon populations labeled.]
Standalone Resolution (1)
Parametrize the resolution: σ/E = a ⊕ b/√E
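Here ⊕ denotes addition in quadrature, i.e.

    \frac{\sigma}{E} = a \oplus \frac{b}{\sqrt{E}} \equiv \sqrt{a^{2} + \frac{b^{2}}{E}},

with a the constant term and b the stochastic (sampling) term.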
Grid Computing Input to NSF Review
Rob Gardner
UC NSF Review, January 2005
Overview of Grid Computing at UC

• US ATLAS Distributed Computing at Chicago
  – Personnel:
    • R. Gardner – L3 project manager for Grid Tools and Services
    • M. Mambelli – lead developer of the DC2 Capone execution service
    • Y. Smirnov – DC2 production team and code testing
    • A. Zahn – UC Tier2 systems administrator
  – Responsible for Grid execution software for ATLAS code
  – Data Challenge 2 (DC2) production software for Grid3
  – User production and distributed analysis
  – U.S. Grid Middleware contact to international ATLAS
• U.S. Physics Grid Projects – Chicago contributions
  – NSF GriPhyN, iVDGL → Grid3, Open Science Grid
  – Coordination of Grid3 and Grid3 metrics collection and analysis
  – Leading the Integration and Validation Activity of the OSG
  – Integration of GriPhyN (Virtual Data) software with ATLAS
  – Prototype Tier2 center for ATLAS DC2 and Grid3, OSG
Chicago Grid Infrastructure

• Prototype Tier2 Linux Cluster
  – NSF iVDGL project funded
  – High performance / high availability
  – 64 compute nodes (dual 3.0 GHz Xeon processors, 2 GB RAM)
  – 3 gatekeepers and 3 interactive analysis systems, all RAID0
  – 4 storage servers provide 16 TB of attached RAID storage
• TeraPort Cluster
  – NSF MRI grant, joint IBM project
  – Integration and interoperability with the TeraGrid, OSG, and LCG
  – 128 nodes with dual 2.2 GHz 64-bit AMD Opteron processors (256 total), with 12 TB of fiber channel RAID, all connected with Gigabit Ethernet
  – Enterprise SUSE 8 with the high-performance GPFS file system
Contributions

• UC made leading contributions to iVDGL/Grid3 and continues to work on its successor, OSG.
ATLAS Global Production System

[Diagram of the DC2 production system: Windmill supervisors communicate via Jabber and SOAP with grid executors (Lexor, Dulcinea, Capone for Grid3, and a legacy LSF executor for US ATLAS) that submit to LCG, NorduGrid, Grid3, and local batch resources; the system is tied together by the production database (prodDB at CERN), the AMI metadata catalog, RLS replica catalogs, and the Don Quijote (“DQ”) data management system.]
UC Tier2 Delivery to ATLAS DC2

• Online May 2004
• Performance comparable to BNL (Tier1) DC2 production

Fraction of completed US ATLAS DC2 jobs (as of 9/04):
  UTA_dpcc 17%, BNL_ATLAS 17%, UC_ATLAS_Tier2 14%, BU_ATLAS_Tier2 13%,
  IU_ATLAS_Tier2 10%, UCSanDiego_PG 5%, CalTech_PG 4%, FNAL_CMS 4%,
  PDSF 4%, UBuffalo_CCR 4%, UM_ATLAS 4%, Others 4%
U.S. ATLAS Grid Production

UC developed the Grid3 production code for US ATLAS.
• 3M Geant4 events of ATLAS, roughly 1/3 of the international total
• Plus digitization, pileup, and reconstruction jobs
• Over 150K jobs executed
• Competitive with the peer European Grid projects LCG and NorduGrid

[Plot: cumulative number of validated DC2 jobs vs. day for Grid3, LCG, NorduGrid, and the total (G. Poulard, ATLAS, 9/21/04).]
Midwest Tier2 Proposal

• Joint proposal with Indiana University to US ATLAS
• Takes advantage of excellent Chicago networking (IWIRE, Starlight), ~10 Gbps
• Leverages resources from nearby projects (e.g. TeraGrid)
References

• US ATLAS Software and Computing: http://www.usatlas.bnl.gov/computing/
• US ATLAS Grid Tools and Services: http://grid.uchicago.edu/gts
• UC Prototype Tier 2: http://grid.uchicago.edu/tier2/
• iVDGL, “The International Virtual Data Grid Laboratory”: http://www.ivdgl.org/
• Grid3, “Application Grid Laboratory for Science”: http://www.ivdgl.org/grid3/
• OSG, Open Science Grid Consortium: http://www.opensciencegrid.org/
From Amir Farbin’s talk at Tucson:

• The Atlas Computing Predicament
• Situation for the past 6 months: you want to try an analysis… you’ll soon discover:
  – Software problems:
    • 9.0.x “reconstruction” release not quite ready
    • ESD/AOD production has been unreliable until very recently
    • No reconstruction output (everyone needs to reconstruct themselves)
    (Lots of important software developments in the past 6 months)
  – Resource problems:
    • Large pool of batch machines:
      – CERN: overloaded… takes days until jobs start
      – BNL: has only 22 batch machines
    • Resources busy with DC2 production and other users
    • No place to run your jobs!
  – Possible reasons:
    • Timing issues:
      – Hardware purchasing ramp-up?
      – Tier 2 deployment?
    • Conflict with other important priorities:
      – DC2 is a GRID exercise. It will soon be replaced by “Rome Production”.
      – Tier 0 reconstruction is a computing exercise. It will mostly produce mixed events (not very useful for studies). Only 10% of DC2 will eventually be reconstructed.
No large samples of reconstructed events were available for analysis studies.

“How about the GRID3?” ---Rob Gardner

• Up to 3000 processors available NOW in the US
• ATLAS is involved in DC2 “production” work (run by experts)
  – Individual users are not explicitly supported
  – Distributed analysis tools are not yet implemented on the GRID
  – Existing tools have specific (and limited) functionality (i.e. production)
  – No concept of individual users…
  – Difficult to learn how the pieces fit together
• But with help from Rob Gardner and his group (Marco Mambelli & Yuri Smirnov) I was able to “hack” a working solution called UserJobManager.
UserJobManager

• A collection of simple scripts which:
  – Install user transforms on GRID3 sites
    • Everything needs to be “pre-installed” on a site before job submission.
  – Handle book-keeping of input/output
    • 100,000’s of input/output files.
  – Submit/resubmit jobs… deciding:
    • what samples to run on
    • what sites have been reliable
    • what failed jobs are likely to succeed if resubmitted
• In DC2 these tasks are handled by a production system (database, servers, clients, etc.), production staff, and shifters.
• On a good GRID day (and there are many bad ones), I get 1000 reconstruction (ESD/AOD/CBNT) jobs done (100K events/day).
• If interested (and adventurous), see: http://hep1.uchicago.edu/atlas07/atlas/UserJobManager/instructions.txt
• This is a “hack”… if everyone starts using these tools, the GRID will break.
• A 0th step towards a bottom-up approach to ATLAS user GRID computing.
Datasets

Samples processed (release 8.8.1 and/or 9.0.2):
  J1-J8: dijet samples in p_t bins 17-35, 35-70, 70-140, 140-280, 280-560, 560-1120, 1120-2240, and >2240 GeV
  W -> tau nu, W -> e nu, W -> mu nu
  Z -> ee, Z -> mumu, Z -> tautau
  QCD b-jet, top
  DC2 Susy, DC1 Susy
  Jet + gamma
Totals: 360200 events in 8.8.1, 141600 in 9.0.2.

• Processed 400K events in 8.8.1 (ESD/CBNT) and/or 9.0.2 (ESD/CBNT/AOD)
• Files are sitting at UC, BU, IU, and BNL. Registered in RLS (query example: “dc2test*A0*reco*aod.pool.root*”). Need a GRID certificate to access with gsiftp or DQ.
• CBNT ntuples available through http://hep1.uchicago.edu/atlas11/atlas/datasamples

• Main problem now is that the most interesting digitized datasets are in Europe.
• Problems with gsiftp servers and castor make transfers from Europe difficult.
• Yuri is trying a new (expanded) version of DQ which will make transfers easier.
• Coordinating with people at BNL… they will begin copying files soon.
• Meanwhile I can copy ~1000 files/day using scripts which prestage data from castor and scp it to UC. Problems with UC’s Tier2 have stalled transfers in the past week.
Missing ET

[Missing-ET distributions (GeV) for dijet, W, Z(ll), Z(ττ), QCD (b-jet), SUSY DC2, SUSY DC1, and top samples.]
Summary

• DC2 + reconstruction on GRID3 allowed us to begin examining backgrounds to SUSY in full simulation… (1st time?)
• Iowa State has developed an AOD analysis… (recently added MC weights for top events)
• UC & Iowa will collaborate…
• Understanding the SUSY backgrounds will be difficult
• Next steps:
  – Explore techniques for estimating backgrounds from data.
  – Look into clever filtering of MC.
  – Explore other topological variables.
  – Explore signal-extraction strategies: optimized cuts? ML fit? MV analysis?
  – Try smearing… How well do we need to understand our detector before we can claim discovery?
Plans for 2005 … and Beyond

• ………
• There are still many, many things left to do before first collisions in 2007 (!)
• Further development of hadron energy calibration
  – Improve noise suppression using clustering algorithms
  – Extend and combine fitting approaches
  – Implement an H1-based parameterization
• Improve and test the hadronic calibration using various methods and benchmarks:
  – Gamma + jet
  – Z + jet
  – Dijet energy balancing
  – Isolated charged hadrons
• Study the sensitivity of the calibration to the physics process
• Many important tasks involved in commissioning studies with Tile at CERN
  – Complete checkout of the Tile calorimeter
  – Devise high-statistics monitoring and validation procedures for jet calibration and monitoring
  – Write and test online and offline monitoring software for Tile
  – Will need a significant presence at CERN for parts of this program
• Need to maintain and increase strong involvement in SUSY searches
• …. and a great many other physics topics still remain!!