IncPhoCDSOpen - Elementary Particle Physics Group

• Deadline for comments was Monday 4-3-2013
• Received comments from 8 individuals/groups
• Received no substantial comments
• Comments were analysis questions and comments on the text and figures
• Answers to all comments have been discussed
with the Ed board
Helen Hayward - Open presentation ATLASSTDM-2012-16-001
• In general, reviewers were happy with the paper (although there were some minor points of confusion, which have been addressed)
– “Very nice. Its great to see this analysis updated. ”
– “thanks for this interesting paper on the full 2011
statistics.”
– “A nice result, and one I am happy to see published. ”
– “This draft is in good shape and our comments are
minor.”
– “we find the paper well written and the analysis complete.”
Luminosity reference
• L 34: refs. [11] and [12] are incorrect. ATLAS-CONF-2011-011 concerns only the 2010 luminosity determination. I suggest you delete the text in [11] and replace it with the proper reference, which can be found at http://inspirehep.net/record/1219960 . Then I see no need for ref. [12] any more.
This was corrected, thanks.
Physics comments - terminology
• Line 353: I guess you are free to define whatever terminology you wish, but "combined cross section efficiency" seems an odd term to me. Why not just "combined efficiency"? (plus other similar/related comments)
• We now use the term “correction factors” and define the cross section in terms of the correction factors. The correction factor is equivalent to the efficiency (assuming the purity = 1), but also combines all other efficiency effects.
• The text of Section 7 has been significantly improved and the figures updated.
• Figure 3 has been removed from the conf note to avoid confusion.
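As a minimal sketch of the correction-factor approach described above (all numbers are illustrative, not taken from the analysis), the differential cross section in an ET bin can be written as N_obs × purity / (C × L_int × ΔET), where the bin-wise correction factor C plays the role of the efficiency when the purity is 1:

```python
# Sketch: differential cross section from a bin-wise correction factor.
# All yields, luminosity and C values are toy numbers.

def diff_cross_section(n_obs, purity, corr_factor, lumi, bin_width):
    """dsigma/dET = N_obs * purity / (C * L_int * dET).

    The correction factor C is equivalent to an efficiency when
    purity = 1, but also absorbs all other reconstruction-level effects.
    """
    n_sig = n_obs * purity  # background-subtracted yield
    return n_sig / (corr_factor * lumi * bin_width)

# Hypothetical bin 600-700 GeV; 4600 pb^-1 integrated luminosity
xs = diff_cross_section(n_obs=120, purity=0.96, corr_factor=0.75,
                        lumi=4600.0, bin_width=100.0)
print(xs)  # dsigma/dET in pb/GeV
```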
Physics: Egamma Recommendations
•
At least for what concerns the PID efficiency, it's not clear from the text whether you
use the latest EGamma recommendations for 2001 cut-based selection
• We have applied all the standard Egamma tools. The photon ID efficiency is listed in columns 12 and 13 of Table 3 of the supporting note. In addition to these tools, we also use systematic variations which should address possible uncertainties in the high-pT region between 500 GeV and 1 TeV, which may not be well covered by the standard Egamma tools. As an example, we apply an explicit cut variation on the isolation energy (500 MeV) and use HERWIG systematics for the correction factors.
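A cut-variation systematic of the kind mentioned above can be sketched as follows. The 500 MeV shift is the only value taken from the text; the isolation energies and resulting yields are toy numbers:

```python
# Sketch: systematic from shifting the isolation-energy cut by +-500 MeV.
# Toy event sample; the resulting relative shifts are illustrative only.

def yield_with_cut(iso_energies, cut_gev):
    """Count candidates passing ET_iso < cut."""
    return sum(1 for e in iso_energies if e < cut_gev)

iso = [1.2, 3.4, 6.8, 6.6, 2.0, 7.3, 5.1, 6.9, 0.4, 8.2]  # GeV, toy values

nominal = yield_with_cut(iso, 7.0)
up      = yield_with_cut(iso, 7.5)   # cut shifted up by 500 MeV
down    = yield_with_cut(iso, 6.5)   # cut shifted down by 500 MeV

# Relative yield change taken as the systematic variation
sys_up   = (up - nominal) / nominal
sys_down = (down - nominal) / nominal
print(nominal, sys_up, sys_down)
```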
Physics: Egamma Recommendations
• We provide data-driven scale factors to be applied to the fudge-factor-corrected MC efficiency, which would allow one, at least for the PID part, to profit from very small systematics and to get rid of all MC-based uncertainties. Do you use them? In the supporting note text, you speak about "the transverse energy scale is evaluated by varying the resolution corrections to the simulation within their uncertainties", which does not really seem to reflect what we recommend for the PID efficiency. Unless I misunderstood the procedure (it's well possible!), some of the errors on the efficiency might well be overestimated.
• The systematic uncertainties are likely overestimated, since we wanted to be on the safe side when doing the cross section measurements in the region above 500 GeV, where there is no proper validation of the standard tools. This region, however, is dominated by statistical uncertainties.
Physics : Egamma
• What I find really huge here are the error bars. Reading the supporting note (e.g. Fig. 18), it seems you can have up to a 10% effect on the global efficiency from the energy scale variation. Is this really the case? It looks enormous to me, but I might be wrong. The Pythia/Herwig difference is also rather large (up to 5%). Could you explain?
We have applied the energy rescaler to the MC following the recommendation posted on the twiki: https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/EnergyRescaler. We do not change the data, so the rescaling contributes to the correction factor for the cross section measurement (if we applied the tool to data and MC at the same time, there would be no effect from the energy rescaler tool). The input to the energy rescaler is in GeV, as recommended. The effect of the energy scale is shown in Table 3 of the supporting note. The effect is somewhat asymmetric, about -1% to +3% at ET(gamma) ~ 100 GeV, and increases to -3% to +4.6% at ET(gamma) ~ 750 GeV (where we have enough statistics). Typically, effects from such “energy” scales on the final cross sections increase with ET (while the relative energy scale uncertainty may decrease with ET).
The photon energy scale uncertainty quoted in the previous paper (35 pb-1, Phys. Lett. B706 (2011) 150, page 3) is from 2% to 8%, and is Eta dependent.
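The mechanism behind these shifts can be sketched with a toy example (all spectra and thresholds are invented): because the ET spectrum falls steeply, even a small scale shift applied to the MC alone migrates events across a bin edge, and the up/down variations are asymmetric in general.

```python
# Sketch: effect of an energy-scale shift applied to MC only (data unchanged).
# A falling spectrum makes migration across the threshold asymmetric.
# All numbers are toy values.

def yield_above(et_values, threshold_gev, scale=1.0):
    """Count entries whose scaled ET exceeds the threshold."""
    return sum(1 for et in et_values if et * scale > threshold_gev)

# Toy steeply falling spectrum around a 100 GeV bin edge
mc_et = [95, 97, 99, 99.5, 100.5, 101, 103, 110, 130, 200]

nominal = yield_above(mc_et, 100.0)
up      = yield_above(mc_et, 100.0, scale=1.01)  # +1% scale shift
down    = yield_above(mc_et, 100.0, scale=0.99)  # -1% scale shift
print(nominal, up, down)  # up/down migrations need not be symmetric
```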
Physics - Isolation
• Line 221, "The pileup correction varies and is computed as a function of the average number of collision per events": what does this mean? I thought that you were using the ambient-energy correction, which does not use that? To correct the residual pileup effect when using the non-noise-suppressed cells, one would have to apply ad-hoc corrections as a function of Nvertex or BCID, but this would be quite complicated, and if you really want to get rid of pileup effects completely it would seem much better to use the topo-based isolation definition.
Usually, when we talk about a “residual” pileup effect beyond the “ambient” correction on ET(iso), we discuss this in the context of Jetphox predictions. This was done in the previous gamma+jet paper. This is discussed in Section 9, putting more emphasis on the “residual” pileup effect, which was studied; our final answer is a 2% effect.
• The analysis was developed at a time when there was no “topocluster” isolation method. Recently, we have added the systematics due to the topocluster isolation (see the supporting note, Table 3) and the effect was found to be very small. We have added the topocluster isolation as an additional systematic.
Physics
• Is the Herwig/Pythia difference coming from a shape difference in dN/dEt? From Fig. 4 both generators look to have similar shapes. If it does not come from this, where does it come from? A difference in isolation reco/true? The difference in particle-level/parton isolation is not included there, I guess, as it is included in the +-2% effect quoted at line 535. Is this a difference in fragmentation / direct? But then it should probably be Et dependent.
HERWIG contributes to every aspect of the measurement: 1) different modelling of the isolation energy (the cut on isolation is 7 GeV) at the truth MC level (different particle ID menu in HERWIG and different hadronisation); the 7 GeV cut defines the cross section, and modelling of the isolation energy is important; 2) an effect on the purity in the correlation method (again, this can be traced back to a difference in the soft-QCD modelling); 3) a difference at truth level, which can affect the overall unfolding.
Physics
• Lines 578-582: Since Pythia + LO* pdf is an "arbitrary" LO normalization, I am not sure one should insist on the fact that the Pythia normalization agrees with data; as far as I understand, this is pure luck. For the same reason, I would not use Fig. 7 to conclude that the data favour a fragmentation contribution (or we should make this argument based only on the shape). If one really wants to measure a fragmentation contribution with data, one should probably measure the cross-section as a function of isolation, which is of course another story.
We agree that the total cross sections in the LO ME have a large scale uncertainty (much larger than for NLO), and the agreement can be “pure luck”. However, Figure 7 shows that the soft QCD changes the shape of the pT distribution, and we usually trust the shapes of the LO MC (typically, we only apply k-factors to the LO MC, without changing the shapes). We like Fig. 7, but we did change the description of the figure to emphasize the shape (rather than the normalisation change).
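The usual k-factor practice mentioned above can be sketched in a few lines (all spectra and totals are toy numbers): a single global k-factor rescales the LO prediction while leaving its shape, i.e. the bin-to-bin ratios, unchanged.

```python
# Sketch: applying a global k-factor to a LO spectrum.
# The normalisation changes; the shape (bin-to-bin ratios) does not.
# All numbers are toy values.

lo_spectrum   = [100.0, 40.0, 12.0, 3.0]   # toy LO dsigma/dpT per bin
sigma_lo_tot  = sum(lo_spectrum)
sigma_nlo_tot = 217.0                      # toy NLO total cross section

k_factor = sigma_nlo_tot / sigma_lo_tot
scaled = [k_factor * x for x in lo_spectrum]

# Shape check: normalised bin ratios are preserved under the rescaling
ratios_before = [x / lo_spectrum[0] for x in lo_spectrum]
ratios_after  = [x / scaled[0] for x in scaled]
shape_preserved = all(abs(a - b) < 1e-12
                      for a, b in zip(ratios_before, ratios_after))
print(k_factor, shape_preserved)
```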
Textual – Section 8, systematics
• At this point I have a general comment on Section VIII as a whole. In
some places (e.g. line 416) you write that the variation ranges from (in this
example) 0.5% to 5%, but you do not say as a function of what: one
assumes that you are referring to bins in the differential cross section, but
this is not clearly stated. In other places (e.g. line 423) you refer to
uncertainties that are, one assumes, the same for all bins in the
differential cross-section. But in some cases (line 483) you make a
statement about whether this is correlated between bins, and in other
cases you do not (from which I suppose the reader should conclude that
there is no correlation). In some cases (line 451) you state that the quoted
uncertainty is on the cross-section; in other cases (line 423) you do not
(this might just be a stylistic choice, since that is the only thing you are
measuring).
• Section 8 has been restructured to distinguish between systematic errors on the signal purity and on the correction factor. It has been made clear that the variation in the systematic error is a function of pT.
Physics - Systematics
• I don't quite understand the first systematic: the data/MC difference in isolation is addressed by the second one. What is the purpose of changing the isolation definition by +-1 GeV in both data and MC? Why do you use +-1 GeV instead of another value?
Since this is a cross section measurement, we added this uncertainty as for the previous publication. It addresses a systematic shift between the true isolation (at the particle level) and the reconstructed-level isolation.
• Related to the background systematics:
Reverting the layer-1 cuts selects a background component which is not "pure" "single" pi0. Of course this would still be OK as long as the isolation distribution for this background is the same as for the background component containing a higher fraction of "single" pi0. Did you check from MC simulation the background composition (in terms of pi0 vs multi-pi0, etc.) for events in the different A, B, C, D regions? In principle the systematics from that should be included in the R correlation systematics, but it would be interesting to quantify these points a little, as we are in an energy regime completely different from the ~50 GeV regime where some of these methods were studied in detail.
This is, in fact, a very interesting question to ask using the JF500 MC simulation.
We rely on the side-band subtraction method using data. Looking at the region pT > 600 GeV, where we have roughly a 10% statistical uncertainty and a ~12% systematic uncertainty, it is clear that any mismodelling of the double-pi0 rate cannot have too large an effect compared to the quoted uncertainties. (Double-pi0 can affect the purity, which for pT > 600 GeV is 96% and is only one component of the correction factor for the cross section.)
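The side-band (two-dimensional ABCD) subtraction referred to here can be sketched with toy yields. Region A is the signal region; the correlation factor R for the background is taken to be 1 (all numbers below are invented for illustration):

```python
# Sketch: ABCD side-band background estimate and signal purity.
# N_A..N_D are toy yields in the four regions defined by the
# identification and isolation cuts; r_corr = 1 assumes the two
# variables are uncorrelated for the background.

def abcd_purity(n_a, n_b, n_c, n_d, r_corr=1.0):
    """Purity in region A with background estimated as R * N_B * N_C / N_D."""
    n_bkg = r_corr * n_b * n_c / n_d   # predicted background in A
    return (n_a - n_bkg) / n_a

purity = abcd_purity(n_a=1000, n_b=80, n_c=100, n_d=200)
print(purity)  # -> 0.96
```

A mismodelled double-pi0 fraction would enter through r_corr, which is why the R-correlation systematic is the natural place for it.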
Physics - Systematics
• As far as I understand, in the "high" Et region the scale factor mostly comes from the "matrix" method, which compares the rate of tight/loose photons after background subtraction. If this matrix method is done with the same sample as your measurement, I think that using this scale factor is effectively equivalent to measuring the cross-section with loose photons. So if you measure the cross-section with tight photons in the same sample, there is a non-trivial correlation in the uncertainties (and measuring the cross-section with loose is better than measuring the scale factor from tight/loose times the cross-section from tight).
The paper uses 3 different “matrix” methods, which should address any correlation (or no-correlation) case. The systematic uncertainty on the final measurements is at the percent level.
Physics - Systematics
• About the systematics on the photon identification efficiency: it would be good to elaborate a little more in the paper on the way to check these systematics in the high-Et range (for instance, the matrix method checks tight/loose, but then one needs something else to check the loose efficiency).
We have improved the part related to the side-bands method. We explained that we use several modifications of the matrix method. The “problem” with the high-ET region is that the purity gets so close to 1 (~96%), while the data statistical uncertainties increase (~10% at 600 GeV), that any change in the matrix method generates a negligible effect on the measurement. (We use data to subtract the background; the efficiency is part of the unfolding procedure after the subtraction.)
In the Systematics section, we also elaborate more on the Et dependence of the systematic checks.
Physics - Systematics
• Is there in the systematics any effect related to the fact that the background-subtraction region could have a small contamination from fragmentation, leading to a small over-subtraction of the background? I guess this can be checked by closure tests using the signal MC.
We do expect a contribution from fragmentation to the background subtraction. However, the side-bands method is largely based on the data itself, and the use of several background-subtraction techniques should address such uncertainties. The effect should in any case be considered from the point of view of its potential impact on the measurement itself: at large pT, the purity is so high (>93%) that any variation due to a different subtraction method is still a small effect compared to the statistical uncertainties, which are above 10% for pT > 600 GeV.
Textual/physics
Line 569-570: Looking at the plots, I'm not sure I agree with the statement made here. Given that the central values of the NLO calculation with MSTW2009NLO PDFs lie (as stated) above the central values obtained using CT10 PDFs, the agreement with data is worse (rather than better, as is stated) anywhere the data lie below the theoretical predictions, which is the case for the full range of ET > 400 GeV in the barrel region (Figure 4).
We changed the sentence to refer to the “low ET(gamma) region”.
Line 595: The HERWIG cross-section quoted here must be wrong…
(plus similar comments)
There was a typo in the HERWIG total cross section. The correct value is 187 pb. The paper was corrected.
Line 467: what is meant by "varying the photon identification in the simulation by its uncertainty"?
We rephrased with a reference:
The uncertainty on the cross section due to insufficient knowledge of the photon reconstruction efficiency was estimated using different techniques to calculate the photon identification. This was done using the photon identification tool~\cite{ATLAS-CONF-2012-123}; this affects the signal purity and the correction factor.
Textual/physics
• About the photon/pi0 separation (related to some wording around line 83 and also to some systematics on the background): at the very high energies used here (a few hundred GeV, up to a TeV), the two photons from the pi0 are more collimated than the layer-1 granularity (at 1 TeV pT, the minimal separation between the two photons is 10 times smaller than the eta layer-1 cell size in the barrel), so there is no longer any single-pi0 vs single-photon discrimination. The layer-1 granularity is of course still very useful to reject jets, which are often made of close-by particles rather than a single pi0 with nothing else around. So the wording should probably be adapted.
The text has been modified to be consistent with this statement.
The wording has been changed from “sufficient” to “can be”, because at high pT it is not necessarily sufficient any more.
figures
• Fig. 1: maybe the x-axis should be expanded to go down to ~-10 GeV to see ~all the data.
We have extended the x-axis to -10 GeV as recommended.
• Fig. 15: What do you mean by the 4% correction factor? Is it part of understanding the ~10% shift between the 2010 and 2011 data in the overlapping Et bins?
The 4% is the ratio NLO(E(iso) < 7 GeV)/NLO(R(iso) < 3 GeV). However, the expected correction is larger due to a hadronisation effect. We are waiting for the results from Rivet (MC) to check this. Therefore, at this moment, we do not apply this scaling.
figures
Theory/data plots: What do the data represent? They are always 1, so I assume it's data/data, but then what are the error bars, since the errors for the numerator and denominator are 100% correlated? There should just be a line if you want to guide the reader.
This is a new convention of the SM group for figures with several theory predictions.
The meaningful part of the ratio is not the data symbols (== 1), but the error bars (error/data), which are needed for comparison with the NLO band.
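The convention can be sketched numerically (toy numbers): in the ratio panel the data points sit at unity by construction, and the relative data uncertainty sigma/data is drawn as their error bar so that the theory/data curves and the NLO band can be compared against it.

```python
# Sketch: theory/data ratio-panel convention.
# Data/data is identically 1; the relative data error sigma/data is the
# error bar to compare with the theory band. All numbers are toy values.

data     = [50.0, 20.0, 5.0]   # measured dsigma/dET per bin
data_err = [5.0, 3.0, 1.5]     # total data uncertainty per bin
theory   = [48.0, 22.0, 4.0]   # NLO prediction per bin

ratio_data     = [d / d for d in data]                  # all exactly 1
ratio_data_err = [e / d for e, d in zip(data_err, data)]  # relative errors
ratio_theory   = [t / d for t, d in zip(theory, data)]
print(ratio_data, ratio_data_err, ratio_theory)
```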
Textual comments
• All corrected
Summary
• The Editorial Board recommends proceeding with the preparation of the final draft and sending it to the management for sign-off asap