Reconstruction of Cascade-Like Events in IceCube

Diploma thesis (Diplomarbeit)
submitted for the academic degree of
Diplom-Physiker (Dipl.-Phys.)
at the
Humboldt-Universität zu Berlin
Mathematisch-Naturwissenschaftliche Fakultät I
by
Eike Middell
born 5 April 1982 in Berlin
1st referee: Dr. Marek Kowalski
2nd referee: Prof. Dr. Hermann Kolanoski
Berlin, 28 July 2008
Contents

1 Motivation

2 Basic Principles of Neutrino Detection in Ice
  2.1 Neutrinos and Weak Interactions
  2.2 Deep Inelastic Neutrino-Nucleon-Scattering
  2.3 Physics of Cascades
    2.3.1 Čerenkov radiation
    2.3.2 Electromagnetic Cascades
    2.3.3 Hadronic Cascades

3 The IceCube Detector
  3.1 The Basic Concept of Photomultipliers
  3.2 The Layout of the IceCube Detector
  3.3 Event Topologies
  3.4 Light Propagation in Glacial Ice
    3.4.1 Scattering and Absorption
    3.4.2 Ice Model
    3.4.3 Photonics
  3.5 Cascades from IceCube's Point of View

4 Reconstruction of Cascade-Like Events
  4.1 Prerequisites
    4.1.1 What to Reconstruct?
    4.1.2 Coordinate System
    4.1.3 Resolution and Bias
    4.1.4 Reconstruction Chain
    4.1.5 Dataset
  4.2 Charge Reconstruction and Feature Extraction
  4.3 First Guess Algorithms
    4.3.1 Vertex and Time – CFirst
    4.3.2 Direction – Tensor of Inertia
    4.3.3 Energy
    4.3.4 Combining Seeds
  4.4 Maximum Likelihood Reconstruction for Cascades
    4.4.1 General Notes On Maximum Likelihood Estimation
    4.4.2 The Reference Implementation – CscdLLh

5 An Improved Likelihood Reconstruction for Cascades
  5.1 Mathematical Form
    5.1.1 A Likelihood for Waveforms
    5.1.2 A Likelihood for Pulses
  5.2 Toy Model Studies
  5.3 Gulliver
    5.3.1 I3WFLogLikelihood
    5.3.2 I3PhotorecLogLikelihood
    5.3.3 Assembling the Reconstruction for Cascades
  5.4 Systematic Energy Underestimation
  5.5 Implementing the Corrections in the Reconstruction
  5.6 Cascade Direction
    5.6.1 Minimization Strategies
    5.6.2 Quality Cuts
  5.7 High Energy Modification of the Likelihood

6 Summary and Outlook

A Software Versions and Module Configurations

B Selected Error Distributions
1 Motivation
Among the stable particles, the neutrino has unique properties, which make it an
interesting probe for the universe. Since it is only subject to weak interactions,
it escapes even dense astrophysical sources and can travel cosmological distances
without being absorbed or deflected. Therefore, neutrinos may give us the chance
to study highly energetic phenomena over distances beyond the limitations of gamma-ray
astronomy [1]. Above energies of about 100 TeV, the mean free path of photons
decreases because of interactions with the cosmic microwave background [2]. At these
energies, observation with photons is limited to our galaxy, whereas the detection
of neutrinos would allow us to observe more distant objects as well.
The neutrino shares an essential property with the photon: since it is neutral, it
points back to its source. This is a considerable advantage over charged particles like
protons, which are repeatedly deflected in the galactic magnetic fields, thereby
obscuring their origin.
The observation of neutrinos may provide an answer to the still open question of
the origin of cosmic rays. These are charged particles, mostly protons, that constantly
hit the atmosphere. The highest cosmic-ray energies measured so far reach up to
10²⁰ eV. The so-called bottom-up scenario assumes that protons are accelerated to
these high energies in cosmic accelerators like supernova remnants [3] or the vicinity
of black holes in the centers of active galaxies [4]. In interactions with the ambient
protons and photons in the accelerator's environment, the accelerated protons produce
charged pions. Since these decay into neutrinos, proton accelerators are also likely
emitters of highly energetic neutrinos. So an observation of high-energy neutrinos can
help to identify sources of cosmic rays [5].
Unfortunately, neutrinos are difficult to detect and require large detection volumes.
Currently the IceCube neutrino telescope is being built at the South Pole.
It is designed to provide an unmatched potential for the detection of high-energy
astrophysical neutrinos [6]. Among the presently operating water-Čerenkov detectors,
IceCube will have the largest detection volume. Significant improvements in the data
acquisition will allow detailed measurements of the interaction of neutrinos with the
Antarctic ice.
While the detector already records data [7], the analysis tools for the different
detection channels are still under development and hence differ in their maturity.
Especially for the analysis of neutrino-induced cascades, the problem arises that the
current reconstruction software does not interpret all the information that the
improved detector provides. For example, the now available detailed timing information
of the registered Čerenkov photons is not completely exploited. As long as improved
reconstruction tools are missing, the detector is limited to measuring only the
location of the cascade and the deposited energy. The direction of the incident
neutrino, and hence its origin, cannot yet be determined from cascade-like events.
This thesis presents an improved reconstruction tool for cascades. It will show
that a reconstruction of the cascade direction relies on three conditions: 1) the
total recorded timing information must be incorporated into the analysis, 2) the
optical properties of the Antarctic ice have to be considered, and 3) the information
on the cascade direction has to be retrieved in a computationally intensive maximum
likelihood estimation.
In this thesis, a cascade reconstruction algorithm has been developed and tested
on a Monte Carlo sample of electron neutrino events. The assumptions on light
propagation in the Antarctic ice are based on the results of Photonics, which is a
detailed simulation of photon propagation in the detector [8].
The thesis is structured as follows: at the beginning, the basic principles of
neutrino detection in ice will be presented. Afterwards, the components of the IceCube
detector will be described, and their function and potential for cascade reconstruction
will be pointed out. The results of the Photonics simulation will be visualized
to give an impression of the assumed characteristics of cascades in ice. The fourth
chapter focuses on the available tools for cascade reconstruction. The method of
maximum likelihood will be introduced, which is the foundation of the improved
reconstruction tool. Prior to its implementation in the IceCube software framework,
the characteristics of the method have been tested in a simplified model. This will
be introduced in chapter five. Then the method is tested on a full-featured simulation.
The role of minimization strategies is discussed, since it was found that they are
a crucial element in the reconstruction of the cascade direction. Finally, an extension
of the algorithm that allows for the inclusion of systematic errors is presented.
2 Basic Principles of Neutrino Detection in Ice
Since neutrinos are only subject to weak interactions, their detection is challenging
and quite different from the detection of other particles. For example, a charged particle
in a wire chamber will have numerous electromagnetic interactions along its track
and is therefore visible to the physicist. The neutrino's probability for even one weak
interaction in the same detection volume is much smaller. In accelerator experiments,
neutrinos that were created in the experiment itself are indirectly noticeable
when the momenta and energies of all detected particles do not sum up to fulfill
energy and momentum conservation. It is their absence in the detector response that
makes them indirectly visible.
In contrast, neutrino telescopes need a direct way to detect passing neutrinos at
reasonable counting rates. To derive any astronomical conclusions, they should also
be able to tell where the neutrino comes from and what energy it has. For these
purposes one utilizes the fact that an interacting neutrino will produce charged
secondary particles. Provided that the detector medium has appropriate optical
properties and that the created particles are fast enough, they emit Čerenkov light.
This can be detected with light sensors such as photomultipliers. By measuring the
intensity and arrival time of this light at different points in the detector, one can
draw conclusions on the secondary particle and in turn on the incident neutrino.
To face the low interaction probability, neutrino telescopes must provide an adequately
dimensioned target to the passing neutrino. This explains the need for large
instrumented volumes and leads to the idea of instrumenting caverns, lakes, oceans
or glacial ice. For example, the Homestake experiment consisted of 615 tons of
tetrachloroethylene (C₂Cl₄)¹, Kamiokande had a geometric volume of 3000 tons of water and
was upgraded to Super-Kamiokande with a 50000 ton water volume (values from [9]).
For higher energies this is not enough. The Baikal Neutrino Telescope NT-200 therefore
encloses 10⁵ tons of water, but because of the beneficial optical properties that
the lake provides, the effective detection volume for extremely high energies
of 1 EeV reaches about 10 Mtons [10]. New to this list, and the largest of all neutrino
detectors, is IceCube. In its planned 80-string configuration it will have a geometric
volume of 1 km³.

¹ Unlike the other presented experiments, the Homestake experiment used a radiochemical
detection principle.
Naturally, such instruments are also sensitive to charged particles that are not
caused by a neutrino interaction. Especially muons created in interactions
of cosmic rays in the atmosphere trigger these detectors at rates that are several
orders of magnitude higher than the expected counting rates of neutrino events. The
detectors are therefore placed deep in rock or underwater to shield the instrument
from these particles. In addition, highly efficient filter procedures must separate
signal events from the remaining background. Commonly the fact is utilized that
only neutrinos can propagate through the Earth and hence arrive from below. Thus
the field of view of a neutrino telescope is the opposite hemisphere relative to the
detector location².

In the rest of this chapter the underlying physics processes will be discussed in
more detail.
2.1 Neutrinos and Weak Interactions
The Standard Model of particle physics (SM) describes the elementary constituents
of matter as well as the three elementary particle interactions. These are the electromagnetic, the weak and the strong interactions. Gravitation still resists an inclusion
into a unifying theory. The main building blocks are the matter-forming fermions
that are accompanied by bosons, which act as force carriers between them. This
scheme is summarized in the following table.
  generation         I       II      III
  quarks             u       c       t
                     d       s       b
  leptons            e       µ       τ
                     νe      νµ      ντ
  (the fermions)

  force carriers:    γ   g   W±   Z0
  (the bosons)
The group of fermions is made of six quarks ([u, d], [c, s], [t, b]) and six leptons
([e, νe], [µ, νµ], [τ, ντ]), which appear in three generations. Thus, the neutrino is a
lepton that occurs in three flavors as electron-, muon- and tau-neutrino (νe, νµ, ντ).
Each is accompanied by a charged partner lepton, namely the electron e⁻, muon µ⁻, and tau
τ⁻. The electromagnetic force is carried by the photon γ, whereas the Z⁰ and W±
bosons mediate the weak interactions. Together with the gluons g that mediate
the strong interaction, they form the group of force-mediating bosons. Each of
these particles has a corresponding antiparticle, although the photon and the Z⁰ are
their own antiparticles. Whenever appropriate, the distinction between particle and
antiparticle is avoided in this thesis.
² It should be noted that for very high energies a reconstruction of the energy allows a
separation from the lower-energy background. Therefore, observations above the horizon
become feasible.
Unlike its partner lepton, a neutrino has no electric charge and will not participate
in electromagnetic interactions. It also does not carry a strong charge, which rules out
strong processes. So only the weak force and gravitation remain as means
to interact with the environment. The latter is too weak to be of significance
in experiments such as the one under consideration.
Another question of current interest concerns the mass of the neutrino.
In the SM neutrinos are treated as massless. But observations of neutrinos released in
cosmic-ray-induced air showers (atmospheric neutrinos) as well as neutrinos generated
in fusion processes in the sun (solar neutrinos) disprove this assumption. For example,
measurements of solar electron neutrinos resulted in lower observed rates than one
would expect from the light output of the sun. A solution to the problem was found
in neutrino oscillations. Neutrinos are produced and detected in weak interactions
as weak eigenstates. But their propagation from the point of creation to the point
of detection is determined by their mass eigenstates. If the lepton number is not
conserved, these states may form mixed states by superposition. If the neutrinos
have different masses, the distinct mass eigenstates evolve with different phases,
so the fraction of a given neutrino flavor in the mixed state varies in time. Therefore
the flavor ratio observed at the detector differs from the one at the source.
The electromagnetic and weak forces are unified in the Glashow-Weinberg-Salam
(GWS) theory. Within the GWS theory, neutrinos undergo weak interactions with
other leptons and quarks by the exchange of electrically charged W± and electrically
neutral Z⁰ bosons. Such processes are described by the coupling of so-called
charged (CC) or neutral currents (NC) to the respective W± or Z⁰ fields. The weak
interactions are able to transform a neutrino into its corresponding lepton of the same
generation. Because of quark mixing, weak interactions may transform quarks across
generations as well.
2.2 Deep Inelastic Neutrino-Nucleon-Scattering
The deep inelastic neutrino-nucleon scattering is of special interest to this thesis. In
this process the neutrino interacts with a quark in the atomic nuclei of the detector
material [11]:

$$\nu_l + N \rightarrow l + X \quad \text{(CC)} \qquad (2.1)$$
$$\nu_l + N \rightarrow \nu_l + X \quad \text{(NC)} \qquad (2.2)$$

where l = e, µ, τ, N denotes the nucleon and X the system of emerging hadrons.
The Feynman graphs for the two processes are given in figure 2.1.
In the case of a charged current interaction the neutrino will leave two detectable
fingerprints. First, the created lepton l and any possibly following charged secondary
particle or decay product may be detected by its emission of Čerenkov light (see
section 2.3.1). Secondly, the struck nucleon is destroyed, which can be explained in
the quark parton model [11]. In the nucleon's center-of-mass system the struck quark
and the two spectators move in opposite directions. Due to confinement, they
have to combine with quarks created from the vacuum to form hadronic final states.
This results in a shower of hadrons at the point of interaction (vertex).

Figure 2.1: Feynman graphs for charged (left) and neutral (right) current interactions
in deep inelastic neutrino-nucleon scattering.
In neutral current interactions, the transferred momentum will result in a hadronic
shower as well, but the emerging neutrino escapes from the detector. Only the
hadronic cascade at the vertex is evidence of the deposited energy.
The low interaction probability results from the extremely small cross section for
neutrino-nucleon scattering and the even smaller one for neutrino-electron scattering.
The cross section for a neutrino with energy Eν to interact with an isoscalar³
nucleon evaluates to [12]:

$$\frac{d\sigma}{dx\,dy} = \frac{G_F^2 M E_\nu}{\pi} \cdot
\begin{cases}
\left(\frac{M_W^2}{Q^2+M_W^2}\right)^2 \left[xq(x,Q^2) + x\bar{q}(x,Q^2)(1-y)^2\right] & \text{(CC)} \\[2ex]
\left(\frac{M_Z^2}{Q^2+M_Z^2}\right)^2 \left[xq^0(x,Q^2) + x\bar{q}^0(x,Q^2)(1-y)^2\right] & \text{(NC)}
\end{cases} \qquad (2.3)$$
where M, M_W and M_Z are the masses of the nucleon and the mediating bosons, G_F is
the Fermi constant, −Q² denotes the invariant momentum transfer, and q, q̄, q⁰, q̄⁰ are the
quark distribution functions, which depend on the Bjorken variables x = Q²/(2M(Eν − El))
and y = (Eν − El)/Eν. This cross section rises linearly with the neutrino energy
up to a few TeV. Thereafter a large fraction of the kinematically allowed regions in phase
space is suppressed by the propagator term in (2.3). This leads to a reduced growth
of the cross section.
As in equation (2.3), the cross section for neutrino-electron scattering is proportional
to the mass of the target [11]. Since the electron mass m_e is much smaller than
the mass of the nucleon M, this process is suppressed. One exception exists in the form
of the Glashow resonance [13] at E_ν^res = M_W²/2m_e ≈ 6.3·10¹⁵ eV, where W bosons are
produced in resonant ν̄_e e⁻ scattering. The cross sections for charged and neutral
current neutrino-nucleon scattering as well as the resonance are plotted in figure 2.2.

³ For simplicity, isoscalar nuclei, which have equal numbers of protons and neutrons,
are considered.
Figure 2.2: Cross sections for neutrino-nucleon scattering in CC (dashed) and
NC (dotted) interactions for neutrinos (blue) and antineutrinos (red). The
Glashow resonance is plotted in black. Picture taken from [14].
It is illustrative to translate the numeric absolute values of the cross section into
a mean attenuation length. The cross section is a measure of the interaction
probability. For a cuboid with edge lengths (a, a, dx) that contains n particles per unit
volume, this probability calculates to:

$$P = \frac{\sigma n a^2\, dx}{a^2} = \sigma n\, dx \qquad (2.4)$$
For a neutrino flux Φ that passes the cuboid this results in an exponential attenuation
with the characteristic length L:

$$d\Phi = -\Phi P = -\Phi n\sigma\, dx = -\frac{\Phi}{L}\,dx \quad\rightarrow\quad \Phi(x) = \Phi_0 \exp\left(-\frac{x}{L}\right) \qquad (2.5)$$
This mean free path can be calculated for a dense material like lead, with density
ρ = 11.34 g cm⁻³, molar mass M_Pb = 207.2 g mol⁻¹ and Avogadro's constant
N_A = 6.022·10²³ mol⁻¹:

$$L = \frac{1}{n\sigma} = \frac{M_{Pb}}{\rho\,\sigma\, N_A} = \frac{3.03\cdot 10^{-23}}{\sigma\,[\mathrm{cm}^2]}\;\mathrm{cm} \qquad (2.6)$$
To shield 99% of a beam of 1 TeV neutrinos with σ ≈ 10⁻³⁵ cm² one would need to
pile up 140·10⁶ km of lead, which is about 93% of the distance from the Earth to the
Sun.
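To make the scale concrete, the numbers above can be checked with a few lines of Python. This is an illustrative sketch using the constants quoted in the text; it is not part of the original analysis.

```python
# Numerical check of equations (2.4)-(2.6) and the lead-shielding estimate.
import math

N_A   = 6.022e23    # Avogadro's constant [1/mol]
rho   = 11.34       # density of lead [g/cm^3]
M_Pb  = 207.2       # molar mass of lead [g/mol]
sigma = 1e-35       # nu-N cross section at ~1 TeV [cm^2]

n   = rho * N_A / M_Pb          # target number density [1/cm^3]
L   = 1.0 / (n * sigma)         # mean free path, eq. (2.6) [cm]
x99 = L * math.log(100.0)       # depth that attenuates the flux to 1%

au = 1.496e13                   # astronomical unit [cm]
print(f"mean free path L = {L:.2e} cm")
print(f"99% shielding needs {x99 / 1e5:.2e} km "
      f"= {x99 / au:.0%} of the Earth-Sun distance")
```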
2.3 Physics of Cascades
Electromagnetic and hadronic cascades contain charged particles, which can emit
Čerenkov radiation. In IceCube the detection of cascades is based on the measurement
of this radiation. That is why the effect of Čerenkov radiation will be introduced
first. Afterwards, the characteristics of electromagnetic and hadronic cascades will be
presented.
2.3.1 Čerenkov radiation
When a charged particle moves through a relatively dense medium like ice at a speed
v = βc larger than the phase velocity of light in this medium, it will emit Čerenkov
radiation [15]. The phase velocity is v_p = c/n, where n is the refractive index of
the medium and c the speed of light in vacuum. A description of this effect already
follows from classical electrodynamics [16]. By traversing the medium, the particle
polarizes the electron clouds of the surrounding atoms. In the subsequent relaxation
they emit electromagnetic radiation. Because of the common source, this radiation
is coherent and may interfere constructively when the particle is fast enough. In
the Huygens construction shown in figure 2.3 the particle is an emitter of spherical
waves. These sum up to a notable wavefront in the form of a cone with the half-angle
θ_C, for which the relation holds:
θC , for which the relation holds:
cos θC =
vp
1
=
≤ 1.
βn
v
(2.7)
The angle θC denotes the direction of the main light output. For a relativistic particle
in ice with n ≈ 1.33 it is θC ≈ 41◦ .
If the particle carries the electric charge ze, the number of photons N_γ emitted
with energies in [E, E + dE] per track length dx is given by [17]:

$$\frac{d^2 N_\gamma}{dx\,dE} = \frac{\alpha z^2}{\hbar c}\left(1 - \frac{1}{\beta^2 n^2}\right), \qquad (2.8)$$
where α is the fine structure constant and ℏ is the reduced Planck constant. Two
aspects are remarkable. First, N_γ is proportional to dE ∝ dν ∝ dλ/λ², so ultraviolet
and blue wavelengths will dominate in a medium transparent for optical and
nearby wavelengths. Secondly, if one integrates (2.8) over a given wavelength interval
and assumes that n(λ) does not vary too much within this interval, one obtains:

$$\frac{dN_\gamma}{dx} \approx 2\pi\alpha z^2 \left(1 - \frac{1}{\beta^2 n^2}\right) \int_{\lambda_1}^{\lambda_2} \frac{d\lambda}{\lambda^2} = 2\pi\alpha z^2 \left(1 - \frac{1}{\beta^2 n^2}\right) \frac{\lambda_2 - \lambda_1}{\lambda_1 \lambda_2}. \qquad (2.9)$$
So the number of emitted photons depends on the track length of the particle. Finally,
it should be noted that the energy loss induced by Čerenkov light emission is negligible
compared to ionization losses [18].
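As an illustration of equations (2.7) and (2.9), the following sketch evaluates the Čerenkov angle and the photon yield for a relativistic, singly charged particle in ice; the wavelength band is a representative choice for illustration, not a value from the text.

```python
# Cherenkov angle, eq. (2.7), and photon yield, eq. (2.9), in ice.
import math

alpha = 1.0 / 137.036   # fine structure constant
n     = 1.33            # refractive index of ice
beta  = 1.0             # particle velocity in units of c
z     = 1               # electric charge in units of e

theta_c = math.degrees(math.acos(1.0 / (beta * n)))    # eq. (2.7)

lam1, lam2 = 300e-9, 600e-9                            # wavelength band [m]
dN_dx = (2.0 * math.pi * alpha * z**2
         * (1.0 - 1.0 / (beta**2 * n**2))
         * (lam2 - lam1) / (lam1 * lam2))              # eq. (2.9) [photons/m]

print(f"Cherenkov angle: {theta_c:.1f} deg")           # ~41 deg
print(f"photon yield:    {dN_dx:.0f} photons/m")       # ~3e4 per meter
```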
Figure 2.3: Huygens construction of a Čerenkov light emitting particle. In the left
sketch it moves slower than the speed of light in this medium, while in
the right sketch it moves faster. The emitted spherical waves sum up to
a cone-shaped light front following the particle.
2.3.2 Electromagnetic Cascades
An electron emerges from the charged current interaction of an electron neutrino. A
highly energetic electron passing through matter will lose its energy mainly by emitting
bremsstrahlung [18]: in the presence of the Coulomb field of a nearby nucleus the
electron may emit an energetic photon. If the photon energy is larger than twice the
electron's rest mass, it can create an electron and a positron in the process of pair
production. These two particles can again emit bremsstrahlung photons, which in
the following leads to the development of an avalanche of electrons, positrons and
photons. This electromagnetic cascade grows as long as the created particles have
enough energy to continue the process. For electrons at lower energies, the losses
due to the ionization or excitation of the surrounding atoms gain importance, while
for photons Compton scattering and the photoelectric effect take over.
The energy loss of an electron due to bremsstrahlung in the length interval dx is
found to be proportional to its energy:

$$\left.\frac{dE}{dx}\right|_{brems} = -\frac{E}{X_0} \quad\rightarrow\quad \langle E \rangle = E_0 \exp\left(-\frac{x}{X_0}\right), \qquad (2.10)$$
yielding an exponential distance dependence for the electron energy. It is characterized
by the radiation length X₀. This is the mean distance after which the electron
energy has dropped to 1/e of its initial value E₀. The same length X₀ is also important
for high-energy photons, since it amounts to about 7/9 of their mean free path
for pair production [17]. Usually one factorizes the density of the medium into dx
and X₀ to ease tabulations of the radiation length. Tables of this quantity can be
found in [19]. For water the radiation length is X₀ = 36.08 g cm⁻², and the density
of ice is about ρ_ice = 0.92 g cm⁻³.
For lower energies the energy loss due to ionization can be described by the
Bethe-Bloch formula:

$$\left.\frac{dE}{dx}\right|_{ion} = \frac{4\pi N_A z^2 \alpha^2 Z}{m_e v^2 A}\left[\ln\left(\frac{2 m_e v^2}{I(1-\beta^2)}\right) - \beta^2\right], \qquad (2.11)$$

where m_e is the electron mass, z and β = v/c denote the particle's charge and
velocity, and the medium is characterized by its atomic number Z, mass number A
and effective ionization potential I. Avogadro's number N_A and the fine structure
constant α occur as well.
In order to define a boundary between the two energy regimes in which the respective
process dominates, the critical energy E_c is introduced. It marks the cut-off
energy below which an electron will not further support the development of the cascade.
Instead it deposits its energy in ionization and excitation processes. An
approximation formula for E_c in a solid medium with atomic number Z can be found
in [18]:

$$E_c: \quad \left.\frac{dE}{dx}\right|_{ion} = \left.\frac{dE}{dx}\right|_{brems}, \qquad E_c = \frac{610\ \mathrm{MeV}}{Z + 1.24}. \qquad (2.12)$$
Because of the similarities between pair production and bremsstrahlung one can describe
electromagnetic cascades with a simple model that is nevertheless able to explain
characteristics of real electromagnetic cascades [17]. After one radiation length an
electron or positron will emit a photon, which creates an e⁺e⁻ pair after covering the
same distance (see figure 2.4). In each of these processes the energy of the primary
particle will be split equally between the two secondaries. Thus, the shower will
develop longitudinally in steps that are multiples of X₀. In each such generation t
the number of particles N is doubled:

$$t = \frac{x}{X_0}, \qquad N = 2^t, \qquad E(t) = \frac{E_0}{2^t}. \qquad (2.13)$$
In the limit there will be equal numbers of electrons, positrons and photons. This
process will continue until the energy drops below the critical energy E_c. So in the
last generation t_max the number of particles N_max is given by:

$$t_{max} = \frac{\ln(E_0/E_c)}{\ln 2}, \qquad N_{max} = \exp(t_{max} \ln 2) = \frac{E_0}{E_c}. \qquad (2.14)$$
Thus, the length of the shower scales logarithmically with its energy. Two other
important properties can be derived. The total number of particles with energies
exceeding E is proportional to the shower energy:

$$N(>E) = \int_0^{t(E)} N\, dt = \int_0^{t(E)} \exp(t \ln 2)\, dt \approx \frac{\exp(t(E)\ln 2)}{\ln 2} = \frac{E_0/E}{\ln 2}. \qquad (2.15)$$

Figure 2.4: Illustration of a simple model to mimic an electromagnetic cascade. After
each multiple of the radiation length each electron or positron (straight
lines) will emit a bremsstrahlung photon (curly lines) that again creates
an e⁺e⁻ pair.
Since 2/3 of the particles are charged, they can emit Čerenkov light. As shown in
equation (2.9), the number of Čerenkov photons emitted by a charged particle is
proportional to its total track length. By integrating the track lengths of all charged
particles in this model we obtain the total track length:

$$L = \frac{2}{3}\int_0^{t_{max}} N\, dt = \frac{2}{3}\,\frac{E_0}{\ln 2\; E_c} \propto \frac{E_0}{E_c}. \qquad (2.16)$$
The total track length integral is proportional to the energy of the incident electron.
Therefore the Čerenkov light yield of the resulting cascade is proportional to the
electron's energy as well. With this relation in hand, a reconstruction of the deposited
energy by counting the number of emitted Čerenkov photons becomes possible.
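The doubling model lends itself to a direct numerical sketch. The critical energy is taken from the approximation (2.12); the effective atomic number used for water/ice is an assumption made here for illustration.

```python
# Toy cascade model of section 2.3.2, eqs. (2.12)-(2.16).
import math

Z  = 7.42                  # effective atomic number of water/ice (assumption)
Ec = 0.610 / (Z + 1.24)    # critical energy, eq. (2.12) [GeV] -> ~70 MeV

def heitler(E0):
    """Shower maximum [X0], maximum particle number and total charged
    track length [X0] of the doubling model, for energy E0 in GeV."""
    t_max = math.log(E0 / Ec) / math.log(2.0)          # eq. (2.14)
    N_max = E0 / Ec                                    # eq. (2.14)
    L     = (2.0 / 3.0) * (E0 / Ec) / math.log(2.0)    # eq. (2.16)
    return t_max, N_max, L

t_max, N_max, L = heitler(1e5)   # a 100 TeV electron
print(f"t_max = {t_max:.1f} X0, N_max = {N_max:.1e}, L = {L:.1e} X0")
```

The printed track length grows linearly with E0, which is exactly the proportionality that makes a calorimetric energy reconstruction from the Čerenkov photon count possible.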
Compared to muons, which can pass kilometers of ice before they have lost all of
their energy in ionization and bremsstrahlung processes, cascades occupy a rather
small region in space. The longitudinal energy deposition can be parametrized with
a gamma distribution [20]:

$$\frac{dE}{dt} = E_0\, b\, \frac{(tb)^{a-1} \exp(-tb)}{\Gamma(a)}, \qquad (2.17)$$
where Γ is the gamma function [21], E₀ is the cascade energy and t is the length in
units of the radiation length. In simulations of cascades in water, the parameters a
and b were determined to be [20]:

$$a = 2.03 + 0.604 \ln(E_0/\mathrm{GeV}), \qquad b = 0.633. \qquad (2.18)$$
The gamma distribution has its maximum at t_max = (a − 1)/b. Thus, the position of
the shower maximum of the simulated cascades exhibits the same logarithmic dependence
on E₀ that the simple model predicted in equation (2.14). With the radiation length and
the density of ice given above, a 100 TeV cascade will have its maximum
at about 5 m.
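A minimal sketch of this parametrization, reproducing the quoted ~5 m shower maximum for a 100 TeV cascade from equations (2.17) and (2.18):

```python
# Depth of the shower maximum from the longitudinal profile, eqs. (2.17)-(2.18).
import math

X0_WATER = 36.08    # radiation length of water [g/cm^2]
RHO_ICE  = 0.92     # density of ice [g/cm^3]

def shower_max_depth_cm(E0_gev):
    a = 2.03 + 0.604 * math.log(E0_gev)   # eq. (2.18)
    b = 0.633
    t_max = (a - 1.0) / b                 # maximum of the gamma distribution
    return t_max * X0_WATER / RHO_ICE     # radiation lengths -> cm

print(f"100 TeV cascade peaks at {shower_max_depth_cm(1e5) / 100:.1f} m")  # ~5 m
```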
For very highly energetic cascades the Landau-Pomeranchuk-Migdal (LPM) effect
has to be taken into account. Above 100 PeV the cross sections for pair production
and bremsstrahlung are suppressed because of interference between amplitudes from
different scattering centers [18]. This effectively yields an elongation of the cascade.
The LPM effect will not be considered further in this thesis, but the opportunities it
provides for the reconstruction of cascades were discussed recently [14].
The transverse spread of the shower is given in units of the Molière radius

$$R_M \approx 21\ \mathrm{MeV} \cdot X_0/E_c. \qquad (2.19)$$

About 99% of the cascade energy is deposited in a cylinder with a radius of about
3.5 R_M [18]. For ice this means a spread of about 35 cm.
The angular distribution of the emitted Čerenkov light has to be considered as well.
In equation (2.7) it is shown for a single particle that a preferred direction of emission
exists in the form of the Čerenkov angle θ_C. In the case of a single muon traversing the
detector, this well-defined angular distribution allows a reconstruction of the muon
direction by observing solely its Čerenkov light emission. For particle showers the
situation is more complex because they consist of many particles with successively
decreasing energies and hence different directions. Because of the Lorentz boost
of the primary, particles created earlier in the shower have similar directions, but
the cascade spreads out at later development stages. To address this complexity,
simulations of electromagnetic cascades were performed (see e.g. [20]). From these
we know that the angular distribution of Čerenkov photons is still peaked at the
Čerenkov angle, but also shows the expected contributions from other directions. A
parametrization of this distribution has been given in [20].
To conclude this section, it can be summarized that highly energetic electrons create
electromagnetic cascades that emit Čerenkov light in a well-contained volume. This
light has an angular distribution that peaks at the Čerenkov angle θ_C. The number
of emitted photons relates to the energy of the primary electron.
2.3.3 Hadronic Cascades
As mentioned above, a highly energetic deep inelastic neutrino interaction results in a
jet of hadronic particles that emerges from the interaction point. Such a hadronic
cascade is more complicated than its electromagnetic counterpart, since more processes
are involved in its development. The Čerenkov light yield differs between them,
too. The constituents of a hadronic shower do not all emit Čerenkov light: not
all emerging particles are charged, e.g. neutrons, and others may not fulfill the
requirement for Čerenkov light emission in equation (2.7) because of their mass. Also
the necessity to consider the fraction of energy disappearing into the binding energies
of the hadrons makes hadronic cascades different [22]. As a result, the important
energy-to-track-length relation, which is given for electromagnetic cascades in
equation (2.16), does not hold for hadronic cascades. The latter will always emit fewer
photons. The fact that more processes are involved also makes the development of
hadronic cascades more vulnerable to statistical fluctuations than that of
electromagnetic cascades.
Nevertheless, hadronic showers will create neutral pions that decay into photons
and therefore create an electromagnetic sub-cascade. This electromagnetic contribution
becomes more important at higher energies, since more π⁰ are created. In total
this makes the hadronic cascade more electromagnetic-like [22]. In a first
approximation hadronic cascades can therefore be treated as electromagnetic showers with
a reduced energy to account for the smaller light output. The scaling factor

$$F = \frac{E_{hadron}}{E_{em}} \qquad (2.20)$$

was evaluated in [22] and is visualized in figure 2.5. According to this, hadronic
cascades of a given energy have the same light output as an electromagnetic cascade
with roughly 80% of that energy. For higher energies the scaling factor approaches
unity. In IceCube simulations the scaling factor is slightly randomized to account for
fluctuations.
Figure 2.5: Mean energy scaling factor for hadronic cascades. The standard deviation
(green) accounts for stochastic fluctuations. Values are taken from [22].
3 The IceCube Detector
In this chapter the IceCube detector will be introduced. First the principles of light
detection with photomultipliers will be reviewed. Afterwards, the layout of the detector
and its data acquisition hardware will be described. This description will concentrate
on those components which are of relevance to the reconstruction algorithm. A
subsection on event topologies will point out how cascades manifest in the detector.
Then the focus is put on the optical properties of the Antarctic ice and the simulation
of light propagation in this medium. This finally allows us to study the expected
distribution of Čerenkov light emitted by cascades in IceCube.
3.1 The Basic Concept of Photomultipliers
The light flux in neutrino telescopes ranges from a single photon to several thousand
photons within a few microseconds. This makes photomultiplier tubes (PMTs)
[23] the first choice, since a PMT is able to generate a measurable electronic signal
from a single photon, has a large dynamic range and is fast enough to preserve the
information stored in the arrival times of the photons.

When photons enter the glass vacuum tube, they hit the photocathode, where with
some probability they release photoelectrons via the photoelectric effect [24]. The
electron yield of this effect depends on the material of the cathode and is quantified
as the ratio of emitted electrons to the number of incident photons. This is called the
quantum efficiency ε.
The photocathode is followed by a set of increasingly positively
charged dynodes hit by the electrons (see figure 3.1). As a result of the acceleration
between the dynodes, the number of emitted electrons multiplies from step to step,
yielding an electron avalanche at the final anode. From there the electrons flow through a
resistor to ground. The voltage drop U over the resistor R in a short time interval
∆t is proportional to the number of photoelectrons N_e. The gain G is the ratio of
electrons at the anode to the initial photoelectrons N_e. Therefore, it is a measure
of the amplification that the PMT has performed. It depends on the high voltage
between the dynodes and has to be determined during the calibration of the device.
The time-resolved recording of this voltage is called the waveform. It encodes the arrival
times of the incident photons. These times are shifted by the time that the electrons
need to traverse the PMT. For IceCube DOMs this PMT transit time is about 60 ns
[25]. The total number of recorded photoelectrons is obtained from the waveform by
integration. For small time intervals one obtains:

$$U = RI = R\,\frac{dQ}{dt} \approx RG\,\frac{e N_e}{\Delta t}, \qquad (3.1)$$
$$N_e = \frac{\Delta t}{eRG}\,U, \qquad N_e = \epsilon N_\gamma. \qquad (3.2)$$
Relation (3.1) assumes that all electrons that reach the anode originate from the
incident photon and that they were equally accelerated. In reality the multiplication
process is of stochastic nature, yielding significant variations in pulse shape and
amplitude. Photoelectrons that were backscattered from the first dynode, or avalanches
starting at later dynodes, result in recorded charges much lower than expected for a
single photoelectron [26]. By histogramming the measured charge from several single
photoelectron pulses, one obtains for the IceCube PMTs the distribution shown in
figure 3.2. The peak in this histogram indicates the expected charge for one
photoelectron. The tail towards lower charges in this distribution implies that the
recording of several single photoelectron pulses results in a mean collected charge of
only 85% of the expected charge.
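As a toy illustration of relation (3.2), the following sketch integrates a baseline-subtracted waveform to a photoelectron count; the resistor value and pulse shape are assumptions for the sketch, not IceCube calibration constants.

```python
# Converting a digitized waveform to photoelectrons via eq. (3.2).
E_CHARGE = 1.602e-19   # elementary charge [C]
R        = 50.0        # load resistor [Ohm] (assumed value)
G        = 1e7         # PMT gain (typical operating point, section 3.2)
DT       = 3.3e-9      # sampling interval [s], ~300 MHz

def npe_from_waveform(samples_volts):
    """Photoelectron count from baseline-subtracted voltage samples."""
    charge = sum(samples_volts) * DT / R    # Q = integral of U/R dt
    return charge / (E_CHARGE * G)          # eq. (3.2)

# toy example: a rectangular ~7 ns, 11 mV pulse -> roughly one photoelectron
print(f"{npe_from_waveform([1.1e-2] * 2):.2f} pe")
```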
3.2 The Layout of the IceCube Detector
The IceCube detector site is located at the geographic South Pole, where the
Antarctic ice provides abundant target material with good optical properties. The
detector is currently under construction and should be finished in 2011. According to the
planned layout [6] it will consist of 4800 self-contained light sensors, the so-called
Digital Optical Modules (DOMs). The DOMs will be distributed over a volume of
1 cubic kilometer at depths of 1.45 to 2.45 km. Eighty holes are melted into the ice
with hot water; the horizontal distance between them is 125 m. Cables are deployed into
the holes and the DOMs are fixed to them. Besides mechanical stability during
the refreezing, the cables provide each DOM with power and a communication link.
Each such string will hold 60 DOMs, vertically spaced by 17 m. Since the experiment
is constructed around its predecessor AMANDA, a more densely instrumented region
is available. The IceCube detector is further accompanied by a surface air shower
detector called IceTop [6], which has its own rich physics program. But it can also
be used to enhance the analysis of IceCube events, e.g. by identifying background
events: when IceCube reconstructs a muon that coincides with a measurement in
IceTop, it is most probably of atmospheric origin.
Figure 3.1: Schematic drawing of a photomultiplier. The incident photons hit the
photocathode, from which electrons are then emitted. These are accelerated in a set
of positively charged dynodes until they are collected at a final anode. The voltage
drop over the resistor R is the recorded signal. For a clearer presentation the high
voltage supply of the dynodes is omitted.

Figure 3.2: The average charge response of IceCube PMTs to a single photoelectron
(red). The average is taken over the response from numerous PMTs, of
which some are plotted (black). Picture taken from [27].

Figure 3.3: Layout of the IceCube detector. Here the 80 strings of the in-ice detector
are shown, as well as the 160 IceTop tanks (each two form a station) and
the counting house at the surface. The predecessor detector AMANDA is
indicated by the blue cylinder. The comparison to the Eiffel Tower may
give an impression of the instrument's size.

Each DOM contains a PMT with a large-area photocathode, along with the necessary
readout electronics, high voltage supply and an LED beacon for calibration
purposes, all surrounded by a glass pressure housing. The device is sketched
in figure 3.4. For the PMT, the R7081-02 from Hamamatsu with 10 dynodes and a
diameter of 25 cm was chosen. The photocathode yields a peak quantum efficiency of
about 25% at 420 nm [27]. Typically it is operated at a gain of 10⁷. Surrounded by
a mu-metal cage to shield it from the magnetic field of the Earth, the PMT is glued
to the pressure housing with a room-temperature vulcanizing gel. Besides providing
mechanical stability, the gel enhances the transmission of light from the glass to the
PMT. The electronics are fixed to the neck of the PMT. The spherical pressure
housing is made of borosilicate glass and represents a compromise between mechanical
stability, optical transmission and potassium content. The material allows for
optical transmission down to wavelengths of 350 nm (UV) and is able to withstand
the immense pressures of up to 400 atm during the refreezing. The potassium content
of the glass is critical owing to its fraction of ⁴⁰K. This isotope undergoes beta
decay, producing Čerenkov-light-emitting electrons in the vicinity of the PMT.
Together with the contribution of the PMT itself, each DOM has therefore an average
noise rate of 700 Hz in total darkness [28].
The most important information for later event reconstruction is contained in the
arrival times of the photons. This information should be digitized and timestamped
as soon as possible to make it robust against distortion during transmission through
the large detector. Thus, it has to be done in the DOM. A single electron emitted
from the photocathode yields a pulse of 7 ns width. To digitize such a pulse a
sampling rate of ≈ 300 MHz (3.3 ns/sample) is necessary. Together with a desirable
resolution of 14 bit and the restriction in power dissipation that the remote detector
site dictates, this is hard to achieve with conventional, permanently operating flash
ADCs. Therefore, a custom chip with an internal capacitor array, called the Analog
Transient Waveform Digitizer (ATWD), was designed. It starts sampling only on
request. It provides sampling speeds of 200 to 700 megasamples per second and has 4
input channels, which can capture 128 samples with a resolution of 10 bit. Three
of these are connected to different amplifiers, with gains successively increasing by
a factor of 8. By routing the PMT signal to all three input channels and choosing
the most amplified but still unsaturated one, the ATWD as a whole covers a wide range
and effectively yields the desired resolution; a sketch of this channel selection is
given below. The fourth channel is used during the calibration of the device. To
reduce the dead time of the DOM, the mainboard holds two ATWDs that are used
alternately.
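The channel selection could look like the following sketch; the saturation threshold and gain values are assumptions made here for illustration, not DAQ constants.

```python
# Picking the most amplified, unsaturated ATWD channel (illustrative sketch).
SATURATION = 1022           # counts, just below the 10-bit maximum (assumption)
GAINS = [16.0, 2.0, 0.25]   # relative channel gains, factors of 8 apart (assumption)

def select_channel(channels):
    """channels: three 128-sample waveforms, ordered from highest to lowest gain.
    Returns (channel index, waveform rescaled to a common gain)."""
    for i, wf in enumerate(channels):
        if max(wf) < SATURATION:                    # first unsaturated channel wins
            return i, [s / GAINS[i] for s in wf]
    # all channels saturated: fall back to the lowest-gain one
    return len(channels) - 1, [s / GAINS[-1] for s in channels[-1]]
```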
When operated at a rate of about 300 MHz, this fast digitization system has a
very narrow time window of about 450 ns. It is therefore complemented by a much
slower commercial 10 bit flash ADC (fADC), operated at 40 MHz (25 ns/sample)
with 256 readout bins. It allows the PMT response to be recorded coarsely over 6.4 µs.
By routing it through an 11.2 m long stripline, the input of the ATWD is delayed by
75 ns. This leaves enough time to start the sampling.
To separate noise hits from the real signal, one makes use of the fact that the
probability for two noise hits in neighboring DOMs is very small. Neighboring DOMs
are therefore connected with a separate cable, allowing them to inform each other
when their trigger threshold is exceeded. So a local coincidence (LC) criterion can
be implemented at the hardware level: each DOM only starts the recording if one of its
neighbors has also seen a signal within a time window of typically 800 ns.
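In pseudocode terms, the LC criterion amounts to the following sketch; the hit representation is hypothetical, and only nearest vertical neighbors on the same string are checked.

```python
# Local coincidence filter (illustrative sketch of the criterion described above).
LC_WINDOW = 800.0   # ns

def passes_lc(hits):
    """hits: list of (string, om_position, time). Returns the LC-flagged subset."""
    kept = []
    for s, om, t in hits:
        for s2, om2, t2 in hits:
            if s2 == s and abs(om2 - om) == 1 and abs(t2 - t) <= LC_WINDOW:
                kept.append((s, om, t))
                break
    return kept

print(passes_lc([(1, 30, 0.0), (1, 31, 350.0), (1, 50, 360.0)]))
# -> the pair on neighboring DOMs 30/31 survives, the isolated hit does not
```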
The whole logic is implemented on the DOM mainboard, which holds a Field
Programmable Gate Array (FPGA) controlled by a 32-bit CPU. Its software can be
updated after deployment, making the DOM adjustable to prospective requirements.
A schematic of the mainboard components is given in figure 3.4.
Figure 3.4: Top: Illustration of IceCube's basic component, the Digital Optical
Module. Bottom: A schematic view of the DOM mainboard. The
described digitization hardware is located in the upper left.
Figure 3.5: IceCube's event display for different types of simulated events. DOMs
are represented by dots that are scaled and colored when the DOM
triggered. The size denotes the number of registered photons, while the
color encodes the arrival time.
3.3 Event Topologies
The charged current interactions of the three neutrino flavors are illustrated in figure
3.5, which shows the IceCube event display for three example events. Depending on
the flavor of the primary neutrino and the type of interaction, one can distinguish
between two different types of events. They differ in the resulting hit pattern and in
the reconstruction strategy.
When a muon neutrino undergoes a charged current interaction (see equation 2.1),
it will generate a muon. Along the muon track through the detector its Čerenkov
cone illuminates nearby DOMs. These events are called track-like. Reconstruction
strategies were developed to infer the muon direction and energy [29, 30]. Especially
the direction of muons is well reconstructable as the continuous light emission along
the track gives many measurement points to guide the fit. There are also chances
that the muon passes several DOMs at small distances. Then the DOMs record more
useful information, which is less distorted by light scattering effects in the ice.
The hit pattern of a cascade looks different, and events showing this topology are
called cascade-like. Compared to the dimensions of the detector, the region in which
a cascade generates light is so small that the cascade can be considered point-like.
Centered at the interaction vertex, a spherical region of hit DOMs marks the
deposition of energy. A detailed description of reconstruction strategies for these
events will be given in the next chapter.
A special case of a cascade-like event can be found in the signature of a CC
interaction of a τ-neutrino. These so-called double-bang events show two cascade-like
hit patterns that may be well separated at PeV energies. The first one results
from the hadronic cascade at the vertex. At sufficiently high energies, the emerging
τ can travel far enough to let its decay products develop an additional cascade in
another part of the detector.
This makes the importance of the cascade channel obvious: it is sensitive to electron
and τ-neutrino events as well as to neutral current interactions. It can also contribute
to the reconstruction of highly energetic muons because of the occurrence of
bremsstrahlung cascades along their tracks.
3.4 Light Propagation in Glacial Ice
Between its emission and its detection, light travels up to several hundred meters
through the ice and is affected by scattering and absorption. One has to account for
these effects to be able to draw precise conclusions on the primary particle from
the recorded light. Thus, an overview of these processes as well as a description of
their implementation in the simulation and reconstruction of IceCube events will be
given in this section.
3.4.1 Scattering and Absorption
Scattering means the deflection of photons off microscopic regions which have a
different refractive index than the surrounding medium. The experiment faces an
abundance of such scattering centers of varying sizes in the form of air bubbles and dust
grains. The effect can be parametrized by two quantities: first, the geometric mean
free path λs, which gives the average distance between two scattering events, and
secondly, the average cosine ⟨cos Θ⟩ of the angle between the photon direction before
and after the deflection. For the Antarctic ice one finds the scattering to be strongly
forward peaked with ⟨cos Θ⟩ = 0.94 or ⟨Θ⟩ = 20°, respectively [31]. Both parameters
can be combined into an effective scattering length λe, denoting the length after which
successive anisotropic scattering events yield the same randomization as isotropic
scattering. If n scattering events are necessary for this, the distance calculates to:
$$\lambda_e = \lambda_s \sum_{i=0}^{n} \langle\cos\Theta\rangle^i \;\xrightarrow{\;n\to\infty\;}\; \frac{\lambda_s}{1 - \langle\cos\Theta\rangle}. \qquad (3.3)$$
Furthermore, visible and near-ultraviolet photons can be absorbed due to electronic
and molecular excitation processes. Again this can be parametrized by a characteristic
distance λa, after which the photon's survival probability drops to 1/e. It is
called the absorption length.

For both effects the probabilities to occur are described by exponential distributions,
which are characterized by the respective lengths or their reciprocal values,
the scattering and absorption coefficients:

$$b_e = \frac{1}{\lambda_e}, \qquad a = \frac{1}{\lambda_a}. \qquad (3.4)$$

Figure 3.6: Absorption and scattering coefficients of the Antarctic ice. Picture taken
from [31].
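A quick numerical illustration of equations (3.3) and (3.4); the geometric scattering length used here is an example value, not a measured one.

```python
# Effective scattering length for strongly forward peaked scattering.
COS_THETA = 0.94                      # <cos Theta> in ice [31]
lam_s     = 2.0                       # geometric mean free path [m], example value

lam_e = lam_s / (1.0 - COS_THETA)     # effective scattering length, eq. (3.3)
b_e   = 1.0 / lam_e                   # effective scattering coefficient, eq. (3.4)

print(f"lambda_e = {lam_e:.1f} m, b_e = {b_e:.3f} m^-1")  # 2 m -> ~33 m
```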
3.4.2 Ice Model
Glacial ice is an optically inhomogeneous medium. Several of its properties that
have an impact on the transmission of light, like temperature, pressure and the
concentrations of air bubbles or dust, change with increasing depth. In-situ measurements
of these properties were made with IceCube's predecessor AMANDA and are now
complemented by measurements during the deployment of new IceCube strings [31].
Both analyze the detector response to artificial light sources in the detector volume.
The measurements result in an ice model that is summarized in figure 3.6. It
shows the effective scattering coefficient and the absorption coefficient (absorptivity)
as functions of the wavelength and the depth. The wavelength interval that needs to
be considered here is limited in the ultraviolet by the transmission properties of the
glass pressure housing (compare section 3.2). For longer wavelengths it is limited by
the dropping quantum efficiency of the PMT. The coefficients' depth dependencies
reveal the layered structure of the ice, which originates from climatological changes
during the formation of the ice layer. Depths down to 1.3 km are rich in air bubbles,
which lead to strong scattering. Going deeper, the pressure rises and the formerly
gaseous molecules are trapped in the molecular lattice of the ice [32]. Such a solid air
hydrate has a refractive index close to that of normal ice, which reduces scattering.
Between 1.4 km and 2.1 km, four distinct peaks in both plots show the existence of
dust layers that pervade the otherwise clear ice. Since IceCube will range from 1.45 km
to 2.45 km depth, these will have an effect on measurements in the upper half of the
detector.
The ice model illustrated in figure 3.6 was named the millennium model. Recently, it
was superseded by the aha model, which addresses some problems of the millennium
model. It corrects an intrinsic smearing effect in the analysis method that was used
to derive the model, and it adds further information gathered from ice cores. It
suggests that the ice below the big dust layer and between the other dust layers is
much cleaner than previously expected. On the other hand, scattering and absorption
in the dust layers are assumed to be stronger.
Finally the bulk ice model should be mentioned. It does not contain any layered
structure and therefore treats the whole detector medium as optically homogeneous.
It serves well for less elaborate calculations.
3.4.3 Photonics
Given its stochastic nature and the non-trivial inhomogeneity of the medium, light
propagation in IceCube defies analytical treatment. Instead, a detailed Monte Carlo
simulation is necessary that incorporates all presented effects by tracking each photon
on its path through the detector. For this purpose the Photonics simulation
program [33] was developed. It is a freely available software package that can calculate
the mean expected light flux and time distributions for various light sources
in different parts of the detector. The tabulated results are used in the simulation of
IceCube events and can also be used for the reconstruction.

Photonics can generate light flux tables for point-like anisotropic light emitters
at rest, as well as for moving light sources or volume light sources. If one chooses the
characteristic angular distribution of a Čerenkov emitter, a point-like light source
adequately mimics a cascade in the detector. Furthermore, because of the principle of
superposition, extended or continuous light sources like relativistic muons can
be simulated by integrating over many such point-like emitters. In this thesis
Photonics will only be used to calculate the light yield of single cascades. The following
description of the simulation will therefore be restricted to this special case.
The ice models are incorporated in the simulation in the form of the group and phase
refractive indices n_g, n_p, the average scattering angle ⟨cos Θ⟩ and the introduced
effective scattering and absorption lengths λe, λa. These quantities are functions of
wavelength and depth. In the simulation [8] a given number of photons is created.
Each of them is tracked as it propagates through the medium. The length between
two scattering events is randomly drawn from the exponential distribution characterized
by λe, while the survival probability is continuously updated according to the
exponential distribution characterized by λa. Whenever a photon enters a region with
different optical properties, the distributions change accordingly.
At a given point and time the photon flux Φγ can be estimated from these calculations.
Knowing the detection area and quantum efficiency of an IceCube PMT
and assuming azimuthal symmetry, this flux can be translated into a photoelectron
flux Φnpe (number of photoelectrons per DOM and unit time interval). For efficient
tabulation it is recorded as a function of the delay time with a unit interval of 1 ns:

$$t_d = t - t_{geo} - t_{cscd}, \qquad t_{geo} = \frac{|\vec{x}_{cscd} - \vec{x}_{om}|}{c_{ice}}. \qquad (3.5)$$
(3.5)
Without scattering, photons starting at the vertex of the cascade ~xcscd at time tcscd
would propagate straight to the detector at ~xom , where they arrive at time t. With
the speed of light in ice cice they need the time tgeo . The residual or delay time
td is the additional time that the photon needs due to scattering that increases its
path length. Earlier arrival times than tgeo cannot be caused by the emissions of the
cascade at ~xcscd .1
From the flux at point x one can derive the delay time probability density function
(pdf):

$$\frac{dP}{dt}(\vec{x}, t_d, C) = \frac{\Phi_{npe}(\vec{x}, t_d, C)}{\langle\mu^\infty\rangle(\vec{x}, C)}, \qquad (3.6)$$

where C summarizes the properties of the cascade² that caused this photon
flux. The variable ⟨µ∞⟩ denotes the integrated flux, which is the average number of
photoelectrons recorded by a DOM if the time window allows it:

$$\langle\mu^\infty\rangle(\vec{x}, C) = \int_0^\infty \Phi_{npe}(\vec{x}, t_d, C)\, dt_d. \qquad (3.7)$$
By construction, these two quantities contain all necessary information to calculate,
for a given combination of a cascade and a DOM, the number of expected photoelectrons
µ in a certain time interval. The whole timing information is contained in the
pdf dP/dt, while the mean number of photoelectrons scales the intensity according
to the cascade energy. A sketch of how these pieces combine is given below.
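The following sketch shows how equations (3.5)-(3.7) combine to give the photoelectrons expected in a readout window. The functions mu_inf and delay_pdf stand in for the Photonics table lookups; they are placeholders for illustration, not the real Photorec API.

```python
# Expected photoelectrons in a time window from the tabulated quantities.
import math

C_ICE = 0.2219   # speed of light in ice [m/ns] (approximate group velocity)

def t_geo(x_cscd, x_om):
    """Straight-path travel time from vertex to DOM, eq. (3.5)."""
    return math.dist(x_cscd, x_om) / C_ICE

def expected_npe(x_om, cascade, t1, t2, mu_inf, delay_pdf):
    """Expected photoelectrons in [t1, t2]; cascade is a dict with keys
    'pos' and 't' (hypothetical representation of the parameter set C)."""
    t0 = cascade["t"] + t_geo(cascade["pos"], x_om)
    td1, td2 = max(t1 - t0, 0.0), max(t2 - t0, 0.0)
    if td2 <= td1:
        return 0.0
    # integrate the delay-time pdf, eq. (3.6), numerically over the window
    steps = 100
    dt = (td2 - td1) / steps
    p = sum(delay_pdf(x_om, td1 + (i + 0.5) * dt, cascade) * dt
            for i in range(steps))
    return mu_inf(x_om, cascade) * p    # scaled by eq. (3.7)
```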
The tabulated results can be read by two interfaces: either with PSInterface or
with Photorec. Because of the tabular storage the numerical values are naturally
discrete. To compensate for this, Photorec incorporates interpolation mechanisms.
When successive queries cross cell borders, the interpolation ensures that the
returned values are continuous. It was therefore chosen for this thesis.

¹ This is not necessarily true for moving light sources like muons, as for these the point
of emission may move faster in ice than the emitted photons. Furthermore, some
tolerance has to be given to negative delay times caused by the limited time resolution
of the hardware.
² A description of the parameter set C is given in 4.1.1.

In the selection of the tables one has to consider their binning. For layered ice,
fine-grained tables with ∆z = 20 m and ∆Θ = 10° exist, but due to their size they are
difficult to handle. For an effective usage during the reconstruction, a logic would be
necessary that loads only the needed parts of the tables into the computer memory.
Instead, the smaller, more sparsely binned tables with ∆z = 80 m and ∆Θ = 60° were
used in this thesis. Nevertheless, the memory consumption of these tables is in the
range of hundreds of megabytes. The sparse binning is partly compensated by the
interpolation mechanisms of Photorec. The used table sets are listed in table 3.1.

  table description                      name                        ∆z     ∆Θ
  millennium ice model, 10 ns jitter     wl06v210 [34]               80 m   60°
  bulk ice model                         swe 06 he i3-bulkptd [35]   —      —

Table 3.1: Used Photonics tables.
The tables are also available in a version in which the delay time pdf is convolved
with a σ = 10 ns Gaussian. This will be necessary for the reconstruction, because it
results in a non-vanishing probability for negative delay times.
3.5 Cascades from IceCube’s Point of View
At this point all necessary tools have been introduced to study the expected light
yield from a cascade in IceCube. The source is known to be anisotropic and
approximately point-like, but the measured light distribution is distorted by numerous
scatterings. The question arises how much of the initial information about the
neutrino is recorded by the detector.

To answer this, in a first step the development of the light field of a 100 TeV
cascade in bulk ice will be visualized. Due to the azimuthal symmetry of the shower
this can be done in two dimensions (x, z). The cascade will develop along the positive
z-axis. For surrounding points in the (x, z)-plane at consecutive times, the expected
number of photoelectrons µ(x, t_d, C) that a DOM at this point would measure in a
time interval of ∆t = 1 ns is calculated via:
time interval of ∆t = 1 ns is calculated via:
(
hµ∞ i (~x, C) dP
(~x, td , C)∆t td ≥ 0
dt
µ(~x, td , C) =
.
(3.8)
0
td < 0
The case distinction emphasizes that only after the expiration of the geometric time can a DOM notice photons from the cascade. For an illustration, absolute numbers are of less interest than the relative distribution; therefore, µ is normalized. The result of this calculation is given in figure 3.8 for selected development stages of the cascade. After 50 ns the emission is forward peaked: most of the light can be found in the direction of the Čerenkov angle. As the cascade develops, the photons are scattered and the formerly narrow hemispheric light front broadens. Regions
that the light-front already traversed still measure backscattered light. With further
increasing time this development proceeds. At about 600 ns the light field has a
diameter of about 200 m and shows nearly spherical symmetry. If one considers the
distance between two strings of 125 m, the spherical hit pattern of cascade-like events
in IceCube is understandable (see figure 3.5).
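To make equation (3.8) concrete, the following Python sketch evaluates the normalized light field on an (x, z) grid, in the same way figure 3.8 was produced. The functions mu_inf and delay_pdf are hypothetical stand-ins for the Photonics table lookup (here filled with purely toy shapes), not the actual Photorec interface.

```python
import numpy as np

def mu_inf(x, z):
    # toy stand-in for the integrated flux <mu_inf>: 1/r^2 falloff with attenuation
    r = np.hypot(x, z) + 1e-9
    return np.exp(-r / 29.0) / r**2

def delay_pdf(x, z, td):
    # toy stand-in for dP/dt: delay times broaden with distance
    r = np.hypot(x, z) + 1e-9
    tau = 10.0 + 2.0 * r
    return (td / tau**2) * np.exp(-td / tau)

def light_field(td, dt=1.0, extent=300.0, nbins=200):
    """Expected npe in a dt = 1 ns window per grid point, normalized (eq. 3.8)."""
    xs = np.linspace(-extent, extent, nbins)
    X, Z = np.meshgrid(xs, xs)
    if td < 0.0:                       # before the geometric time: no light
        return np.zeros((nbins, nbins))
    mu = mu_inf(X, Z) * delay_pdf(X, Z, td) * dt
    return mu / mu.sum()               # relative distribution, as in figure 3.8

snapshot = light_field(td=600.0)
```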
Secondly, a closer look at the distributions dP/dt and ⟨µ∞⟩ should be taken. As they are compared to commonly used analytic approximations, their dependence on time and on the distance between cascade and DOM is studied. The setup under consideration is sketched in figure 3.7.

Figure 3.7: Arrangement of a cascade and a DOM to study the dependency on their distance and relative angle.
For the delay time pdf and the total mean number of photoelectrons (npe) parametrizations exist [22]. If the distance between the DOM and the cascade is larger than a few scattering lengths, ⟨µ∞⟩ can be approximated by:

⟨µ∞⟩(E, d) = I₀ · (E/d) · exp(−d/λattn),   (d ≫ λe),    (3.9)
where d is the distance between the cascade and the DOM, E is the cascade energy, λattn is a characteristic length that governs the attenuation, λe is the effective scattering length (3.3) and I₀ is a normalization constant. For AMANDA the values λattn ≈ 29 m and I₀ = 1.4 GeV⁻¹m were found. For IceCube DOMs the normalization differs; it has to be adjusted by fitting equation (3.9) to the ⟨µ∞⟩ from the bulk ice Photonics tables. Since the approximation (3.9) does not consider the orientation of the cascade, ⟨µ∞⟩ has been averaged over various angles Θ (see figures 3.7 and 3.9). The normalization increases to I₀ = 3.3 GeV⁻¹m, which seems reasonable if one considers that the detection area of IceCube PMTs is nearly twice as large as that of its predecessor. The exponential dependence on the distance d is still well described by λattn ≈ 29 m.
Figure 3.8: A 100 TeV cascade that develops parallel to the positive z-axis. Plotted is the ratio of the expected number of photoelectrons at a given position to the sum of all expected photoelectrons.
Figure 3.9: Comparison of the tabulated results of a detailed simulation of the light's propagation with Photonics to common approximations. Left: distance dependence of the total expected number of photoelectrons ⟨µ∞⟩. Right: the delay time pdf dP/dt as a function of time, compared to the Pandel function (3.10) for two cascades at different distances but the same zenith angle Θ = 180°.

Figure 3.10: Same as figure 3.9, but for layered ice.
Similar distributions are plotted for layered ice in figure 3.10. As the slope changes within small depth intervals, the effect of absorption in the dust layers becomes visible.
The delay time pdf can be approximated with a gamma distribution:

dP/dt(td, d) = a(a·td)^(b−1) exp(−a·td) / Γ(b),    (3.10)
a = 1/τ + cice/λa,                                  (3.11)
b = d/λ,                                            (3.12)
where Γ(b) is the gamma function. With the given substitutions for a and b it is commonly called the Pandel function [36]. The coefficients can be found in [22] and are λ = 47 m, τ = 450 ns and λa = 98 m. For two cascades at 100 m and 200 m distance to the detector, each at an angle of Θ = 180°, the Pandel function and the tabulated values are plotted for different delay times in figure 3.9. Although not in full agreement, the approximation and the simulation share the essential features. With increasing distance to the DOM, scattering causes the light to arrive at later times, so a wider range of arrival times is covered. This behaviour is also visible in the finer grained scan of the distance-time dependency of the pdf shown in figure 3.11. The usage of a more realistic ice model leads to a more complex distribution of arrival times.
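For reference, the Pandel function (3.10)-(3.12) is straightforward to evaluate. The following sketch uses the coefficients from [22]; the value for cice is an assumption (group velocity of light in ice, roughly c/1.36).

```python
import numpy as np
from scipy.special import gammaln

TAU = 450.0       # ns
LAM = 47.0        # m
LAM_A = 98.0      # m (absorption length)
C_ICE = 0.22      # m/ns, assumed speed of light in ice

def pandel(td, d):
    """Delay time pdf dP/dt(td, d); td in ns, distance d in m."""
    a = 1.0 / TAU + C_ICE / LAM_A
    b = d / LAM
    # evaluate in log space, since Gamma(b) overflows for large distances
    return np.exp(np.log(a) + (b - 1.0) * np.log(a * td) - a * td - gammaln(b))

td = np.linspace(1.0, 3000.0, 300)
p100, p250 = pandel(td, 100.0), pandel(td, 250.0)  # cf. the curves in figure 3.9
```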
The visualization of a cascade's development in figure 3.8 suggests that there is little chance to derive the directional information of the cascade from the amplitudes of the recorded signal alone. Using the photon arrival times in addition is more promising. Hence, the delay time pdf as a function of Θ and td is inspected. Using the bulk ice model, it is plotted in figure 3.12 for a cascade at 50 m, 150 m and 300 m distance. Parallel to the td axis, the contour plot resembles the characteristics of the Pandel function. For an increasing angle between the cascade and the DOM the probable arrival times shift to later times, since the light needs to be scattered to reach the DOM. At a distance of 50 m the forward-backward asymmetry seen in figure 3.8 is clearly visible. It is lost as the distance increases, but nevertheless even at 300 m a slight bend in the function is evidence of the orientation of the source. A reconstruction that aims at resolving the direction of the incident neutrino will have to exploit these small hints of anisotropy.
Figure 3.11: Scan of the tabulated delay time pdf for different distances.

Figure 3.12: Scan of the tabulated delay time pdf for different zenith angles Θ.
4 Reconstruction of Cascade-Like Events
Several algorithms for the reconstruction of IceCube events exist. Some of them provide easily calculated variables that allow e.g. quick trigger decisions. Others aim at extracting model parameters with the best possible precision.
The intention of this thesis is to enrich the existing toolbox for the reconstruction of cascade-like events with an additional algorithm that aims for precision. In part, the available tools still originate from AMANDA, where they have proven to be very powerful. But they do not exploit all the instrumental improvements that IceCube provides. Before a major DAQ upgrade in 2003, AMANDA did not digitize the full waveform. The periods in which the PMT signal exceeded a given threshold were encoded in up to 16 leading and trailing times and stored together with the measured peak amplitude [29]. Therefore, strong emphasis is put on the least deflected and hence first photon. Many reconstruction algorithms provide options to consider only this single photoelectron (SPE). For the interpretation of the arrival time either of two models is used. One can assume direct light propagation and deal with scattering by being tolerant to deviations between the recorded and the expected arrival time. Alternatively the Pandel function (3.10)¹, which ignores the inhomogeneity of the ice, is often used. To consider all recorded photons, multiple photoelectron (MPE) schemes exist as well. Because none of the available reconstruction algorithms succeeded in resolving the neutrino direction in cascade-like events, they all assume the light yield from cascades to be isotropic.
Now the enhanced hardware of IceCube and the proceeding development of analysis tools open the opportunity to improve the tool set. The waveform is adequately sampled to preserve its valuable information. The small time bins of the ATWD at the front of the waveform resolve the least scattered photons, while the large, coarsely binned time window of the fADC allows collecting all the light from the incident particle to properly evaluate the energy. With the results from the Photonics simulation, which can be interfaced through Photorec during the reconstruction, it is possible to incorporate the inhomogeneous ice properties into the reconstruction. With the maximum likelihood method a technique for statistical inference exists that is optimal for many applications. Therefore, a maximum
¹ Customarily it is used in a modified form that can deal with negative delay times [29, 37]. These can be induced by electronic jitter or occur during the reconstruction.
likelihood reconstruction that incorporates the full waveform and interfaces the Photonics tables has been developed for this thesis.
This chapter will introduce the available tools that are used here. After the presentation of some prerequisites, the successive steps from the preprocessing of the waveform to the most accurate reconstructions are discussed. The chapter will end with an overview of the maximum likelihood method and a description of the available standard likelihood reconstruction for cascades.
4.1 Prerequisites
4.1.1 What to Reconstruct?
Generally a reconstruction algorithm aims at discovering and describing the particles that are involved in producing a given detector response. As a necessary prerequisite, one has to define for the particles and the detector response sets of parameters by which they are fully characterized. Within the available precision, the reconstruction algorithm will calculate from the detector response the numerical values of the particle parameters. By discussing these parameter sets, this section will also introduce the nomenclature used further on.
The "particles" of interest to this thesis are the electromagnetic and hadronic cascades that start their development at the interaction point of the neutrino. In the reconstruction, electromagnetic and hadronic cascades are not distinguished. Here the detector response shall be explained with the hypothesis of one electromagnetic cascade. It is considered to be point-like but directed (see section 2.3.2)². The cascade vertex is described by the three space coordinates ~x = (x, y, z). With the time of the interaction tcscd and the two angles Θ and Φ of the neutrino direction, six variables define the location and direction of the cascade³.
The number of photons emitted by the cascade depends on the energy deposited by the primary neutrino in the detector. In charged current interactions (see equation 2.1) an electron neutrino deposits its whole energy, which is distributed among the electromagnetic and hadronic cascades. When the interaction is of neutral current type, only a part of the energy is given to a hadronic cascade (see equation 2.1), while the escaping neutrino carries away the rest. The cascade energy therefore has to be distinguished from the neutrino energy.
Also the simplification of explaining all cascades in the event with one electromagnetic cascade has to be applied with care. As discussed in section 2.3.3, hadronic cascades can be approximated by electromagnetic cascades with lower energies. Thus, the light output of this electromagnetic cascade is determined by the total energy of all light emitting particles, whereas the energy of hadronic cascades has to be scaled down according to equation (2.20). This sum defines the reference energy:

Eref = Σ_{particles i} Ei,   where Ei = { Ei     for em cascades
                                        { F·Ei   for hadronic cascades    (4.1)
                                        { 0      for neutrinos

² The deeper motivation behind this is that Photonics describes only the light yield of electromagnetic cascades.
³ The used coordinate system will be discussed in the next section.
Eref is the energy of an electromagnetic cascade that would emit as much light as the electromagnetic and hadronic cascades in the event. It is therefore the energy that the reconstruction algorithm should find.
Seven variables define the set of parameters of a cascade:
C = (t0 , x, y, z, Θ, Φ, Eref ).
(4.2)
Now the detector response shall be defined. The PMT signal is digitized and recorded as ATWD and fADC waveforms with 128 and 256 samples (bins), respectively. Therefore, the event can be written as a set of the PMT signals of all triggered DOMs:

R = {(toi, ∆toi, noi)},    (4.3)

where the indices denote the ith waveform bin of the oth DOM. The bin's lower edge time and its width are toi and ∆toi, while the amplitude of the waveform (converted via equation (3.1) into units of photoelectrons) is given by noi.
In section 4.2 two other possible descriptions of the recorded signal will be introduced. RecoHits denote reconstructed photoelectrons, whereas RecoPulses describe an aggregation of photoelectrons in a small time interval. These two share with a waveform bin the constitutive characteristic of being essentially a time interval in which some charge has been recorded. They can all be described by the collected charge noi, a time toi and a width ∆toi⁴. This common nomenclature will later allow the same formulas to be applied to all three descriptions.
4.1.2 Coordinate System
Any position or direction is given relative to a coordinate system. The IceCube coordinate system is right-handed and has its origin close to the geometrical center of the complete 80 string detector. Its y-axis points north towards Greenwich (UK), the x-axis points "East". The z-axis is normal to the earth's surface and therefore points vertically upward. Since the origin lies deep in the ice, the surface is at z = 1947.07 m.
A particle direction is given by its zenith angle Θ and azimuth angle Φ (see figure
4.1). One should not confuse this with canonical spherical coordinates, where the tip
of the vector given by (r, Θsph, Φsph) points to the direction of the particle. Here the vector (r, Θ, Φ) points to the direction from which it comes. The simple relations
between spherical coordinates and the IceCube coordinate system are:
Θsph = π − Θ,    Φsph = Φ.    (4.4)

⁴ For RecoHits the width has no meaning. Solely for the sake of a common nomenclature it is mentioned in this enumeration.

Figure 4.1: Sketch of the used coordinate system.
4.1.3 Resolution and Bias
When it comes to the comparison of reconstruction algorithms, one has to quantify
their performance. This is investigated with the help of Monte Carlo studies, in
which the true values of the particle parameters are known and the resulting error
distributions can be studied. Usually the term error simply refers to the difference
between the true and the reconstructed value. For convenience one assigns a single
number to the error distributions, the resolution. The standard deviation σ of a fitted Gaussian serves well for this purpose if the errors are normally distributed. This
is partly true for the cascade coordinates, the time and the zenith and azimuth
angles, for which the error is just the difference between the reconstructed and true
values. Apart from a Gaussian kernel, these error distributions often show tails to
larger values as well. For the events in the non-Gaussian tails the algorithm is not
appropriate. If these events are identifiable, selection criteria (cuts) can be defined to
remove them from the sample. The mean of the Gaussian should be located at zero.
If for all events the errors are systematically shifted away from zero, the algorithm
is called biased. This is expressed in the mean µ of the fitted Gaussian. Whenever
appropriate, resolutions are therefore given in the form µ ± σ in this thesis.
The error of the direction reconstruction is not well described by the difference between the true and the reconstructed values. It is better given by the cosine of the angle between the true and the reconstructed directions⁵. In this thesis the term direction error refers to this definition. The directional resolution is then well described by the median of the direction error distribution: for a given resolution, half of all events have a direction error that is smaller than or equal to this number. In contrast, the errors of the zenith Θ and azimuth Φ angles still refer to the difference between the true and the reconstructed values, and the resolution to the σ of the Gaussian. The energy is most conveniently expressed by its logarithm log10 E, so the energy error is the difference between the logarithms:
the energy error is the difference between the logarithms:
Ereco
.
(4.5)
∆ log10 E = log10 Ereco − log10 Eref = log10
Eref
Again, the resolution is obtained from a Gaussian fit to the resulting error distribution. Here the reconstructed energy is not compared to the neutrino energy but to the reference energy (see equation 4.1), taking into account that only the "visible" energy is reconstructable.
4.1.4 Reconstruction Chain
Modern particle physics is a computation intensive discipline. As the experiments,
the complexity of the analysis, the available processing power and the number of
involved physicists increase, a well-designed software framework becomes more and
more important. When the data analysis can be separated into individual tasks
that either by consecutive execution or by optional combination form the complete
analysis, modularization is essential. With modularization the need arises for a
proper definition of interfaces between the modules. An instance is required that
controls the interplay between the components. This is the job of the software
framework.
For IceCube the framework IceTray has been developed. Like the modules that
it controls, it is written in C++. Making use of an object oriented language, all
aspects that are relevant to the analysis were mapped to appropriately designed
classes. Objects of these classes can be serialized and stored to disk.
The framework is designed to feed a stream of such conditioned data through a
chain of modules. The stream is grouped into frames, which contain related objects,
e.g. all objects belonging to one event. When the framework gives control to a
module, the latter can read all objects in the frame, process the contained data and
finally store any computation result as additional objects in the frame. Following
⁵ To study the error distribution the histogram of the cosine is used instead of the angle. This eliminates the purely geometrical effect of differently sized zenith-azimuth bins, which spherical coordinates implicate.
modules will find the added objects and can use them as input for their own calculations. Frames that passed the whole processing chain and contain the additional
reconstruction results will finally be written back to disk. To ease the later analysis
of the calculation it is common to branch off selected results into more convenient
data structures like ROOT trees [38] or HDF5 tables [39].
4.1.5 Dataset
The used dataset 445 contains events from electron neutrinos for the 80 string configuration of IceCube (IC80). The energies of the neutrinos follow an E⁻¹ power law and range from 10² GeV to 10⁹ GeV. A minimum number of 8 hit DOMs was required for all events. The distributions of the cascade parameters are shown in figure 4.2. The dataset uses the slightly outdated millennium ice model. Physics like the elongation of the cascade due to the LPM effect or the transit time of photoelectrons in the PMT are not considered. Problems of the charge reconstruction and DOM simulation will be discussed in sections 4.2 and 5.4.
The drawbacks in the dataset were discovered during its usage, but were finally
found to be manageable. For the development of the reconstruction, the use of the
older ice model is not a problem. During the reconstruction, one has to use the
same ice model, which was used in the simulation of the dataset. The task of the
reconstruction, being effectively the inverse of the simulation, remains the same. If
simulated, the PMT transit time would be corrected by the FeatureExtractor
before the reconstruction (see 4.2). Its absence has therefore no significant consequences. The missing spatial extension of the cascade is advantageous for the used
reconstruction hypothesis of a point-like anisotropic light emitter. For the highest
energies this is an oversimplification. Hence, the results have to be interpreted with
care. The problems in the DOM simulation and charge reconstruction were remediable through the introduction of two correction factors. Datasets from recent versions
of the simulation software should not contain these flaws [25]. Recent IC80 datasets
are rare, though.
The numerical results of this thesis should be regarded as preliminary until they
are backed up with more recent datasets.
4.2 Charge Reconstruction and Feature Extraction
Common to all reconstruction attempts is the need to distinguish between the part of the detector response that is relevant to the analysis and the inevitable admixtures of the instrument. This process of feature extraction is of fundamental importance. Therefore, it is performed in a dedicated module, called the FeatureExtractor.
This module calculates the baseline of the raw waveforms and subtracts it. It also corrects the transformer droop, which appears as an undershoot in the waveform that is
Figure 4.2: Distributions of cascade parameters. No cuts were applied. The y distribution is similar to the x distribution and therefore not shown. The
vertices are mostly uniformly distributed over the volume of IC80 (in x, y
and z it ranges approximately from −500 m to 500 m). One exception to
this can be found in the z distribution, which shows a smaller number
of events in the dust layer at z ≈ −100 m. The azimuth distribution
is flat. The reduction of events with Θ > 90◦ results from the shorter
interaction lengths of highly energetic neutrinos. Thus, fewer neutrinos
can pass the Earth and the flux of neutrinos entering the detector from
below decreases. The vertex time is arbitrary and acts solely as an offset
to all time variables. The two energy plots show the neutrino energy and
the reference energy (see equation (4.1)).
caused by the PMT base circuit [40]. The time electrons need to pass the PMT can
in principle be corrected for. In the IceCube PMTs it is about 60 ns [25], but it has
not been simulated in the used dataset.
The calibrated waveforms are stored in the frame and can be utilized in reconstructions that use them as input. But several reconstruction algorithms do not operate
directly on the waveform. They rather use the number of incident Čerenkov photons
and their arrival times. The waveform contains this information, but it is convoluted
with the intrinsic detector response function. The waveform may therefore be hard
to interpret.
Knowing the shape of a single photoelectron pulse, the FeatureExtractor can unfold the desired information from the recorded waveform [40]. Two concepts exist to report its findings. A RecoHit represents a reconstructed photoelectron with its arrival time. Alternatively, several successive photoelectrons can be grouped together to form a RecoPulse, which is defined by its charge, left edge time and width. The ATWD waveform is always analysed and the FeatureExtractor prefers it to the fADC. So if both waveforms show a peak at the same time, the resulting pulse or hit originates from the ATWD. Whether the fADC readout is analysed at all is configurable by the user. The sets of feature extracted pulses and hits are finally stored in the frame. Throughout this thesis, only RecoPulses were used.
To check the process of feature extraction, the calibrated waveforms were compared to the reconstructed pulses. For two DOMs in the same event, this is illustrated in figure 4.3. In the extraction of ATWD pulses the algorithm performs well: the times of the extracted pulses agree with the peaks in the waveform. Pulses of the fADC waveform are shifted in time, although in the time window in which ATWD and fADC readouts are both available, the peaks do agree. Furthermore, waveforms can be found in which the reconstructed pulses do not describe the recorded waveform (see lower plot in figure 4.3).
This is a first indication that in this dataset pulses from the fADC are problematic. In section 5.4 this will be discussed further.
4.3 First Guess Algorithms
So-called first guess algorithms perform reconstructions with simplified and therefore
easily computable models to explain the recorded signal. They have two essential
tasks. On the one hand they provide powerful event-describing variables, which are used by the online triggers to decide whether an event should be recorded or not.
This discrimination power is also used in the analysis of recorded events to select the
interesting ones. On the other hand they give an approximate answer to the same
questions that more complex reconstruction algorithms try to find the best possible
solution for. Therefore, first guess results are good starting points for more precise
Figure 4.3: For two different DOMs in the same event (at distances of 196.03 m and 253.87 m from the cascade) these plots show the ATWD and fADC waveforms together with the feature extracted pulses. For the waveforms the unit was converted from Volts to photoelectrons via equation (3.1). By contrast, the lower panels show the integrated pulse charge on the ordinate. The time window of the ATWD is emphasized by the grey vertical lines. Along with each waveform its start time and the bin width are stored in the frame; the time of each bin is calculated from them. The geometrical time is subtracted for a clearer representation. Top: the times of ATWD pulses agree with the peaks of the waveform, whereas fADC pulses are shifted. Bottom: the same shift in time for fADC pulses is observable; moreover, although the fADC waveform shows several nearly equally high peaks, the last pulse has a reconstructed charge that is nearly three times larger than the other pulses.
Figure 4.4: Illustration of the time reconstruction of CFirst. It returns the least
possible time that yields a number of direct hits above a given threshold.
calculations. This chapter introduces the algorithms that will seed the likelihood reconstruction. The used configuration of these modules is given in appendix A.
4.3.1 Vertex and Time – CFirst
CFirst is a reconstruction algorithm for the vertex and the time of the cascade [22]. For events in which the spherical hit pattern is contained in the detector, one can use the center of gravity (COG) as an estimate for the cascade position. In this picture the recorded charge is regarded as the weight of a DOM and the COG is the weighted sum of the DOM positions ~xo:

~xcog = Σo wo ~xo / Σo wo.    (4.6)
There are several possibilities to calculate the weights wo. Either one can sum over all pulses and weight each with its charge, or one can use only the first pulse and weight it with the total collected charge. The amplitude weight power is an additional exponent α, commonly either 0 or 1, which allows the weighting to be disabled entirely:

wo = { Σi noi^α       use all pulses
     { (Σi noi)^α     weight first pulse.    (4.7)
The algorithm also provides an estimate for the time of the interaction tcscd. After the cascade position is approximated with the COG, one can calculate for each DOM o a trial vertex time t_o^trial:

t_o^trial = to1 − |~xo − ~xcog| / cice,    (4.8)
where to1 denotes the time of the first pulse in the DOM. Without scattering this time would be identical for all DOMs and denote the time of the interaction (see equation 3.5). In reality one obtains a set of probable times, from which the best must be selected. Therefore, one selects the DOMs inside a sphere with radius RDirectHit, for which one assumes that time delays due to scattering are negligible: |~xo − ~xcog| ≤ RDirectHit. For each of these DOMs one probes how many of the other hit DOMs would have a delay time td in the interval [0, ∆tDirectHit] if t_o^trial were the time of the interaction. Hits that fall into this interval are called direct hits. The least trial time that yields a number of direct hits above a configurable threshold NTriggerHits is returned as the result.
Since cascades in CFirst are considered as point-like, no estimate for the direction is given, so the algorithm provides 4 out of the 7 parameters that describe a cascade (4.2). However, CFirst provides some additional variables that are suited to discriminate cascade-like events from others. They are not further used in this thesis, but a description can be found in [22].
The algorithm's performance on the whole dataset yields a vertex resolution of about 20 m in x and z, whereas the error distributions feature very broad tails (see figure 4.5). The time resolution is about 70 ns.
Figure 4.5: Reconstruction results of CFirst. No cut was applied. As in all other
histograms in this thesis, the rightmost bin is the overflow bin.
4.3.2 Direction – Tensor of Inertia
Another analogy to classical mechanics is exploited in the reconstruction module TensorOfInertia. Besides a powerful discrimination between cascade- and muon-induced events, it also provides an estimate for the cascade direction. It exploits the fact that the hit pattern of a muon is stretched along the direction of the track, whereas the hit pattern of cascades is nearly spherical (see figure 3.5).
If one imagines the amplitudes of the DOMs as masses again, the hit pattern would
describe a rigid body, for which one could calculate the tensor of inertia:

I^kl = Σo no^α (δ^kl ~xo² − xo^k xo^l),    (4.9)
where no denotes the amplitude of the DOM at position ~xo and δ^kl is the Kronecker delta. The amplitude weight power α has the same purpose as in CFirst. The eigenvectors of this tensor denote the principal axes of inertia, with the smallest eigenvalue belonging to the longest axis. The direction of this axis is an adequate estimate for the muon direction, while it gives poor results for cascades (see figure 4.7). However, it is slightly better than guessing the direction, so it will be used to seed the likelihood reconstruction. This module also provides the COG as a guess for the vertex, but in contrast to CFirst it provides no time.
4.3.3 Energy
A relation between the energy and the number of hit DOMs, usually called Nchannel, suggests itself for cascades whose spherical hit pattern is contained in the detector. The higher the energy, the more photons are emitted by the cascade, giving also more distant DOMs a chance to detect light from the shower. A simple scatter plot, which for several events plots the neutrino energy versus the number of hit DOMs, is given in figure 4.8. Of course this method will fail for cascades outside the detector: these outlying cascades would have to be brighter to illuminate the same number of modules as a contained cascade. By cutting on the distance of the vertex to the detector center, these outlying cascades can be removed from the plot. A parametrization with the polynomial:
log10(E1st) = 1.7 + 0.7x² + 0.03x³,    x = log10(Nchannel),    (4.10)

was found to perform sufficiently well to seed the following, more elaborate reconstructions.
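As a sketch, the parametrization (4.10) in Python; the function name is illustrative.

```python
import numpy as np

def first_guess_energy(n_channel):
    """Seed energy [GeV] from the number of hit DOMs, equation (4.10)."""
    x = np.log10(n_channel)
    return 10.0 ** (1.7 + 0.7 * x**2 + 0.03 * x**3)

print(first_guess_energy(100))  # 100 hit DOMs -> about 10^4.7 GeV
```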
4.3.4 Combining Seeds
With the presented first guess algorithms, estimates for all parameters of the cascade are available, although they are not provided by a single module. As each module stores its result in the form of a particle-describing object (the class is called I3Particle), the likelihood reconstruction would have to select the needed information from several objects. Unfortunately, the following reconstructions are not designed to gather the seed from several sources. If a change of these reconstructions is not wanted, a combination of the parameter sets has to be done in advance.
In principle this harbours the danger of assembling estimates that were obtained under different assumptions, e.g. if one combined CFirst's time with the COG
Figure 4.6: Reconstruction results of TensorOfInertia. No cut was applied.

Figure 4.7: Reconstruction results of TensorOfInertia. Only those events were used in which the reconstructed vertex is located in a cylinder centered inside IC80 with a radius of 500 m and ranging in z from −500 m to 500 m.
Figure 4.8: The relation between the cascade energy and the number of hit DOMs Nchannel. The scatter plot is done for two subsets of the dataset, which contain only events whose true vertex is at most 600 m or 300 m away from the detector center. For the latter the cloud of data points is parametrized with a polynomial, which then seeds further reconstructions.
of Tensor of Inertia, which may have used another weighting scheme. But firstly, this is only the starting point of a calculation that may be quite robust to such misdirections. Secondly, it should be harmless to mix the time and the vertex of CFirst with the direction of Tensor of Inertia, as CFirst simply makes no statement about the direction.
4.4 Maximum Likelihood Reconstruction for Cascades
In the following, maximum likelihood reconstructions for cascades will be discussed.
First the method will be introduced. Then the available likelihood reconstruction
CscdLlh will be reviewed and tested with the dataset.
4.4.1 General Notes On Maximum Likelihood Estimation
The problem which shall be solved here belongs to the category of statistical inference. For a measured detector response one would like to find the set of cascade parameters that fits the observation best. Therefore, a measure is needed to decide, for two possible parameter sets, which of them is better supported by the observation.
Maximum likelihood estimation is a well-established method to solve such parameter estimation problems. For the parameters of interest it provides estimators that have the desirable properties of being asymptotically unbiased and efficient [18, 41]. Under quite general conditions one can show that if Θ̂({x}) is a maximum likelihood estimator for the parameter Θ, then for an increasing amount of data {x} it converges to the true parameter Θ and the variance of the estimator reaches the Cramér-Rao bound, which is a lower bound for any estimation procedure [42]. In this sense maximum likelihood estimation is the optimal choice for many applications.
The method requires prior knowledge of the probability P(R|C) of a detector response R under the assumption that the cascade is described by the parameter set⁶ C. This is available in the form of a function⁷ of R, in which the assumption C is fixed. But the needed measure should provide the exact opposite: the measurement R has been performed and acts as a fixed assumption, while the hypothesis is variable. So the likelihood is defined as a function proportional to the known probability:

L(C|R) = a·P(R|C),    (4.11)

where now R is fixed, C may vary and a is an arbitrary constant⁸. This renaming is
seconded by an interpretation rule:
The Law of Likelihood. Within the framework of a statistical model, a particular
set of data supports one statistical hypothesis better than another if the likelihood of
the first hypothesis, on the data, exceeds the likelihood of the second hypothesis [43].
In addition a statement about its completeness is given:
The Likelihood Principle. Within the framework of a statistical model, all the
information, which the data provide concerning the relative merits of two hypotheses
is contained in the likelihood ratio of those hypotheses on the data [43].
So the likelihood serves as the desired measure. According to the Likelihood Principle, an inference based on this method considers the whole available information of the experiment.
⁶ The parameter sets C and R were defined in section 4.1.1.
⁷ This function calculates either the probability for discrete R or the probability density for the continuous case.
⁸ Since in the following only likelihood ratios are important, the constant a has no impact, but it allows the same definition to be used for both the discrete and the continuous case.
With the possibility to compare any two sets of cascade parameters, the step of inference is de facto transformed into the search for the global maximum of the likelihood function.
For a sufficiently complex likelihood function, analytic solutions are difficult to obtain. Instead one uses numerical methods, which are usually formulated for the equivalent minimization problem. Since it avoids numerical difficulties, one minimizes the negative logarithm of the likelihood L:

Ĉ = arg min_C (− ln L(C|R)).    (4.12)
When the experimental data consist of independent measurements R = {ri}, the probability for the joint detector response is obtained by multiplication. This leads to a summation for the log-likelihood L:

P(R|C) = Πi P(ri|C)   →   L = − Σi ln P(ri|C) + const.    (4.13)
The additional constant comes from the factor in equation (4.11). It is of no use to the estimator because it does not change the location of the minimum. For the same reason it is legitimate to factor out those terms of P(R|C) which depend on the data but not on the hypothesis. The remaining likelihood is what Edwards calls a minimal-sufficient statistic. Functions that differ from it by a constant factor are equivalent for the purpose of inference.
At first glance the renaming of L in equation (4.11) looks like a distinction without difference, but it is not. First, the so-defined likelihood does not have to fulfill the probability axioms. It therefore does not have to be integrable and cannot be normalized to unity. It also cannot be interpreted as the conditional probability P(C|R). Apart from the fact that the probability of a hypothesis only makes sense in a Bayesian perception of probability, one would also have to apply Bayes' theorem:

P(C|R) ∝ P(R|C)·P(C),    (4.14)

to obtain it [18]. This would involve the introduction of the prior P(C), denoting the experimentalist's belief which of the several possible hypotheses are more likely than others. It should be noted that in case the experimentalist has no clue which hypothesis to favor, his or her prior knowledge may be modeled with a flat prior (P(C) = const.). In this case the likelihood function (4.11) and the conditional probability (4.14) would be identical⁹.
In Bayesian inference [45] one searches for the parameter set C that maximizes equation (4.14). Therefore, in this alternative approach the likelihood function (4.11) plays a central role as well. That is why the method of maximum likelihood was chosen for this thesis. By augmenting the likelihood function with a prior, the change to Bayesian inference could be performed at a later time.

⁹ Nevertheless, it is problematic that this flat prior loses its meaning under parameter transformations. To get an impression of the discussion, the books of d'Agostini [44] and Edwards [43] serve well.
4.4.2 The Reference Implementation - CscdLLh
A likelihood reconstruction for cascades called CscdLlh already exists within the IceCube reconstruction framework. Originally developed for AMANDA, it has been reimplemented for use in IceCube. Two different formulations of the likelihood function are available. They shall be sketched here as far as they are necessary for the further discussion. A detailed description can be found in [22].
For the vertex reconstruction solely the timing information of the registered photons is exploited, by constructing the likelihood from a product of Pandel functions (see equation (3.10)) in either a form for single or for multiple photoelectrons (SPE and MPE). Because the Pandel function models the scattering in homogeneous ice, delayed arriving photons are appropriately treated, which results in a better vertex and time resolution.
The second formulation of the likelihood function is called the PhitPnohit likelihood and is used for the reconstruction of the energy. For this, a relation between the expected number of photoelectrons µ and the cascade energy is necessary. With the approximation in equation (3.9) and the assumption of a Poisson process in the DOM, one can calculate the probability to see no hit:

P^cscd_nohit = P(n = 0, µ) = (µ⁰/0!) exp(−µ) = exp(−µ).    (4.15)
The inverse probability of a hit is then given by:

P^cscd_hit = P(n > 0, µ) = 1 − exp(−µ).    (4.16)
To account for the probability of a noise hit Pnoise, these are modified to:

Phit = 1 − Pnohit = P^cscd_hit + Pnoise − P^cscd_hit · Pnoise.    (4.17)
The final probability of the detector response, and hence the likelihood, is the product over all DOMs:

L = Π_{all hit DOMs} Phit · Π_{all un-hit DOMs} Pnohit.    (4.18)
By successive application of the Pandel likelihood and the PhitPnohit likelihood, one can first obtain the vertex and time of the cascade and afterwards estimate its energy. The performance of CscdLlh on the dataset with the SPE likelihood results in a vertex resolution of about 10 m in x and z and a resolution for the logarithm of the energy of about 0.3. A more differentiated picture of the algorithm's performance is given in table 4.1.
Table 4.1: Resolutions obtained from a CscdLlh reconstruction of dataset 445. The Pandel SPE likelihood has been used to estimate the vertex and time, and the PhitPnohit likelihood has been used to reconstruct the energy. A definition of the resolutions is given in section 4.1.3. Where the obtained error distributions are not well described by a Gaussian fit, the numerical values are omitted. The applied selection criteria are, first, that the algorithm has converged and, second, that the reconstructed vertex is located in the fiducial volume of IC80. This is approximated by the cylinder r ≤ 500 m, |z| ≤ 500 m. Finally the dataset is divided into subsamples of events with similar deposited energy.

  subsample selection criterion        x [m]   z [m]   t [ns]   log(Eref)
  only converged (all energies)          10      11      –         –
  fiducial (all energies)                 8       9      –        0.64
  fiducial & 2 ≤ log10(Eref) ≤ 4          9       7      50       0.27
  fiducial & 4 ≤ log10(Eref) ≤ 6          8       9      79       0.30
  fiducial & 6 ≤ log10(Eref) ≤ 7          8      11      73       0.30
  fiducial & 7 ≤ log10(Eref) ≤ 8          8      15      –        0.31
  fiducial & 8 ≤ log10(Eref)              9      22     111       0.37
The MPE likelihood is found to perform worse, with an energy-dependent vertex resolution of about 14 m to 24 m. These results set the benchmark for any improved algorithm.
The used approximations leave the above-mentioned opportunities for improvement. In addition, the form of the likelihood embodies a simplification by reducing the Poisson probability to only two cases: a DOM was either hit or not. There is considerable hope that one can improve the reconstruction performance with more advanced algorithms.
5 An Improved Likelihood Reconstruction for Cascades
In this chapter different likelihood functions that incorporate the whole waveform and the results from the Photonics simulation will be used for the reconstruction of cascade-like events. After presenting their mathematical form, a simple model will be introduced which allows the algorithm to be studied. The implementation of a cascade reconstruction in the IceCube software framework is presented and its performance on events of a full-featured simulation is tested. Corrections to the likelihood are introduced and quality parameters are discussed.
5.1 Mathematical Form
5.1.1 A Likelihood for Waveforms
The likelihood that will be used further on is constructed from the assumption that the number of photoelectrons in each readout bin follows a Poisson distribution. This formulation is already in use for the reconstruction of muon tracks [46]. As in equation (3.8), the tabulated results of Photonics allow to calculate the expected number of photoelectrons µoi in a readout bin:

µoi(~xo, toi, C) = ⟨µ∞_o⟩ · dP/dt(~xo, toi − tgeo − tcscd) + νnoise·∆toi.    (5.1)

With νnoise the noise rate of the PMT can be included into the expectation (see section 3.2). The Poisson distribution yields a measure of how likely the recorded bin content
noi is. This leads to the definition of the likelihood:

L = Π_{DOMs o} Π_{bins i} (µoi^noi / noi!) · exp(−µoi).    (5.2)

Taking the negative logarithm yields:

L = Σ_{DOMs o} [ µo − Σ_{bins i} ( noi ln(µoi/µo) − ln(noi!) ) − No ln(µo) ],    (5.3)
where No = Σi noi is the total number of recorded photoelectrons in the DOM. The approximation is made that the expectations for the individual bins sum up to the mean total expected number of photoelectrons:

µo = Σi µoi ≈ ⟨µ∞_o⟩ + νnoise·∆tevent.    (5.4)

Because of equation (5.1) this should hold when the waveform is sampled over most of the support of the normalized delay time pdf. To calculate the number of noise hits, the noise rate is multiplied with the time window of the DOM, ∆tevent.
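For a single DOM the likelihood reduces to a per-bin Poisson term. A minimal sketch, written in the direct per-bin form, which is equivalent to (5.3) up to the approximation (5.4):

```python
import numpy as np
from scipy.special import gammaln

def neg_log_likelihood_dom(n, mu_bins):
    """-ln L for one DOM: bin contents n [pe] vs. expectations mu_bins (eq. 5.1).

    Direct form sum_i [mu_i - n_i ln(mu_i) + ln(n_i!)]; the noise term in
    mu_bins keeps the expectations strictly positive.
    """
    return np.sum(mu_bins - n * np.log(mu_bins) + gammaln(n + 1.0))
```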
5.1.2 A Likelihood for Pulses
As the discussion of the event parameter set in section 4.1.1 indicates, both the waveform bins and the reconstructed pulses enter the mathematical description as time intervals in which the DOM has recorded a given number of photoelectrons. Thus, the same likelihood function (5.2) can also be applied to feature extracted pulses.
This is favorable, since the FeatureExtractor will unfold the single photoelectron response function from the waveform. Another important argument for this approach is the processing speed. In the formulation for waveforms the calculation has to loop over all bins of all hit DOMs. In equation (5.3) each bin contributes a summand to the log-likelihood, so contributions are expensively calculated even for time intervals in which the DOM did not record a signal. The set of feature extracted pulses does not contain these time intervals. In the extreme case of a DOM with only one pulse, the use of a pulse-based likelihood reduces the number of summands to one.
5.2 Toy Model Studies
With the Photonics tables at hand, a beneficial opportunity arises to test the likelihood function (5.2) under simplified conditions. This is useful because the calculation of the likelihood will finally be located deep inside a module in the reconstruction chain, where, in a full-featured module, the possibilities to study the properties of the likelihood function are limited.
If one temporarily steps out of the IceCube software framework, one can construct a closed and simplified chain for simulating and reconstructing events. It uses the Photonics tables to simulate events by randomizing the readout bin contents noi according to the Poisson distribution for the expected number of photoelectrons µoi. For such simulated events the likelihood can be calculated and used to reconstruct
Figure 5.1: A simulated waveform for a DOM 55 m away from the cascade. The waveform is obtained by randomizing the mean expected waveform from Photonics according to a Poisson distribution. Here, rates are plotted to account for the different integration times of the unequally sized bins.
the cascade. In this toy model nearly all hardware effects¹ are eliminated. Only the mathematical properties of the likelihood and the characteristics of the light propagation remain. So a manageable setup is created that allows the different influences in the reconstruction to be studied while the complexity is increased stepwise.
In this model the ATWD and fADC readouts are combined into one waveform, in which the binning at the beginning is denser than at the tail. So the first 128 · 3.5 ns = 448 ns represent the ATWD. For the rest of the waveform, until 6.4 µs, the coarse binning of the fADC is used. A comparison of a simulated waveform to the expected one is given in figure 5.1.
In a first step the likelihood can be visualized. The parameter space of the likelihood function has the 7 dimensions from equation (4.2). For an illustration, a 2-d projection of the likelihood shall be shown. It is of interest whether the likelihood in the two directional dimensions has its minimum in the vicinity of the cascade direction. For a selected event, in which this is the case, the likelihood is scanned in the zenith-azimuth plane of the parameter space (see left plot in figure 5.2). The vertex, time and energy of the cascade are fixed to their true values. Unfortunately

¹ One hardware effect has to be considered, though. Without saturation effects, the number of photoelectrons that a DOM nearby the cascade detects can become very large. These obviously unrealistic measurements are removed from the simulated events by ignoring each DOM which in total recorded more than 10000 photoelectrons.
Table 5.1: Resolutions obtained from a toy model reconstruction for different stages of complexity. Plots of the corresponding error distributions can be found in the appendix.

  description                                     x [m]  z [m]  log(E)  dir. [deg]  Θ [deg]
  bulk ice, dN/dE = const., time fixed              6      4     0.04       35         27
  layered ice, dN/dE = const., time fixed           7      5     0.07       52         27
  layered ice, dN/dE ∝ E⁻¹, time fixed              9      7     0.09       59         43
  layered ice, dN/dE ∝ E⁻¹, time reconstructed     10      9     0.09       65         51
the likelihood parameters are correlated, as illustrated in the two other plots in figure 5.2. When the vertex is displaced, the minimum changes its location.
In a second step a downhill simplex algorithm [47] is used to minimize the log-likelihood. This reconstruction is tested on a sample of 4000 cascades that are randomly distributed over a sphere with radius 500 m around the detector center, so they are all contained in the IceCube array. Their energy is uniformly distributed between 1 TeV and 1 PeV. To keep things simple, the time of the interaction is regarded as known and bulk ice is used. Under these idealized conditions a vertex resolution of about 5 m in x and y and 3 m in z is achieved. The zenith angle resolution is 27°. It is remarkable that the vertex and direction errors are obviously related, as can be seen in figure 5.3. What is exemplarily shown in the likelihood contour plot in figure 5.2 seems to be generally true: displacing the vertex and changing the orientation of the cascade are correlated. This has implications for the minimization strategy, which will be discussed in section 5.6.1.
In order to get an impression of the achievable resolution, the complexity of the model is now increased stepwise. First, inhomogeneous ice is considered. Afterwards the energy spectrum of real cascades is simulated using a power law dN/dE ∝ E⁻¹. In a last step the time of the cascade is added as a free parameter. The results are summarized in table 5.1.
5.3 Gulliver
The Gulliver framework [48] was developed to combine different IceCube reconstruction modules that use the maximum likelihood method. It identifies four different components from which any likelihood reconstruction can be built:

• Seed Preparation,
• Minimizer,
• Parametrization of physical variables,
• Likelihood Function.
Figure 5.2: The log-likelihood for a cascade in homogeneous ice is scanned in the plane of directional coordinates. The other parameters are fixed either at the true values of the cascade or at displaced vertex coordinates. The correlation between the cascade parameters is obvious: if the vertex is shifted, the minimum denoting the direction estimate moves away from the true direction. For a clearer presentation the plotted values are normalized to [0, 1].
Figure 5.3: Relationship between vertex and direction reconstruction under idealized
conditions. The model assumes homogeneous ice and the time of the
vertex is fixed.
In the step of "Seed Preparation" all necessary information is gathered from the frame to build the first parameter set, for which the likelihood will be calculated and where the minimizer will start. The task of finding the optimal hypothesis is done by the minimizer. Minimizers are numerical solutions to optimization problems. To keep this general, and thus allow for the exchangeability of minimizers, the physical variables in the likelihood function are mapped to the parameter space that the minimizer works on. In addition, this mapping allows the minimizer to operate only on a subset of the parameter space of the likelihood, by introducing variable bounds or by fixing variables to specific values.
Reference implementations for these components exist. The "Seed Preparation" can be done by I3BasicSeedService. It expects to find the vertex, time, and direction information in one I3Particle object in the frame. This is not given for cascades, which makes it necessary to perform the combination of seeds as discussed in section 4.3.4. For the energy seed the parametrization in equation (4.10) is used. For the variable mapping I3SimpleParametrization exists. It is mostly a one-to-one map of the cascade parameters to the parameter space of the minimizer and contains only one transformation: the minimizer works with the logarithm of the cascade energy. The minimizer used here is Minuit [49].
For all likelihood functions that were discussed in the previous chapter, implementations are available. Additionally, the likelihood function (5.3) in its formulation for waveforms is implemented in I3WFLogLikelihood. The pulse-based version is available through I3PhotorecLogLikelihood. Both provide the possibility to query the Photonics tables through Photorec.
5.3.1 I3WFLogLikelihood
This implementation works on the calibrated waveforms of the ATWD and the fADC that the FeatureExtractor provides. The user can decide which waveform to use; in this thesis only ATWD waveforms were chosen². There are several options that affect the treatment of saturated waveforms. As in the toy model of section 5.2, the option was chosen to ignore saturated DOMs. The probability of noise hits was considered by setting the noise rate in (5.1) to 700 Hz.
5.3.2 I3PhotorecLogLikelihood
The input for this implementation is the set of feature extracted pulses. From which
digitization device these pulses originate is not known to the algorithm, since this
² This decision is discussed in detail in section 5.4.
  description                             x [m]     z [m]    log(E)         dir. [deg]  Θ [deg]
  waveform-based, fit all parameters     −2 ± 10   −3 ± 7   −0.43 ± 0.15       80       28 ± 48
  pulse-based, fit all parameters        −2 ± 10   −3 ± 7   −0.47 ± 0.13       80       28 ± 49
  waveform-based, keep direction fixed   −3 ± 11   −2 ± 7   −0.40 ± 0.16       80       20 ± 47
  pulse-based, keep direction fixed      −2 ± 12   −2 ± 7   −0.44 ± 0.13       80       20 ± 47

Table 5.2: Resolutions obtained from a reconstruction with no iterations (I3SimpleFitter) for the two available likelihood implementations. Two cuts were applied: the reconstruction must report success and the reconstructed vertex must be located in the fiducial volume of IC80. For a definition of the resolutions see section 4.1.3.
is decided in the FeatureExtractor. Here, too, a noise rate of 700 Hz was chosen.
5.3.3 Assembling the Reconstruction for Cascades
All the presented components are already used for muon reconstruction [46]. The modularity of Gulliver allows a likelihood reconstruction for cascades to be assembled in a similar way. The frame that holds these components together is given by two implementations. The I3SimpleFitter takes the seed and performs the minimization in one step. If the minimizer converged, this single result is stored in the frame. The other available choice, I3IterativeFitter, initially performs the same, but after the first minimization it takes the result as a seed for further iterations (see section 5.6.1 for details).
Both likelihood functions were used together with I3SimpleFitter and produced similar results, which are summarized in table 5.2. The vertex resolution is about 10 m in the x-y-plane and slightly better in z. This was expected, since the vertical distances between the DOMs are smaller than the horizontal distances between the strings. The direction of the cascade is not reconstructable. If one fixes the direction to the seed value, the remaining parameters are reconstructed equally well.
The pulse-based likelihood is, moreover, about 30% faster than the waveform-based one. Since both likelihoods produce similar results, the faster pulse-based likelihood was favored for further investigations.
Figure 5.4: Energy reconstruction error distributions for the pulse-based likelihood (fixed direction, no iterations). The upper left histogram contains all events that were successfully reconstructed and in which the vertex lies inside the fiducial volume of IC80. The other plots were generated for events in different energy ranges. The reconstruction exhibits a systematic energy underestimation, whereby all error distributions peak below zero. The left borders of the histograms contain events for which the minimizer got stuck at a lower bound for the energy but still claimed successful operation.
5.4 Systematic Energy Underestimation
Both algorithms exhibit a systematic underestimation of the energy. This effect is illustrated in table 5.2 and in the error distributions shown in figure 5.4. The error distributions peak at about ∆log10(E) = −0.4. Splitting the sample by reference energy reveals that this peak is shifted to even smaller values for energies above ≈ 10 PeV.

One should recall that the energy of the cascade affects the reconstruction solely through the amplitude. It scales ⟨µ_o^∞⟩ in equation (5.1) and therefore determines the total expected charge in a DOM. The timing information in the pdf is unaffected. Thus, to understand the energy underestimation, a DOM-wise comparison between the expected total charge and the recorded charge is worthwhile. This comparison
is done in a two-dimensional histogram. For each hit DOM in an event the recorded total charge and the expected mean number of photoelectrons are calculated. These two numbers define the bin to which the DOM contributes one entry. To increase the statistics, the plot is made from several events with different energies.

The expected mean number of photoelectrons µ_o^(1) (the superscript will distinguish this quantity from later definitions) is the sum over the expected light yield from the electromagnetic and hadronic cascades in the event. The energy of hadronic cascades has to be scaled down appropriately to obtain the correct light yield (see equation (2.20)):
\mu_o^{(1)} = \begin{cases} \langle\mu_o^\infty\rangle(E_{\mathrm{em}}) + \langle\mu_o^\infty\rangle(F\,E_{\mathrm{hadr}}) & \text{CC event} \\ \langle\mu_o^\infty\rangle(F\,E_{\mathrm{hadr}}) & \text{NC event} \end{cases} \qquad (5.5)
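As a sketch of how this comparison can be assembled, the expectation (5.5) may be computed per hit DOM and filled, together with the recorded charge, into a two-dimensional histogram. Here mu_inf stands in for the tabulated Photonics amplitude prediction and F for the hadronic scaling factor of equation (2.20); both are assumptions of this sketch.

import numpy as np

def mu1(mu_inf, E_em, E_hadr, F, is_cc):
    """Expected mean number of photoelectrons, equation (5.5)."""
    if is_cc:
        return mu_inf(E_em) + mu_inf(F * E_hadr)  # CC: both cascades contribute
    return mu_inf(F * E_hadr)                     # NC: hadronic cascade only

def charge_histogram(expected, recorded, bins=50):
    """2D histogram of (expected, recorded) charge, one entry per hit DOM
    (recorded > 0), gathered over several events."""
    return np.histogram2d(np.log10(expected), np.log10(recorded), bins=bins)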
This expectation can be compared to the reconstructed charge n_o or to the Monte Carlo hits n_o^mc. The comparison to the Monte Carlo hits is shown in figure 5.5. Here, the detector response and the charge reconstruction are neglected. Ideally the distribution should be centered around the green dashed line that marks the identity relation µ_o^(1) = n_o^mc. The comparison shows a very good agreement, which confirms the correctness of the calculation of µ_o^(1) and assures that the dataset was simulated with consistent Photonics tables.
Figure 5.5: Comparison between the mean expected number of photoelectrons and the Monte Carlo hits. These are the recorded photoelectrons before the hardware simulation and charge reconstruction are performed. The comparison shows a good agreement.
Figure 5.6 shows the same comparison between µ_o^(1) and the total recorded charge, which is calculated as the sum of the charges of all feature-extracted pulses:

n_o = \sum_i n_{oi}. \qquad (5.6)
Below 1 and above 10000 expected photoelectrons, deviations from the identity relation can be understood, because the detector simulation considers noise and saturation effects. Incident photons will create a minimum signal of about 1 pe. Significantly lower recorded values should result from noise-induced peaks in the waveform (the waveforms are recorded in units of Volts and converted into units of photoelectrons, see equation (3.1)). An expectation below 1 pe results from distant DOMs with a very low hit probability. At higher amplitudes the limited DOM sensitivity to high photon fluxes is observable as a saturation plateau. The comparison to the Monte Carlo hits in figure 5.5 shows a hard cut-off at 10000 pe. In figure 5.6 no DOM recorded more than 6150 pe.

The range between the noise and saturation regions shows a nontrivial distribution. Up to about 300 expected photoelectrons the identity seems to hold, while for higher µ_o^(1) the reconstructed charge does not sum up to the expectation. Furthermore, the width of the distribution is surprisingly broad. For example, the plot suggests that if one expects to see 1000 pe in one DOM, the measurement would lie between 50 pe and 500 pe.
In figure 5.6 the pulses are gathered from the waveforms of the ATWD and the fADC. If one restricts the FeatureExtractor to analyse only the ATWD waveforms, the distribution becomes smoother (see figure 5.7). It lies systematically below the expectation, though. A comparison of the distributions in figures 5.6 and 5.7 suggests that for DOMs with a large expected number of photoelectrons, mainly ATWD pulses enter the histogram. The agreement for lower expectations in figure 5.6 seems to be due to fADC pulses that are missing above 300 pe. After the unexpected behaviour of the FeatureExtractor in figure 4.3, this is the second hint that in this dataset pulses from the fADC are problematic. It is due to a bug in the fADC simulation that has been corrected recently [25]. For this reason readouts from this device will be ignored.
Using only pulses from the ATWD, the recorded number of photoelectrons should be smaller than the prediction by Photonics. The expectation is calculated from the tabulated ⟨µ_o^∞⟩ (see equation (3.7)). It gives the mean expected number of photoelectrons for a DOM, provided that the DOM has a sufficiently long time window to record the charge. The time window that would be necessary to collect all photons from the cascade grows with the distance between the cascade and the DOM. As figure 3.11 indicates, for a distance of 250 m between the DOM and the cascade, the 448 ns time window of the ATWD is much smaller than the width of the delay time pdf. So one needs to calculate the expected charge from equation (5.1) by integrating the pdf over the limited time window. The integrated probability functions as a correction factor to µ_o^(1):
\mu_o^{(2)} = f_{\mathrm{ATWD}}\, \mu_o^{(1)}, \qquad f_{\mathrm{ATWD}} = \int_{t_1}^{t_1 + 448\,\mathrm{ns}} dt'\, \frac{dP}{dt}(t' - t_{\mathrm{geo}} - t_{\mathrm{cscd}}), \qquad (5.7)

where t_1 is the time of the first ATWD bin.
Figure 5.6: Comparison between the charge in all feature-extracted pulses and the mean expected number of photoelectrons predicted by Photonics. The pulses are gathered from the ATWD and fADC waveforms. The expected number of photoelectrons is µ_o^(1) from (5.5).
Figure 5.7: Same as figure 5.6, but pulses are only extracted from the ATWD.
Figure 5.8: Charge comparison as in figure 5.7, but with an applied correction that considers the small time window of the ATWD. The abscissa shows µ_o^(2) = f_ATWD · µ_o^(1).
expected photoelectrons | seen photoelectrons    | correction factor
(center of slice)       | (mean of the Gaussian) |
10^0.00                 | 10^0.00                | 1.0000
10^0.25                 | 10^0.25                | 1.0114
10^0.75                 | 10^0.62                | 0.7382
10^1.25                 | 10^1.05                | 0.6245
10^1.75                 | 10^1.54                | 0.6100
10^2.25                 | 10^2.02                | 0.5944
10^2.75                 | 10^2.53                | 0.6056
10^3.25                 | 10^3.02                | 0.5913
10^3.75                 | 10^3.35                | 0.3948

Table 5.3: Correction factors for the expected number of photoelectrons. For values above 10^3.79 = 6150 pe the correction should stay constant. Therefore, the nearest data point at 10^3.75 denotes the maximal used correction factor.
For the purpose of creating the plot in figure 5.8, the integral was calculated by summing up the pdf in 1 ns steps. By applying the correction factor f_ATWD, the distribution moves closer to the identity line and its width shrinks as well.
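A minimal sketch of this summation, with dp_dt as a placeholder for the geometry-dependent delay time pdf:

def f_atwd(dp_dt, t1, t_geo, t_cscd, window_ns=448.0, step_ns=1.0):
    """Time window correction (5.7), summed in 1 ns steps as for figure 5.8."""
    total = 0.0
    t = t1
    while t < t1 + window_ns:
        total += dp_dt(t - t_geo - t_cscd) * step_ns  # Riemann sum of the pdf
        t += step_ns
    return total  # fraction of the expected charge inside the ATWD window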
The time window correction has to be interpreted as a procedure that is necessary to compare the cropped detector response to the Photonics prediction. It also has to be applied to those terms in the likelihood that account for the total expected charge in the DOM. After the time window correction is applied, a small mismatch remains.
This mismatch will now be parametrized. Afterwards, the parametrization will be used to scale down the expectation and to center the distribution on the identity relation. The noise and saturation features must not be affected, so the correction should not be applied to expectations below 1 pe or above the maximal seen charge in this sample of 6150 pe. For these purposes the distribution in figure 5.7 is sliced parallel to the ordinate and each slice is filled into a histogram. The histograms are then fitted with a Gaussian to obtain the mean of the distribution peaks ⟨n_o⟩ (see figure 5.10). From these one can calculate by how much the expectation has to be downscaled:
\mu_o^{(3)} = f_{\mathrm{phot}}\, \mu_o^{(2)}, \qquad f_{\mathrm{phot}} = \frac{\langle n_o \rangle}{\mu_o^{(2)}}. \qquad (5.8)
The histograms are shown in figure 5.10 and the resulting correction factors f_phot are given in table 5.3. They are further illustrated in the upper and central plots of figure 5.9.
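A sketch of this slicing and fitting procedure, under the assumption that the per-DOM values of one slice are available as an array of log10(seen npe):

import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def correction_factor(seen_log10, slice_center_log10, bins=60):
    """Fit one slice with a Gaussian and return f_phot = <n_o> / mu_o^(2)."""
    counts, edges = np.histogram(seen_log10, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    start = [counts.max(), centers[np.argmax(counts)], 0.3]  # crude seed values
    popt, _ = curve_fit(gauss, centers, counts, p0=start)
    return 10.0 ** (popt[1] - slice_center_log10)  # peak position over mu_o^(2)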
In the range between 10 pe and 2000 pe the mean measured charge amounts to about 60% of the prediction by Photonics. From the average single photoelectron response of the photomultiplier a lowering to 85% could be explained (see figure 3.2). The additional deviation is not understood. It suggests further investigation of the DOM simulation and the performance of the FeatureExtractor, but this would go beyond the scope of this thesis. These components are also reported to be robust for unsaturated pulses [50], so chances are that the problem is an artefact of the dataset. Therefore, the deviation is now corrected in order to study the influence on the energy reconstruction.
Figure 5.9: The upper plot shows the distribution from figure 5.8 and its parametrization. The correction factors that would center the distribution on the green dashed identity line are given in the central plot. The finally applied correction, which does not affect the noise and saturation regions, is plotted in red. The success of the correction is illustrated in the lower plot.
The points in figure 5.9 were interpolated with a cubic spline function f_phot(µ_o^(2)). In order to keep the noise and saturation regions unaffected, the scaling must start above 1 pe and stay constant above 10^3.79 = 6150 pe. The application of the correction yields the result shown in the lower plot of figure 5.9: the distribution is finally centered on the identity relation.
To conclude this section, the adjustments shall be summarized. The two correction factors f_phot and f_ATWD are necessary to correct the prediction by Photonics in order to get a reasonable agreement between the expected and recorded numbers of photoelectrons.
The factor f_phot corrects the prediction by Photonics to fit the observation in this dataset. Formula (5.1) is therefore modified to:

\mu_{oi}(\vec{x}_o, t_{oi}, C) = f_{\mathrm{phot}}\, \langle\mu_o^\infty\rangle\, \frac{dP}{dt}(\vec{x}_o, t_{oi} - t_{\mathrm{geo}} - t_{\mathrm{cscd}}) + \nu_{\mathrm{noise}}\, \Delta t_{oi}. \qquad (5.9)
The second adjustment applies to the general case that the detector response is reduced to ATWD pulses. When one estimates the total expected charge in a DOM, the reduced time window has to be considered. In the likelihood formula the approximation in equation (5.4) has to be modified to:

\sum_i \mu_{oi} \approx \langle\mu_o^\infty\rangle\, f_{\mathrm{ATWD}}. \qquad (5.10)
5.5 Implementing the Corrections in the Reconstruction
The results from the last section have been incorporated into the pulse-based likelihood. In contrast to the previous study, the computation time now becomes a problem. A single uncorrected minimization of the likelihood needed about 20 s/event on the DESY cluster. If this is performed iteratively, the computation time for a single event rises to a few minutes, depending on the number of iterations. The computation of the correction factors slows down the reconstruction additionally. Thus, the interpolation with a spline should be avoided. Since the correction factor f_phot is constant at about 60% for a wide range of input values, the spline can simply be replaced. The following function, which linearly interpolates the range of 1 to 5 pe and is otherwise constant, was found to perform well:

f_{\mathrm{phot}}(\langle\mu_o^\infty\rangle) = \begin{cases} 1 & \langle\mu_o^\infty\rangle \le 1 \\ 1.1 - 0.1\,\langle\mu_o^\infty\rangle & 1 < \langle\mu_o^\infty\rangle < 5 \\ 0.6 & \langle\mu_o^\infty\rangle \ge 5 \end{cases} \qquad (5.11)
The implementation of I3PhotorecLogLikelihood was modified to apply the correction function (5.11). In a simple fit of all 7 cascade parameters (again only converged events with a reconstructed vertex inside the fiducial volume were selected), the energy resolution improved to ∆log10(E) = −0.27 ± 0.12 (see figure 5.11). For all other parameters the reconstruction performance did not change.

Figure 5.10: Slices of the distribution given in figure 5.8. For a given range of expected photoelectrons, denoted by the green lines, the recorded charge is filled into a histogram and fitted. The correction factors are determined by centering the peak between these lines.
In contrast, the application of f_ATWD is problematic. Three methods to calculate this correction factor were tested: the summation of the pdf in 1 ns steps, a Gaussian quadrature integration [47] and an analytic integration of the Pandel function (3.10). But independent of the respective accuracy of the integration method, all these attempts resulted in significantly worse resolutions for all parameters. The problem is that the introduction of the correction factor in equation (5.7) results in a strong correlation between the time and the energy of the cascade. By shifting the time of the interaction, the minimizer gains an additional handle on the total expected charge of the DOM (see equation (5.10)). This seems to create local minima, which are consequently found by the minimizer. Fortunately, the time window correction can be approximated as well. A constant factor of

f_{\mathrm{ATWD}} = 10^{-0.3} \approx 0.5 \qquad (5.12)

fixes the remaining underestimation of the energy (see figure 5.11 and table 5.4). At energies between 10 TeV and 1 PeV the energy reconstruction achieves ∆log10(E) = 0.10, better than CscdLlh, which in the same energy interval achieved ∆log10(E) = 0.3. This improvement probably comes from the more accurate amplitude prediction that the Photonics tables provide. As shown in figure 3.10, dust layers have a strong impact on the expected number of photoelectrons; this influence is now properly treated inside the reconstruction.

The vertex resolutions of about 10 m and 7 m in x and z are no real improvement over the 8 m and 9 m from CscdLlh, though.
5.6 Cascade Direction
Considering the direction of the cascade results in additional computational effort but yields essentially no direction resolution. The resolutions of the other cascade parameters did not change significantly when the direction was fixed (see table 5.2). It shall be shown that two more measures are necessary to reconstruct the direction of the cascade.
5.6.1 Minimization Strategies
The toy model study in section 5.2 reveals the correlation between the vertex and the direction reconstruction. Under the very idealized conditions of the model, but with the same minimization strategy of reconstructing all parameters at once, a vertex error of 10 m leads to a direction error of 40° (see figure 5.3). Under the more realistic conditions of a full-featured detector simulation, a similar vertex resolution yields no direction resolution. Unlike in the simplistic model, here the likelihood seems to have local minima that complicate the task of the minimizer.

Figure 5.11: Left: Energy error distribution after the application of f_phot. Right: Energy error distribution after the application of f_phot and f_ATWD = 0.5.

selection criterion     | x [m]   | z [m]   | log(E)       | dir. [deg] | Θ [deg]
converged & inside IC80 | −2 ± 10 | −3 ± 7  | 0.02 ± 0.13  | 80         | 28 ± 50
& 2 ≤ log10(Eref) ≤ 4   | −2 ± 13 | −2 ± 6  | 0.03 ± 0.15  | 78         | 16 ± 55
& 4 ≤ log10(Eref) ≤ 6   | −2 ± 9  | −3 ± 6  | 0.04 ± 0.10  | 81         | 27 ± 51
& 6 ≤ log10(Eref) ≤ 7   | −2 ± 9  | −4 ± 7  | −0.11 ± 0.18 | 81         | 33 ± 46
& 7 ≤ log10(Eref) ≤ 8   | −1 ± 11 | −5 ± 10 | −0.68 ± 0.3  | 84         | 36 ± 43
& log10(Eref) ≥ 8       | −2 ± 15 | −7 ± 14 | −1.44 ± 0.32 | 79         | 38 ± 45

Table 5.4: Resolutions obtained from an all-parameter reconstruction with no iterations (I3SimpleFitter) and a pulse-based likelihood that contains the correction factors f_phot and f_ATWD. The results for the given energy ranges require the reconstruction to have converged; the vertex has to lie inside IC80, too. For a definition of the resolutions see section 4.1.3.
Therefore, a strategy is needed that protects the reconstruction from deceptive findings. This can be achieved by performing the minimization several times but with different starting points. As the result one chooses the set of cascade parameters with the smallest log-likelihood value. This minimization strategy is already implemented in I3IterativeFitter. After finding the first minimum, the found vertex, time and energy are used to seed the next iteration; the direction of the cascade is changed, though. The number of iterations N_iter is constant and set by the user. Ideally the available iterations are used to seed the minimizer with very different directions. So one has to choose N_iter points from the directional parameter space, and these should be uniformly distributed even if N_iter is small. Therefore, a Sobol sequence [47] is used to select these starting points.

Like I3SimpleFitter, the iterative approach lets the minimizer work on the parameter space that is defined by the parametrization service. So one can control which of the cascade parameters will be reconstructed. With these tools, three different strategies can be implemented (a sketch of the third strategy follows the list).
• Perform N_iter iterations and always reconstruct all parameters.

• Perform N_iter iterations, but reconstruct only the time, vertex and energy, while the direction is kept fixed. In this scheme the final direction of the cascade is one of the seed values; therefore, the resolution strongly depends on the number of iterations.

• Use the result of the second strategy to seed one final complete reconstruction.
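The following sketch illustrates the third strategy under simplifying assumptions: neg_llh is a placeholder for the cascade negative log-likelihood over the parameters (t, x, y, z, log E, zenith, azimuth), the Nelder-Mead method stands in for the SIMPLEX minimizer, and scipy's quasi-Monte Carlo module supplies the Sobol points.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

def iterative_fit(neg_llh, seed_params, n_iter=32):
    """Strategy 3: n_iter fixed-direction fits seeded with Sobol-distributed
    directions, followed by one final fit of all 7 parameters."""
    directions = qmc.Sobol(d=2, scramble=False).random(n_iter)
    best, best_dir = None, None
    for u, v in directions:
        zen, azi = np.arccos(1.0 - 2.0 * u), 2.0 * np.pi * v  # uniform on sphere
        fixed = (zen, azi)
        # fit time, vertex and energy while the direction is held fixed
        res = minimize(lambda p: neg_llh((*p, *fixed)),
                       seed_params[:5], method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best, best_dir = res, fixed
    # final fit of all 7 parameters, seeded with the best fixed-direction result
    return minimize(neg_llh, (*best.x, *best_dir), method="Nelder-Mead")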
The results of the three strategies are summarized in table 5.5. Compared to the results in table 5.4 they are encouraging. By considering different orientations of the cascade, the minimizer succeeds in a better determination of the vertex. All strategies result in a vertex resolution for converged events inside IC80 of about 6 m in x and y and 4 m in z. Even more promising is the distribution of direction errors. With the first strategy, half of the mentioned events are reconstructed with a direction error smaller than 57°; for the third strategy this angle amounts to 58°. A differentiation by energy suggests that the reconstruction performs best for reference energies between 10 TeV and 1 PeV, for which a direction resolution of 44° is achieved.

The first strategy results in the best resolution, but at enormous cost. When the 7-dimensional minimization is executed 32 times, the computation time rises to ≈ 10 min/event. The speed can be doubled by using the other two strategies (since the second strategy is part of the third, its time has not been measured separately, but it should not deviate much from the ≈ 5 min/event of the third), whereas the loss in precision is very small. The final all-parameter fit of strategy 3 improves the direction resolution slightly from 50° to 47°.
selection criterion    | x [m]   | z [m]   | dir. [deg] | Θ [deg] | log(E)       | t_cscd [ns]

Strategy 1 – 32 iterations, fit all 7 parameters (≈ 10 min/event)
converged              | −2 ± 9  | −2 ± 5  | 73 | 22 ± 40 | −0.04 ± 0.14 | 0 ± 25
& inside IC80          | −2 ± 6  | −2 ± 4  | 57 | 14 ± 31 | −0.01 ± 0.10 | −4 ± 21
& 2 ≤ log10(Eref) ≤ 4  | −2 ± 8  | −1 ± 4  | 62 | 8 ± 28  | 0.01 ± 0.12  | 7 ± 20
& 4 ≤ log10(Eref) ≤ 6  | −2 ± 5  | −2 ± 3  | 44 | 11 ± 25 | 0.00 ± 0.08  | −4 ± 15
& 6 ≤ log10(Eref) ≤ 7  | −2 ± 5  | −3 ± 4  | 48 | 19 ± 29 | −0.12 ± 0.17 | −15 ± 18
& 7 ≤ log10(Eref) ≤ 8  | −2 ± 8  | −5 ± 8  | 73 | 31 ± 45 | −0.64 ± 0.28 | −35 ± 32
& log10(Eref) ≥ 8      | −2 ± 13 | −8 ± 11 | 83 | 38 ± 45 | −1.41 ± 0.31 | −81 ± 60

Strategy 2 – 32 iterations with fixed direction
converged              | −2 ± 9  | −2 ± 6  | 72 | 19 ± 35 | −0.03 ± 0.15 | −1 ± 27
& inside IC80          | −2 ± 6  | −2 ± 4  | 59 | 13 ± 31 | −0.00 ± 0.11 | −5 ± 22
& 2 ≤ log10(Eref) ≤ 4  | −2 ± 8  | −1 ± 4  | 65 | 6 ± 31  | 0.02 ± 0.13  | 7 ± 20
& 4 ≤ log10(Eref) ≤ 6  | −2 ± 5  | −2 ± 4  | 50 | 9 ± 27  | 0.00 ± 0.08  | −5 ± 16
& 6 ≤ log10(Eref) ≤ 7  | −3 ± 5  | −3 ± 4  | 48 | 19 ± 26 | −0.12 ± 0.17 | −16 ± 17
& 7 ≤ log10(Eref) ≤ 8  | −2 ± 8  | −5 ± 7  | 68 | 36 ± 31 | −0.66 ± 0.28 | −36 ± 31
& log10(Eref) ≥ 8      | −2 ± 15 | −8 ± 11 | 78 | 31 ± 34 | −1.41 ± 0.31 | −79 ± 57

Strategy 3 – 32 iterations with fixed direction, final fit of all 7 parameters (≈ 5 min/event)
converged              | −2 ± 9  | −2 ± 6  | 71 | 19 ± 36 | −0.03 ± 0.15 | −1 ± 26
& inside IC80          | −2 ± 6  | −2 ± 4  | 58 | 13 ± 31 | −0.01 ± 0.11 | −4 ± 21
& 2 ≤ log10(Eref) ≤ 4  | −2 ± 8  | −1 ± 4  | 65 | 6 ± 31  | 0.01 ± 0.13  | 8 ± 20
& 4 ≤ log10(Eref) ≤ 6  | −2 ± 5  | −2 ± 4  | 47 | 8 ± 27  | 0.00 ± 0.09  | −4 ± 16
& 6 ≤ log10(Eref) ≤ 7  | −3 ± 5  | −3 ± 4  | 48 | 18 ± 27 | −0.12 ± 0.17 | −16 ± 18
& 7 ≤ log10(Eref) ≤ 8  | −1 ± 8  | −5 ± 7  | 70 | 26 ± 37 | −0.66 ± 0.28 | −34 ± 30
& log10(Eref) ≥ 8      | −2 ± 15 | −8 ± 11 | 80 | 34 ± 36 | −1.40 ± 0.31 | −80 ± 58

Table 5.5: Reconstruction results for the pulse-based likelihood. Three different iterative minimization strategies are compared. Three cuts were applied: select only converged events; select only converged events whose reconstructed vertices are contained inside a cylinder with r ≤ 500 m, |z| ≤ 500 m; select only converged events inside the cylinder whose reference energy is in a given range. For a definition of the resolutions see section 4.1.3. Especially for cascades with a reference energy between 10 TeV and 1 PeV the reconstruction performs well.
5.6.2 Quality Cuts
If one succeeds in identifying badly reconstructed events, one can remove them from the sample. This in turn improves the resolution of the remaining events. The standard cut applied to most of the presented results is the requirement that the reconstructed vertex is located inside a cylinder with 500 m radius and 1000 m height. This selection improves the results, but it has not been further motivated yet.

The cut judges the quality of the reconstruction solely by the reconstructed vertex. It presumes that in regions on the border of IC80 or outside of it, the reconstruction performs worse. To check this, one can divide the detection volume into small bins, to which mean vertex or energy errors can be assigned.

This is done in figure 5.12 with two-dimensional histograms. Cylindrical coordinates r, φ and z are introduced. All events that have their vertex in the ring [r, r + ∆r] × [0, 2π] × [z, z + ∆z] fall into one bin (since the r and z coordinates are divided into equally sized slices, the bins with small radii cover less volume and are therefore less populated). This can be done either for the true or the reconstructed vertex. For all events falling into one bin, the vertex and energy errors are averaged; for the energy error the averaging has to be done with the ratio Erec/Eref, and the plotted logarithm is the logarithm of this calculated average. The mean error is then assigned to the location of the bin.
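A sketch of this averaging, assuming the per-event quantities are available as numpy arrays:

import numpy as np

def mean_error_map(r, z, err, r_edges, z_edges):
    """Mean of err per (r, z) bin. For the energy error, err should be the
    ratio E_rec/E_ref; its average is then plotted logarithmically."""
    sums, _, _ = np.histogram2d(r, z, bins=(r_edges, z_edges), weights=err)
    counts, _, _ = np.histogram2d(r, z, bins=(r_edges, z_edges))
    with np.errstate(invalid="ignore"):
        return sums / counts  # NaN marks empty bins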
The histograms for the vertex error illustrate the decrease of the resolution for cascades that have their true vertex outside IC80. They also reveal that vertices of non-contained events are reconstructed closer to the detector: they fall into the bins that are located on the border of IC80, and in turn these bins have large mean vertex errors. The cylinder that was chosen for the vertex cut is drawn into the histograms. It was dimensioned to exclude these border bins.

The histograms also illustrate that fewer events are located in the central dust layer at z ≈ −100 m. The average errors do not indicate a resolution decrease, but the worse optical properties of this region suggest analysing how the resolutions change if events from the dust layer region are omitted. One can therefore create a dust layer cut that discards events with reconstructed z-coordinates in [−200 m, 50 m]. Finally, the ice below the central dust layer has the best optical properties (see figure 3.6); a deep ice cut restricts the cylindrical volume to the region below z = −200 m.

The histograms of the mean energy error suggest that the radius of the cylinder can be increased if one is only interested in the energy. Because of the decreasing vertex resolution one should expect that precision for all other cascade parameters is lost, though. Therefore, this cut is not further investigated, but it could be exploited in analyses that only aim for a good energy resolution.
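Expressed as boolean masks over the reconstructed vertex coordinates, the three cuts of table 5.6 read (a sketch; x, y, z are numpy arrays in metres):

import numpy as np

def quality_cuts(x, y, z):
    r = np.hypot(x, y)
    inside_ic80 = (r <= 500.0) & (np.abs(z) <= 500.0)
    dust_layer = inside_ic80 & ~((z >= -200.0) & (z <= 50.0))  # discard dust region
    deep_ice = inside_ic80 & (z <= -200.0)                     # clearest ice
    return inside_ic80, dust_layer, deep_ice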
The cuts and the remaining geometrical volumes are summarized in table 5.6. Of the about 15000 events in the dataset, roughly 40% fall into the cylinder. Further cuts on the vertices lead to a reduction of the sample, which is proportional to the loss of volume.
name        | description                                        | volume [km³] | ratio
inside IC80 | r ≤ 500 m, |z| ≤ 500 m                             | 0.79         | 1
dust layer  | r ≤ 500 m, −500 m ≤ z ≤ −200 m or 50 m ≤ z ≤ 500 m | 0.59         | 0.75
deep ice    | r ≤ 500 m, −500 m ≤ z ≤ −200 m                     | 0.24         | 0.3

Table 5.6: Summary of the utilized cuts that reduce the detector volume. The remaining volume is given as well. The volumes of the dust layer and deep ice cuts are compared to the volume of the first cut.
selection criterion    | x [m]   | z [m]   | dir. [deg] | Θ [deg] | log(E)       | t_cscd [ns]

Strategy 1 – 32 iterations, fit all 7 parameters (≈ 10 min/event)
dust layer             | −2 ± 6  | −2 ± 4  | 55 | 14 ± 35 | 0.00 ± 0.09  | −4 ± 19
& deep ice             | −2 ± 5  | −2 ± 3  | 46 | 13 ± 24 | 0.00 ± 0.09  | −4 ± 16
& 2 ≤ log10(Eref) ≤ 4  | −2 ± 7  | −2 ± 4  | 55 | 7 ± 21  | 0.01 ± 0.12  | 6 ± 17
& 4 ≤ log10(Eref) ≤ 6  | −2 ± 4  | −2 ± 3  | 30 | 11 ± 16 | 0.01 ± 0.07  | −2 ± 8
& 6 ≤ log10(Eref) ≤ 7  | −3 ± 4  | −3 ± 3  | 34 | 16 ± 21 | −0.11 ± 0.16 | 16 ± 21
& 7 ≤ log10(Eref) ≤ 8  | −2 ± 9  | −5 ± 5  | 70 | 35 ± 38 | −0.60 ± 0.24 | 35 ± 38
& log10(Eref) ≥ 8      | −3 ± 16 | −6 ± 11 | 88 | 55 ± 35 | −1.4 ± 0.26  | 55 ± 35

Strategy 3 – 32 iterations with fixed direction, final fit of all 7 parameters (≈ 5 min/event)
dust layer             | −2 ± 6  | −2 ± 4  | 66 | 13 ± 30 | 0.00 ± 0.10  | 4 ± 19
& deep ice             | −2 ± 5  | −2 ± 3  | 48 | 13 ± 27 | 0.01 ± 0.10  | −5 ± 17
& 2 ≤ log10(Eref) ≤ 4  | −2 ± 6  | −1 ± 3  | 52 | 5 ± 27  | 0.01 ± 0.12  | 6 ± 16
& 4 ≤ log10(Eref) ≤ 6  | −2 ± 4  | −2 ± 3  | 33 | 9 ± 20  | 0.01 ± 0.08  | 0 ± 8
& 6 ≤ log10(Eref) ≤ 7  | −3 ± 4  | −2 ± 3  | 37 | 15 ± 21 | −0.12 ± 0.17 | −14 ± 15
& 7 ≤ log10(Eref) ≤ 8  | −2 ± 9  | −3 ± 5  | 65 | 31 ± 32 | −0.61 ± 0.24 | −28 ± 19
& log10(Eref) ≥ 8      | −4 ± 16 | −6 ± 11 | 83 | 48 ± 30 | −1.4 ± 0.27  | −74 ± 53

Table 5.7: Effect of the quality cuts from table 5.6 on the reconstruction results from table 5.5. The cut on the reference energy is applied to events that pass the deep ice cut. For a definition of the resolutions see section 4.1.3.
Figure 5.12: Mean vertex and energy errors in different detector regions. The detector is divided into bins, and the mean vertex and energy errors of all events having their vertex in a given bin are assigned to that region. This is done for the true and the reconstructed vertex, respectively.
After applying these quality cuts, the resolutions improve significantly (see table 5.7). In the clear ice and for reference energies between 10 TeV and 1 PeV a direction resolution of 30° is achieved; the corresponding zenith resolution is 16°. For all events that survive the deep ice cut the direction resolution is still 46°. This may not seem very good compared with the direction resolution for muons, which is about 1°–3° in IC80 [30], but for the cascade detection channel this resolution is of previously unachieved quality. The vertex resolution of about 4 m and the energy resolution of 0.07 are also better than all other presented results for this dataset.

The deep ice cut drastically reduces the geometric volume to only 0.24 km³. If one applies only the dust layer cut, the volume is 2.5 times bigger, but the direction resolution for events of all energies decreases from 46° to 55°. Furthermore, the result worsens for events outside the given energy range; especially for events above 10 PeV the reconstruction performs badly. Therefore, in the next section an extension for higher energetic events will be discussed.
Figure 5.13: Obtained direction resolution as a function of log10(Eref/GeV) for the different minimization strategies (1 and 3) and quality cuts (inside IC80, deep ice).
5.7 High Energy Modification of the Likelihood
The energy reconstruction worsens for higher energies. All presented results indicate that for energies above 1 PeV the mean of the Gaussian, which defines the energy resolution, is shifted to negative values. The scatter plot in figure 5.14 shows the reconstructed energy versus the reference energy. Above an energy of a few PeV, the slope of the distribution decreases. This effect has also been reported for a waveform reconstruction of muons [46]. It should mainly result from the saturation that the likelihood does not yet describe. However, the comparison between the recorded and the predicted charge reveals another effect that gains importance at high energies.
Figure 5.14: This scatter plot shows the reconstructed energy versus the reference energy. The results are obtained with minimization strategy 3, and the deep ice cut has been applied. The distribution is fitted with a broken linear function with slopes m1 and m2 to guide the eye. Above 1 PeV the energy reconstruction worsens.
The likelihood assumes that the deviation of the recorded charge from the prediction follows a Poisson distribution. For a Poisson distribution with parameter µ the relative error decreases with increasing µ [21]:

\sigma^2_{\mathrm{Poisson}} = \mu, \qquad \frac{\sigma_{\mathrm{Poisson}}}{\mu} = \frac{1}{\sqrt{\mu}}. \qquad (5.13)
In figure 5.7 the width of the charge distribution is much larger than expected from a Poisson process. It was reduced by considering the time window of the ATWD. Now a closer look shall be taken at the remaining width of the distribution from figure 5.9.
For the same data the twofold corrected expectation from equation (5.8) is compared to the ratio of the seen to the expected charge, n_o/µ_o^(3). In figure 5.15 the logarithm of this ratio is plotted. The black dashed lines indicate the logarithmic Poisson standard deviation log10(1 ± σ_Poisson/µ_o^(3)). Except for the noise and saturation affected regions, the Poisson error describes the slope of the distribution well.

But for larger numbers of photoelectrons the decrease of the standard deviation is not reflected in the data, since the distribution stays broad. To account for this effect, the assumption of a Poisson process can be extended by introducing a relative error floor for larger charge predictions. This can be done by replacing the Poisson distribution with a Gaussian, which allows one to control the variance. If the variance of the Gaussian is chosen to scale with the square of the prediction, σ²_Gauss = (µ_o^(3))²/α², the curve describing the error in figure 5.15 can be spread to fit the data. The constant of proportionality is determined by α.

So one can distinguish two regions: one governed by the Poisson error and the other by the Gaussian error. At the intersection µ_c the standard deviations of the Poisson and the Gaussian shall merge, so one has to require continuity of their functional descriptions:

\sigma_{\mathrm{Poisson}}(\mu_c) = \sigma_{\mathrm{Gauss}}(\mu_c), \qquad \mu_c = \alpha^2. \qquad (5.14)

Thus, the constant α plays two roles in this extension: it denotes the border of the two regions and it is the inverse of the relative error floor. This construction is illustrated for a supposed 20% error in figure 5.15.
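The patched standard deviation used for the error curve in figure 5.15 can be written compactly; α = 5 corresponds to the supposed 20% floor:

import math

def patched_sigma(mu, alpha=5.0):
    """Poisson width up to mu_c = alpha**2, then a relative floor of 1/alpha."""
    return math.sqrt(mu) if mu <= alpha ** 2 else mu / alpha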
Now one can incorporate the error floor into the pulse-based likelihood. Effectively, the Poisson likelihood compares, for a given hypothesis, every measured pulse to its predicted charge. Large deviations relative to the narrow Poisson prediction are penalized by a large contribution to the log-likelihood for that hypothesis. This can be relaxed by comparing large pulses to a Gaussian with a guaranteed minimum width: an error floor. The likelihood will therefore consist of two parts: one that compares part of the pulses to a Poisson distribution and another that compares the remaining pulses to a Gaussian distribution.

One has to clarify which criterion decides whether a pulse is compared to the Poisson or the Gaussian distribution. The motivation of the extension used the predicted charge to distinguish the two regions. In the likelihood this should be avoided, since the predicted charge depends on the cascade hypothesis: the distribution a measured pulse is compared to could change while the minimizer traverses the parameter space. The same argument holds for the standard deviation of the Gaussian; the width of the distribution should not depend on the hypothesis.
Figure 5.15: Comparison between the twofold corrected expectation from equation (5.8) and the ratio of the seen to the expected charge, n_o/µ_o^(3). The plotted distribution is the same as in the lower plot of figure 5.9. The Poisson standard deviation and a patched error curve, which includes an error floor, are plotted as well.
Therefore, the following extension has been implemented. Depending on the constant α, a charge threshold n_t = α² is defined. Pulses with a charge n_oi below this threshold are compared to a Poisson distribution; larger pulses are compared to a Gaussian with σ = n_oi/α. The likelihood can then be formulated as:

\mathcal{L} = \prod_{\mathrm{hit}\ o} \Biggl[\ \prod_{i:\, n_{oi} \le n_t} \frac{\mu_{oi}^{n_{oi}}}{n_{oi}!} \exp(-\mu_{oi}) \prod_{i:\, n_{oi} > n_t} \frac{\alpha}{\sqrt{2\pi}\, n_{oi}} \exp\Bigl(-\alpha^2 \frac{(n_{oi} - \mu_{oi})^2}{2 n_{oi}^2}\Bigr) \Biggr] \prod_{\mathrm{nohit}\ o} \exp(-\mu_o). \qquad (5.15)

The implementation of this likelihood has the beneficial feature of resembling the Poisson likelihood for large values of α: if the threshold n_t is above the saturation level, all pulses are treated with the Poisson likelihood.
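A sketch of the per-pulse contribution to the negative log-likelihood implied by (5.15):

import math

def neg_log_pulse_term(n, mu, alpha):
    """-log of the per-pulse factor in (5.15); threshold n_t = alpha**2."""
    if n <= alpha ** 2:  # Poisson region
        return mu - n * math.log(mu) + math.lgamma(n + 1.0)
    sigma = n / alpha    # Gaussian region with relative error floor 1/alpha
    return (math.log(sigma) + 0.5 * math.log(2.0 * math.pi)
            + 0.5 * ((n - mu) / sigma) ** 2)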
The reconstruction was done for different values of α (corresponding to a 5%, 10%, 20% and 50% relative error floor). The third minimization strategy (32 iterations with fixed direction, final fit of all parameters) has been used. The results for events that pass the deep ice cut are given in table 5.8. It compares the direction and energy resolutions in two energy ranges: the best performing energy interval, 10^4 GeV–10^6 GeV, is compared to the range 10^7 GeV–10^8 GeV. Additionally, the relation between the reference and the reconstructed energies for the different error floors is illustrated in figure 5.16. To guide the eye, a broken linear function with slope m1 below the energy Ec and m2 above is fitted through the scatter plot; the values of these parameters are also given in table 5.8.

Indeed, the introduction of an error floor improves the reconstruction of highly energetic events, but at the same time it worsens the reconstruction for events with lower energies. Further studies would be needed to evaluate the potential of this extension for the reconstruction of highly energetic cascades. However, since such cascades are not adequately described within this dataset, their discussion shall end here.

Finally, it shall be pointed out that the introduced error floor may help in the treatment of systematic errors. When the reconstruction is applied to real data, the need to cope with such errors could arise. The presented extension could prove valuable for this purpose.
      |      |      |          | deep ice 10 TeV–1 PeV     | deep ice 10 PeV–100 PeV
error | m1   | m2   | Ec [GeV] | log10(E)     | dir. [deg] | log10(E)     | dir. [deg]
5%    | 0.92 | 0.46 | 10^6.27  | 0.01 ± 0.08  | 33         | −0.61 ± 0.24 | 65
10%   | 0.92 | 0.47 | 10^6.22  | 0.01 ± 0.08  | 34         | −0.61 ± 0.23 | 62
20%   | 0.90 | 0.59 | 10^5.7   | 0.0 ± 0.09   | 36         | −0.74 ± 0.15 | 70
50%   | 0.88 | 0.68 | 10^4.87  | −0.12 ± 0.19 | 41         | −0.78 ± 0.11 | 85

Table 5.8: Results from a reconstruction with the high energy extension for different relative error floors. Strategy 3 was used (32 iterations with fixed direction, final fit of all parameters) and the deep ice cut was applied. For a definition of the resolutions see section 4.1.3.
Figure 5.16: These scatter plots show the reconstructed energy versus the reference energy from a reconstruction with the extended likelihood formulation, for relative error floors of 5%, 10%, 20% and 50%. The results are obtained with minimization strategy 3, and the deep ice cut has been applied. The distribution is fitted with a broken linear function with slopes m1 and m2 to guide the eye. Above 1 PeV the energy reconstruction worsens.
6 Summary and Outlook
At the end of this study the following conclusions can be drawn: the reconstruction of cascade-like events in IceCube can be significantly improved when the reconstruction algorithm considers both the full waveform information and the characteristics of light propagation in the detector medium. The performance of the algorithm is further enhanced by taking the direction of the cascade into account. This requires a computationally intensive minimization procedure, by which the reconstruction of the cascade direction becomes feasible.
The following considerations preceded the development of the algorithm: in cascade-like events the deposited energy, the neutrino direction as well as the time and location of the interaction are of interest. All these pieces of information are contained in the arrival times and the intensities of the emitted Čerenkov photons. The results from the detailed simulation of light propagation in ice, Photonics, offer the possibility to study the light yield of cascades. In this thesis it was argued that although the emission from the cascade is anisotropic, it is impossible to retrieve the neutrino direction from the recorded amplitudes at different locations in the detector. Nevertheless, hints on this direction are still contained in the arrival times of the photons.
To retrieve this remaining information, a maximum likelihood estimation that analyses the timing information with the help of the predictions by Photonics has proven to be powerful. In this thesis the method was tested under simplified conditions before it was implemented in the IceCube software framework. This toy model study suggested that in principle the neutrino direction is determinable. Moreover, it revealed a strong correlation between the vertex and direction reconstruction.
A likelihood reconstruction for cascades was then implemented in the IceCube software and its performance in reconstructing electron neutrino events was studied. To compensate for a systematic energy underestimation, two corrections had to be implemented in the algorithm. For cascades that are contained in the detector and that have energies between 10 TeV and 1 PeV, an energy resolution of ∆log10(E) = 0.1 was achieved. The direction could not be determined, though.

This thesis suggests that one can deal with the correlation between the vertex and direction reconstruction by performing the reconstruction iteratively. For cascades with energies between 10 TeV and 1 PeV and a vertex in the region of the detector with the clearest ice, this results in a direction resolution of 30° and an energy resolution of ∆log10(E) = 0.07.
These achieved resolutions could have a significant impact on the cascade detection channel of IceCube. Since so far the neutrino direction could not be obtained from cascade-like events, diffuse analyses had to be performed. The achieved direction resolution will at least allow one to distinguish upgoing and downgoing event signatures. Together with the improved energy reconstruction, this may yield powerful selection criteria for interesting events. Due to the immense computational effort needed by the algorithm, it will require an effective prefilter, though.
Before this algorithm is applied in an analysis, some further studies should be done. First, the reconstruction should be tested with a dataset that is also accurate at higher energies, and the improved aha ice model should be tested. Although the coarsely binned but interpolated tables performed very well, the fine-grained Photonics tables could improve the resolution. Moreover, one should try to back up this Monte Carlo study with real data: each DOM contains a set of LEDs, by which it can play the role of an anisotropic light source in the ice. Being able to determine the orientation of a flashing DOM would increase the trust in the capabilities of the algorithm.
The likelihood function itself also provides possibilities to enhance the algorithm. So far, the local coincidence criterion is not contained in the formulation of the likelihood. Furthermore, a possibility to improve the reconstruction of events above 10 PeV and to cope with systematic errors has been introduced in the last chapter, and it yielded promising results. That is why one can finally conclude: the presented reconstruction of cascade-like events in IceCube allows for an estimation of the direction of the incident neutrino, and it provides many opportunities for further investigations.
Bibliography

[1] J. G. Learned and K. Mannheim, "High-Energy Neutrino Astrophysics," Annual Review of Nuclear and Particle Science 50 (2000) no. 1, 679–749. http://arjournals.annualreviews.org/doi/abs/10.1146/annurev.nucl.50.1.679.

[2] R. J. Gould and G. Schréder, "Opacity of the Universe to High-Energy Photons," Physical Review Letters 16 (February 1966) 252+. http://dx.doi.org/10.1103/PhysRevLett.16.252.

[3] A. M. Hillas, "Cosmic Rays: Recent Progress and some Current Questions," September 2006. http://arxiv.org/abs/astro-ph/0607109v2.

[4] The Pierre Auger Collaboration, "Correlation of the Highest-Energy Cosmic Rays with Nearby Extragalactic Objects," Science 318 (November 2007) 938–943. http://dx.doi.org/10.1126/science.1151124.

[5] F. Halzen, "High-Energy Neutrino Astronomy: from AMANDA to IceCube," October 2003. http://arxiv.org/abs/astro-ph/0311004v1.

[6] The IceCube Collaboration, "IceCube Preliminary Design Document," tech. rep., October 2001. http://www.icecube.wisc.edu/science/publications/pdd/pdd.pdf.

[7] The IceCube Collaboration, "First Year Performance of The IceCube Neutrino Telescope," April 2006. http://arxiv.org/abs/astro-ph/0604450.

[8] J. Lundberg, P. Miocinovic, K. Woschnagg, T. Burgess, J. Adams, S. Hundertmark, P. Desiati, and P. Niessen, "Light tracking through ice and water – Scattering and absorption in heterogeneous media with Photonics," August 2007. http://arxiv.org/abs/astro-ph/0702108v2.

[9] M. Fukugita and T. Yanagida, Physics of Neutrinos. Springer, August 2003.

[10] V. Aynutdinov et al., "Search for a Diffuse Flux of High-Energy Extraterrestrial Neutrinos with the NT200 Neutrino Telescope," August 2005. http://arxiv.org/abs/astro-ph/0508675.

[11] N. Schmitz, Neutrinophysik. Teubner Verlag, 1997.

[12] R. Gandhi, C. Quigg, M. H. Reno, and I. Sarcevic, "Neutrino interactions at ultrahigh energies," Physical Review D 58 (1998) no. 9, 093009+. http://dx.doi.org/10.1103/PhysRevD.58.093009.

[13] S. L. Glashow, "Resonant Scattering of Antineutrinos," Physical Review 118 (April 1960) 316+. http://dx.doi.org/10.1103/PhysRev.118.316.

[14] B. Voigt, Sensitivity of the IceCube Detector for Ultra High Energetic Electron Neutrino Events. PhD thesis, Humboldt-Universität zu Berlin, 2008.

[15] P. A. Čerenkov, "Visible Radiation Produced by Electrons Moving in a Medium with Velocities Exceeding that of Light," Physical Review 52 (1937) no. 4, 378+. http://dx.doi.org/10.1103/PhysRev.52.378.

[16] J. V. Jelley, Čerenkov Radiation and its Applications. United Kingdom Atomic Energy Authority, 1958.

[17] D. H. Perkins, Introduction to High Energy Physics. Cambridge University Press, April 2000.

[18] W. M. Yao, "Review of Particle Physics," Journal of Physics G: Nuclear and Particle Physics 33 (July 2006) 1–1232. http://dx.doi.org/10.1088/0954-3899/33/1/001.

[19] Y.-S. Tsai, "Pair production and bremsstrahlung of charged leptons," Reviews of Modern Physics 46 (October 1974) 815+. http://dx.doi.org/10.1103/RevModPhys.46.815.

[20] C. H. Wiebusch, The Detection of Faint Light in Deep Underwater Neutrino Telescopes. PhD thesis, RWTH Aachen, 1995.

[21] I. N. Bronstein, K. A. Semendjajew, and G. Musiol, Taschenbuch der Mathematik. Verlag Harri Deutsch, fourth ed., 1999.

[22] M. P. Kowalski, Search for Neutrino-Induced Cascades with the AMANDA-II Detector. PhD thesis, Humboldt-Universität zu Berlin, 2003.

[23] C. Grupen, Teilchendetektoren. BI-Wiss.-Verl., 1993.

[24] A. Einstein, "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt," Annalen der Physik 322 (1905) 132–148. http://dx.doi.org/10.1002/andp.19053220607.

[25] C. Portello-Roucelle, "Latest news from the DOMsimulator," in IceCube Collaboration Meeting Spring 2008, 2008.

[26] R. Dossi, A. Ianni, G. Ranucci, and Yu, "Methods for precise photoelectron counting with photomultipliers," Nucl. Instrum. Meth. A451 (2000) 623–637.

[27] M. Inaba, "General performance of the IceCube detector and the calibration results," in International Workshop on New Photon-Detectors PD07, June 2007. http://pos.sissa.it//archive/conferences/051/031/PD07_031.pdf.

[28] K. Hanson and O. Tarasova, "Design and production of the IceCube digital optical module," Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 567 (November 2006) 214–217. http://dx.doi.org/10.1016/j.nima.2006.05.091.

[29] The AMANDA Collaboration, "Muon Track Reconstruction and Data Selection Techniques in AMANDA," July 2004. http://arxiv.org/abs/astro-ph/0407044v1.

[30] S. Grullon, D. J. Boersma, and G. Hill, "Photonics-based Log-Likelihood Reconstruction in IceCube," tech. rep., June 2008.

[31] M. Ackermann et al., "Optical properties of deep glacial ice at the South Pole," Journal of Geophysical Research 111 (July 2006) D13203+. http://www.agu.org/pubs/crossref/2006/2005JD006687.shtml.

[32] S. L. Miller, "Clathrate Hydrates of Air in Antarctic Ice," Science 165 (August 1969) 489–490. http://dx.doi.org/10.1126/science.165.3892.489.

[33] "Photonics." http://photonics.tsl.uu.se/index.php.

[34] C. Wiebusch and J. Lundberg, README of the wl06v210 photonics tables, 2006. http://photonics.tsl.uu.se/tablesdir2/wl06v210/README_wl06v210.txt.

[35] J. Lundberg, README for the swe_06_he_i3-bulkptd photonics tables, 2006. http://photonics.tsl.uu.se/tables-dir/swe_06_he_i3-bulkptd.README.

[36] D. Pandel, "Bestimmung von Wasser- und Detektorparametern und Rekonstruktion von Myonen bis 100 TeV mit dem Baikal-Neutrinoteleskop NT-72," 1996.

[37] N. van Eijndhoven, O. Fadiran, and G. Japaridze, "Implementation of a Gauss convoluted Pandel PDF for track reconstruction in Neutrino Telescopes," September 2007. http://arxiv.org/abs/0704.1706.

[38] "The ROOT System Home Page," 2008. http://root.cern.ch/.

[39] "HDF Group," 2008. http://hdf.ncsa.uiuc.edu/HDF5/.

[40] D. Chirkin, "Feature Extraction of IceCube Waveforms," tech. rep.

[41] C. Scott and R. Nowak, "Maximum Likelihood Estimation," May 2004. http://cnx.org/content/m11446/latest/.

[42] D. Johnson, "Cramer-Rao Bound," August 2003. http://cnx.org/content/m11266/latest/.

[43] A. W. F. Edwards, Likelihood. The Johns Hopkins University Press, expanded ed., October 1992.

[44] G. D'Agostini, Bayesian Reasoning in Data Analysis: A Critical Introduction. World Scientific Publishing Company, 2003.

[45] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification (2nd Edition). Wiley-Interscience, November 2001.

[46] S. Grullon, D. J. Boersma, G. Hill, K. Hoshina, and K. Mase, "Reconstruction of high energy muon events in IceCube using waveforms," in 30th International Cosmic Ray Conference, 2007.

[47] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C++: The Art of Scientific Computing. Cambridge University Press, February 2002.

[48] "Documentation for Gulliver," 2008. http://www-zeuthen.desy.de/~schlenst/icecube/docstrunk/gulliver/index.html.

[49] F. James, "MINUIT – Function Minimization and Error Analysis – Reference Manual," 2000. http://wwwasdoc.web.cern.ch/wwwasdoc/minuit/minmain.html.

[50] A. Olivas, private communication, 2008.
A Software Versions and Module Configurations
The IceCube reconstruction software IceRec was used in version V02-01-00. Photonics was used in version "1.50: starlight release 1d".
Listing A.1: Extract of the script that defines the processing chain

#################################################
# SERVICES
#################################################

tray.AddService("I3BasicSeedServiceFactory", "seedprep")(
    ("InputReadout", "FEPulses"),
    ("TimeShiftType", "TNone"),
    ("NChEnergyGuessPolynomial", [1.7, 0., 0.7, 0.03]),
    #("FixedEnergy", 100.0 * I3Units.TeV),
    ("FirstGuess", "forgedseed"))

tray.AddService("I3GSLRandomServiceFactory", "I3RandomService")

tray.AddService("I3GulliverMinuitFactory", "minuit")(
    ("MaxIterations", 10000),
    ("Tolerance", 0.1),
    ("Algorithm", "SIMPLEX")
)

tray.AddService("I3PDFPhotorecFactory", "photorec")(
    ("PSI_Name", "PSInterface"),
    ("HypothesisType", "Cascade")
)

noiserate = 700 * I3Units.hertz
threshold_in_NPE = 0.1

tray.AddService("I3PhotorecLogLikelihoodFactory", "wfgul")(
    ("InputPulses", "FEPulses"),
    ("NoiseRate", noiserate),
    ("EventHypothesis", "Cascade"),
    ("SkipWeights", False),
    ("PDF", "photorec"),
    ("EventLength", -1),
    ("GaussianErrorConstant", 1000),
    ("UseBaseContributions", True)
)

tray.AddService("I3SimpleParametrizationFactory", "simpar")(
    ("StepT", 10.0 * I3Units.ns),
    ("StepX", 50.0 * I3Units.m),
    ("StepY", 50.0 * I3Units.m),
    ("StepZ", 50.0 * I3Units.m),
    ("StepLogE", .1),
    ("StepZenith", 0.1 * I3Units.radian),
    ("StepAzimuth", 0.1 * I3Units.radian),
    ("BoundsX", [-2000.0 * I3Units.m, +2000.0 * I3Units.m]),
    ("BoundsY", [-2000.0 * I3Units.m, +2000.0 * I3Units.m]),
    ("BoundsZ", [-2000.0 * I3Units.m, +2000.0 * I3Units.m]),
    ("BoundsE", [-1, 11])
)

#################################################
# MODULE CHAIN
#################################################

### Parameters as in email 6 December 2007 (Patrick Toale)
tray.AddModule("I3CFirstModule", "CFirst")(
    ("ResultName", "CFirst"),
    ("InputType", "RecoPulse"),
    ("RecoSeries", "FEPulses"),
    ("FirstLE", False),
    #("FirstLE", True),
    ("MinHits", 8),
    ("DirectHitRadius", 1000.),
    ("SmallClusterRadius", 1000.),
    ("LargeClusterRadius", 1000.),
    ("TriggerWindow", 200.),
    ("TriggerHits", 3),
    ("EarlyHitStart", -1500.),
    ("EarlyHitStop", -200.),
    ("AmpWeightPower", 0)
)

tray.AddModule("I3TensorOfInertia", "CascadeToI")(
    ("InputReadout", "FEPulses"),
    ("Name", "CascadeToI"),
    ("MinHits", 8),
)

tray.AddModule("I3SeedForge", "seedforge")(
    ("type", "cascade"),
    ("time", "CFirst"),
    #("time", "mc"),
    ("position", "CascadeToI"),
    ("direction", "CascadeToI"),
    ("energy", "NaN"),
    ("mctree", "ICMCTree"),
    ("mcparticle", "me_Cascade"),
    ("result", "forgedseed")
)

#
# gulliver likelihood fit
#

tray.AddModule("I3IterativeFitter", "simplellh")(
    ("SeedService", "seedprep"),
    ("RandomService", "SOBOL"),
    ("Parametrization", "simpar"),
    ("LogLikelihood", "wfgul"),
    ("Minimizer", "minuit"),
    ("NIterations", 32)
)
B Selected Error Distributions
On the following pages some of the resulting error distributions are shown.
Figure B.1: Result of the toy model reconstruction. Here bulk ice was used, the
energy spectrum is flat and the time was regarded as known.
Figure B.2: Result of the toy model reconstruction. Here layered ice was used, the
energy spectrum is flat and the time is regarded as known.
Figure B.3: Result of the toy model reconstruction. Here layered ice was used, the energy spectrum follows an E^−1 power law and the time is regarded as known.
Figure B.4: Result of the toy model reconstruction. Here layered ice was used, the energy spectrum follows an E^−1 power law and all parameters of the cascade, including the time, get reconstructed.
Figure B.5: Results of a full-feature reconstruction with minimization strategy 1 and
the deep ice cut.
Figure B.6: Results of a full-feature reconstruction with minimization strategy 1 and
the deep ice cut. Only events with reference energies between 10 TeV
and 1 PeV are selected.
Zusammenfassung

This thesis deals with the reconstruction of cascade-like events in the neutrino telescope IceCube. This event class consists of electromagnetic or hadronic showers, which at energies below 10 PeV have characteristic lengths of a few metres and can be assumed to be point-like in this detector. At their origin such cascades exhibit an anisotropic distribution of the emitted photons, but part of this directional information is lost through scattering in the ice. A reconstruction algorithm is developed which, based on an event description that is as complete as possible, a detailed simulation of photon propagation in the Antarctic ice, and a maximum likelihood approach, determines the energy, vertex position and direction of the cascade as well as possible.
Erklärung

I hereby declare that I have written this thesis independently and have not used any sources or aids other than those indicated.

Place and date                                        Signature