Search for Supersymmetry
in the Same Sign di-lepton and one lepton channels
Gunn Kristine Holst Larsen
Division of Experimental Particle Physics
Department of Physics
University of Oslo
Norway
Thesis
submitted for the degree
Master of Science
October 2008
Acknowledgements
First of all I would like to thank my supervisor, professor Farid Ould-Saada, for suggestions and comments, for guiding me through my master thesis, and for being a great source of inspiration. With him everything is possible.
I would also like to thank the EPF-group for a good working environment and for giving
master students the opportunity to go to several conferences and to CERN. A special thanks
to post-doc Borge Kile Gjelsten and post-doc Bjorn Hallvard Samset, for helping me walk
to the end. Also thanks to PhD student Katarina Pajchel and to post-doc Yuriy Volodumyrovych Pylypchenko for help when it was needed.
I am also thankful to my fellow master students Eirik Gramstad, Maiken Pedersen, Havard
Gjersdal and Lillian Smestad for good company.
Thanks to my family and to Auke-Dirk Pietersma for having an infinitely big supply of help
and support when I needed it, and thanks to Romke Dam.
Contents

Introduction

1 Theoretical Framework
  1.1 The Standard Model
      1.1.1 Symmetries
      1.1.2 Creating the Lagrangian From the Gauge Principle
      1.1.3 Higgs Mechanism
  1.2 Why Beyond the Standard Model?
      1.2.1 The Hierarchy Problem
      1.2.2 Experimentally Motivated Questions
  1.3 Supersymmetry
      1.3.1 Additional Conserved Operator
      1.3.2 The Minimal Supersymmetric Standard Model, MSSM
  1.4 Supersymmetry Breaking
      1.4.1 Minimal Gravity Mediated Supersymmetry Breaking, mSUGRA
  1.5 Phenomenological Effects
      1.5.1 Quadratic corrections
      1.5.2 Cosmology
      1.5.3 Coupling constants

2 Observing Particles In An Experiment
  2.1 Arranging For High Energy Collisions, The LHC
  2.2 Behavior of Particles in Matter
  2.3 ATLAS, A Toroidal LHC ApparatuS
      2.3.1 Magnet system
      2.3.2 Inner Detector
      2.3.3 Calorimeters
      2.3.4 Muon Spectrometer
      2.3.5 Forward Detectors
      2.3.6 Trigger and Data Acquisition
      2.3.7 Detector Control System
      2.3.8 GRID
  2.4 Simulated Data
      2.4.1 Event Generation
      2.4.2 Detector Simulation
      2.4.3 Digitalization
      2.4.4 Reconstruction
  2.5 Work At CERN
      2.5.1 SCT End-Cap Installation
      2.5.2 Static Web Page for the SCT DCS
      2.5.3 Data Quality Shift at M5 and M8 Pre-beam Tests

3 Data Sets and Strategy
  3.1 Cut And Count Method
  3.2 Optimizing The Cut According To The Significance
  3.3 Characteristics Of The mSUGRA Points
  3.4 List of Data Sets And Cross Sections

4 Lepton Isolation
  4.1 Standard Object Definition: Electrons, Muons, Jets
  4.2 Study Of Lepton Isolation
      4.2.1 Scan Isolation Cut On EtCone20 Variable
      4.2.2 Scan pT Cut For Leptons
  4.3 Lepton Purity

5 Prospects for Supersymmetry in Same Sign di-Lepton and One-Lepton Final States
  5.1 Important Variables and Backgrounds
      5.1.1 Effective Mass, Meff
      5.1.2 Further Discriminating Variables
      5.1.3 Main Standard Model Background Processes
      5.1.4 Effect Of Cuts On Different Processes
  5.2 Discovery Potential In The Same Sign Channel
      5.2.1 Charge Misidentification
  5.3 Discovery Potential In The One Lepton Channel
      5.3.1 Cutting Variables After One Lepton Selection
      5.3.2 Optimize With Matrix Approach
      5.3.3 Conclusion

6 Conclusion

A Notation
  A.1 Pauli Matrices
  A.2 Gell-Mann Matrices
  A.3 The Dirac Matrices

B Note On alpgen And pythia Systematics

C EtCone20 Efficiency And Rejection

D One Lepton
  D.1 Variables After One Lepton Selection
      D.1.1 Cut Step By Step For Optimized Cut

E Software
  E.1 ROOT
      E.1.1 Cuts In Database
Introduction
High Energy Particle Physics is the study of the properties of all elementary particles that have been discovered. These are the three families of leptons and quarks, the intermediate gauge bosons and some hundreds of hadrons made of quarks and antiquarks. Examples are the proton and the neutron, made of uud and udd combinations respectively, and the positively charged pion (π+), made of a u quark and an anti-d quark. In addition, the discipline of particle physics also includes particles that have not yet been discovered, such as the hypothetical Higgs scalar boson, responsible for generating particle masses, supersymmetric particles, introduced to help grand unification of the fundamental forces of nature and to explain dark matter in the universe, heavy neutrinos and additional hadrons.
For the discovered particles (listed in figure 1.1) and their antiparticles, the Standard Model explains the experimental measurements up to this day. This framework predicts how particles decay, and it explains scattering and annihilation processes in a colliding experiment. Increasing the center of mass energy of the collision gives rise to heavy particle production. The Tevatron, a p-p̄ collider experiment currently taking data in Illinois, has up to now operated with the world's highest center of mass energy, nearly √s = 2 TeV. Physics beyond the Standard Model has not been seen there so far.

Figure 1: ATLAS first event with beam, 10th of September 2008.
However, at an even higher energy, some new physics can be expected. Supersymmetry is an interesting extension of the Standard Model, where all Standard Model fermions have undiscovered boson partners and all Standard Model bosons have undiscovered fermion partners.
These are very exciting days in the history of particle physics. The Large Hadron Collider (LHC) at CERN is about to start up. It will have a proton-proton center of mass energy of √s = 14 TeV, and a much higher luminosity than earlier colliders. On the tenth of September
2008 the first beams were successfully injected and circulated the whole LHC in both directions. This is a machine that makes us dream! Figure 1 shows the first event recorded in the ATLAS detector. When the detector is fully understood, aligned and calibrated, data might show new physics and give hints to what the underlying dynamics is. As an example, Supersymmetry can be discovered up to the TeV scale.
In the low energy universe only a handful of particles is needed to describe nature. Atoms are made up of first generation particles: protons (uud), neutrons (udd) and electrons. The fundamental forces that act upon the matter particles are the electroweak force, i.e. the electromagnetic and the weak force unified in the Standard Model, and the strong force. Gravity is too weak to be considered at the electroweak scale. Heavier particles are unstable and need higher energies to take part in annihilation and decay processes; therefore they can be created in experiments. As an example, the second generation charm quark was seen for the first time in 1974 at the Stanford Linear Accelerator Center (SLAC). A full set of three generations of quarks and leptons has already been observed. Following the Big Bang theory, the heavier particles were also accessible at a very early time in the universe, when the energy density was a lot higher than now. As the universe expands the energy density falls, and the annihilation processes are frozen out. Metaphorically this can be compared to the phase transitions between ice (a state of low energy with less symmetry), water and gas (a state of high energy with high symmetry). Particle physics attempts to answer the question: what is the original symmetry of the Universe that led to the world we live in? In this sense the LHC experiments can be said to be a microscope looking back in time.
The first step before claiming a discovery is to measure deviations from the Standard Model. Then the search for specific signatures of new particles and new phenomena in the various proposed new theories may start. Do the Higgs boson(s), squarks, sleptons, neutralinos or charginos exist? The next step will be to measure the properties of the particles to confirm and settle a new physics model. To achieve high enough statistics for many processes, the LHC must run for several years at high luminosity.
The main goal of the LHC collider is to detect the Higgs particle, which is responsible for the electroweak symmetry breaking generating particle masses. This is the last missing piece of the Standard Model. As the Standard Model is not thought to be the fundamental theory valid at higher energy scales, there exists the possibility for a new discovery at the LHC. There are already many models on the market to explain observations in a more fundamental manner. Theories like Technicolor, small extra dimensions and Supersymmetry are some of the proposals. It is important to know the characteristics of each model beyond the Standard Model, to be able to rule out proposals or to hold some as more likely than others, once data is collected.
Supersymmetry is a symmetry that relates fermions and bosons. This means that all matter particles have scalar partners, and the gauge bosons and Higgs scalars have fermion partners. It is also required to introduce one additional Higgs doublet, so that there are two doublets, leading to five Higgs particles instead of the one in the Standard Model. In the simplest realization each Standard Model particle has one supersymmetric partner: a sparticle. The consequence is that the number of particles is doubled, and phenomenologically there is a whole new set of possible decay vertices. Supersymmetry is theoretically attractive because it provides elegant solutions to open questions in the Standard Model. The Minimal
Supersymmetric Standard Model (MSSM) is the name of the direct supersymmetric extension of the Standard Model, with a minimal particle content. No sparticles with the same mass as the Standard Model particles have been observed, so Supersymmetry cannot be an exact symmetry of nature. The MSSM needs to be spontaneously broken and unified with gravity, since gravity is expected to become important at high energies. This need for spontaneous symmetry breaking is met by a hidden sector where Supersymmetry is broken and then communicated to the MSSM scale by a messenger particle.
There have been searches for Supersymmetry since the 1980s, when it was first realized that Supersymmetry could protect the large hierarchy between the weak scale and the Grand Unified Theory (GUT) scale, where the electroweak and strong forces are unified. In the early 1980s the MAC and Mark II collaborations at SLAC (√s ∼ 29 GeV) and the MARK J, CELLO, TASSO and JADE collaborations at DESY (√s ∼ 47 GeV) already performed searches at the e+e− colliders.
In this thesis a certain realization of Supersymmetry, the mSUGRA model, where gravity is the messenger, is the object of investigation. The discovery potential for Supersymmetry in the Same Sign di-lepton and the one lepton channels is examined for six different mSUGRA points, for early LHC data corresponding to one inverse femtobarn. A good prediction of the Standard Model background mimicking this signature is essential for a discovery and will therefore be looked into. The Same Sign channel is expected to be a clean signal since the cross section for such a Standard Model final state is very low. Events with Same Sign leptons from Standard Model physics will often come from leptons that are not from the hard collision, fake leptons and charge misidentification. In the case of Supersymmetry the gluino is a Majorana particle, meaning the gluino is its own antiparticle. Gluino production may give rise to two decay chains leading to either Same Sign or Opposite Sign di-leptons.
Before real data exist, the analysis is based on Monte Carlo simulations. From the simulated data sets, the detector and physics performance of any theoretical model can be studied. Good preparation is important for understanding the data when it is ready. The simulated data is run through reconstruction algorithms just like real data, before it is analyzed. This thesis will be one of the last using Monte Carlo simulated data for predictions for the LHC experiments. From now on the exciting work of confirmation and validation will start.
The thesis is organized as follows:
In chapter 1 the theoretical framework of the Standard Model is briefly introduced. Then
some reasons for considering physics beyond the Standard Model are explained, and it is
noted why Supersymmetry is a good candidate. In particular the mSUGRA realization of
Supersymmetry is explained.
Chapter 2 gives an overview of the experimental arrangement for observing particles, the ATLAS detector at the LHC ring.
Chapter 3 introduces the computer tools and specifies the data sets that have been used in the thesis.
Chapter 4 considers the lepton objects. An optimized lepton isolation is searched for.
In chapter 5 a search for Supersymmetry in the Same Sign di-lepton channel and the one lepton channel is performed, and chapter 6 summarizes the results.
Chapter 1
Theoretical Framework
The basis of the theoretical framework in elementary particle physics is quantum mechanics, symmetries and special relativity. These three ingredients make up Quantum Field Theory, which is the application of relativistic quantum mechanics to dynamical systems of fields. Fields are preferred over a single particle description, since they handle annihilation and creation of particles; ordinary relativistic quantum mechanics violates causality.
Here it will be examined which symmetries are important in the theoretical framework, and what Supersymmetry adds in this context. This is a way to understand the motivation behind Supersymmetry. There will also be a brief discussion of the positive phenomenological effects that make Supersymmetry a popular model for experimentalists to study.
The paper “Quantum Theory of the Emission and Absorption of Radiation”, written by Dirac in 1927 [18], is regarded as the foundation of quantum field theory. He quantizes the classical fields so that the quanta can be taken to be the particles. Scattering of particles is described as field quanta interacting among themselves through the exchange of quanta of another field. For electromagnetism this interchange is realized by electrons interacting by exchanging photons. When a field is quantized the number of particles may change, and there will be annihilation and creation of particles. In the equations this is carried out by operators, much like the familiar ladder operators for the energy states in a harmonic oscillator.
1.1 The Standard Model
The Standard Model describes the elementary particles that the world is made up of. The particles are the fermions (leptons and quarks) and the bosons listed in figure 1.1, and their group representations are given in table 1.4, where the Standard Model particles are the fields without tilde. It is a consistent, renormalizable quantum field theory. When radiative corrections are included, it explains experimental data from a fraction of an electron volt up to the electroweak scale of a few hundred GeV. Electroweak precision measurements were done at LEP, colliding electrons on positrons from 1989 (at 91 GeV) to 2000 (at 209 GeV).
A good illustration of the success of the theory is the measurement of the anomalous magnetic moment (g − 2) of the electron. Here higher order corrections to the electromagnetic coupling constant must be included for the theoretical prediction to agree with experimental data. This is a low energy measurement where the theoretical calculation agrees with experiment to more than ten significant figures. However, the same quantity calculated for muons does not show the same astonishing agreement, since the measured value is off by ∼ 3.4 standard deviations from the Standard Model prediction. This is the type of measurement that suggests there might be something more to look for.
Figure 1.1: Particle content of the Standard Model. In addition there is the not yet discovered
Higgs boson.
All the fermions, which are the matter particles, are spin-1/2 particles obeying Fermi-Dirac statistics. This means that no two identical fermions can be in the same quantum state. Bosons are the force carriers in the theory, characterized by being integer spin particles following Bose-Einstein statistics. This is reflected in the commutation and anti-commutation relations for the boson and fermion fields, respectively. The photon (γ), W boson and Z boson are spin 1 particles. Gravity is not included in the Standard Model, and the hypothetical spin 2 graviton, a rank two tensor field proposed to mediate this force, has never been observed.
In addition to the particles in figure 1.1, the Higgs boson is included in the Standard Model. This is a spin 0 particle, the only scalar present in the model. The Higgs boson has not been observed by experiment, but it is an essential component of the Standard Model. All gauge bosons and leptons need to be massless for the Lagrangian to be SU(2)L × U(1)Y gauge invariant, which does not agree with experimental observations. The Higgs scalar field is introduced with a non zero ground state so that the electroweak symmetry can be spontaneously broken without destroying the SU(2)L × U(1)Y gauge invariance.
Lagrangian density

The Lagrangian density L = L(φ_r, φ_{r,α}) of the fields is constructed in such a way that the field equations follow by means of Hamilton's principle. It is a function of the fields φ_r and the partial derivatives of the fields, φ_{r,α} = ∂φ_r/∂x^α. The action integral is defined as

S(\Omega) = \int_\Omega d^4x\, \mathcal{L}

Minimizing the action, δS(Ω) = 0, leads to the Euler-Lagrange equation:

\frac{\partial \mathcal{L}}{\partial \phi_r} - \frac{\partial}{\partial x^\alpha}\left(\frac{\partial \mathcal{L}}{\partial \phi_{r,\alpha}}\right) = 0 \qquad (1.1)

In classical mechanics the Lagrangian is given as kinetic energy minus potential energy, L = T − V. When the Euler-Lagrange equation is applied to the Lagrangian, the equations of motion are obtained. In quantum field theory this corresponds to the equations of motion for the fields, describing the interactions. The Lagrangian density is here constructed in such a way that all possible interactions are included.
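As a standard illustration of equation 1.1 (an example added here for clarity, not taken from the thesis), applying the Euler-Lagrange equation to a free real scalar field yields the Klein-Gordon equation:

```latex
% Free real scalar field
\mathcal{L} = \tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi - \tfrac{1}{2}m^2\phi^2
% The two pieces of the Euler-Lagrange equation (1.1):
\frac{\partial\mathcal{L}}{\partial\phi} = -m^2\phi,
\qquad
\frac{\partial\mathcal{L}}{\partial(\partial_\alpha\phi)} = \partial^\alpha\phi
% Combining them gives the Klein-Gordon equation:
\partial_\alpha\partial^\alpha\phi + m^2\phi = 0
```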
1.1.1 Symmetries
In quantum field theory there is a set of symmetries that leads to conserved quantities. Examples of such symmetries are translational and rotational invariance, leading to conservation of linear and angular momentum. The Lagrangian density is constructed so that it is invariant under all symmetries of nature. When symmetry transformations are applied to the Lagrangian, it has the same functional form before and after the transformation. This leads to a conserved quantity, according to Noether's theorem [23].
The symmetries that the Standard Model Lagrangian is invariant under are the space-time symmetries (discrete and continuous) and the gauge symmetries (global and local). Continuous symmetries are related to important conservation laws, and discrete symmetries can tell something about which reactions are allowed. An important note about the discrete symmetries is that they are not independently conserved symmetries. The parity transformation P reflects all space coordinates so that a left-handed coordinate system becomes a right-handed one, Pψ(x, t) = ψ(−x, t); the charge conjugation C replaces a particle with its antiparticle, meaning that physical laws must be the same for particles and antiparticles; and time reversal T makes a process running backwards in time equivalent to reversing the direction of motion. It is possible to derive from causality of physical events that the combined CPT symmetry must hold. The construction of the Standard Model Lagrangian is based on the gauge group SU(3)c × SU(2)L × U(1)Y. Global gauge symmetries are transformations of a field with a parameter, Ta, which has the same value for all space-time points. The more advanced form of gauge symmetry is when the parameter depends on the space-time point, Ta(x). Invariance under local gauge symmetries forces the introduction of gauge bosons, which are the force carriers in nature. The space-time point x is a 4-vector. Here follows a brief discussion of the space-time symmetries, whose extension will include Supersymmetry, and an explanation of the gauge principle.
Proper Space Time Symmetries

When looking at a physical system from two different frames related by a Lorentz transformation, the two descriptions must be equivalent. Quantum-mechanically the two descriptions are related by a unitary transformation U:

|\Psi\rangle \to |\Psi'\rangle = U|\Psi\rangle \qquad (1.2)

The unitary operator can be written U = e^{i\epsilon_a Q_a}, where the operator Q_a is a hermitian matrix, Q_a = Q_a^\dagger, commuting with the Hamiltonian, [Q_a, H] = 0. Requiring the Lagrangian to be invariant under the above transformations means requiring the Lagrangian to have the same functional form before and after the transformation:

\mathcal{L}(\phi(x), \partial_\mu \phi(x)) = \mathcal{L}(\phi'(x'), \partial_\mu \phi'(x')) \qquad (1.3)

The form of the Lagrangian is the same when expressed in the original, x, and the transformed, x′, space-time points.

The generators Q_a of these symmetry transformations are part of the Poincare group. The group has ten independent generators, M_{μν} and P_μ, where M_{μν} is an antisymmetric tensor with six conserved quantities, orbital and intrinsic spin angular momentum, and P_μ generates space-time displacements with eigenvalues that are conserved 4-momenta. This is the full symmetry of special relativity: translations (displacements between space-time points), rotations and boosts. The last two are called the Lorentz symmetries, which are linear transformations that leave the origin fixed.
According to the Coleman-Mandula theorem [17] there are no further conserved operators with non-trivial Lorentz character than the ten operators in the Poincare group. As stated in their paper: “We prove a new theorem on the impossibility of combining space-time and internal symmetries in any but a trivial way.” As an illustrative example, assume an additional conserved symmetric tensor charge Q_{μν} [8]. If the operator acts on a particle state with momentum p, |p⟩, and the charge is conserved, an unphysical situation occurs. In equation 1.4 the tensor is written in the general form αp_μ p_ν + βg_{μν}, where α, β are constants, p_μ is the 4-momentum and g_{μν} is the metric tensor, given by specifying the scalar products of each pair of basis vectors. When the operator acts on a two particle state, acting on one particle at a time, equation 1.5 is obtained.

Q_{\mu\nu}|p\rangle = (\alpha p_\mu p_\nu + \beta g_{\mu\nu})|p\rangle \qquad (1.4)

Q_{\mu\nu}|p^1 p^2\rangle = \left(\alpha(p^1_\mu p^1_\nu + p^2_\mu p^2_\nu) + 2\beta g_{\mu\nu}\right)|p^1 p^2\rangle \qquad (1.5)

In a scattering process 1 + 2 → 3 + 4 there is conservation of 4-momentum (eq. 1.7) and, by assumption, conservation of the symmetric tensor charge, i.e. conservation of the eigenvalues in equation 1.5, leading to equation 1.6.

p^1_\mu p^1_\nu + p^2_\mu p^2_\nu = p^3_\mu p^3_\nu + p^4_\mu p^4_\nu \qquad (1.6)

p^1_\mu + p^2_\mu = p^3_\mu + p^4_\mu \qquad (1.7)
There are only the trivial solutions p^1_μ = p^3_μ, p^2_μ = p^4_μ or p^2_μ = p^3_μ, p^1_μ = p^4_μ to this set of equations: only forward and backward scattering is allowed by the conserved symmetric tensor charge Q_{μν}, which is indeed unphysical. In section 1.3.1 it will briefly be discussed how the Coleman-Mandula theorem is evaded by Supersymmetry.
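A quick numerical illustration of this argument (an added sketch, not from the thesis): for massless 2 → 2 scattering in the center of mass frame, 4-momentum is conserved for every scattering angle θ, but the tensor Σ p_μ p_ν is only conserved for θ = 0 or π.

```python
import numpy as np

def tensor_charge(*momenta):
    # Sum of outer products p_mu p_nu over the given 4-momenta
    return sum(np.outer(p, p) for p in momenta)

def two_to_two(theta, E=1.0):
    # Massless 2 -> 2 scattering in the CM frame,
    # outgoing pair rotated by the angle theta
    p1 = np.array([E, 0.0, 0.0,  E])
    p2 = np.array([E, 0.0, 0.0, -E])
    p3 = np.array([E,  E * np.sin(theta), 0.0,  E * np.cos(theta)])
    p4 = np.array([E, -E * np.sin(theta), 0.0, -E * np.cos(theta)])
    return (p1, p2), (p3, p4)

for theta, expect in ((0.0, True), (np.pi / 3, False), (np.pi, True)):
    (p1, p2), (p3, p4) = two_to_two(theta)
    # 4-momentum (eq. 1.7) is conserved for every angle ...
    assert np.allclose(p1 + p2, p3 + p4)
    # ... but the tensor charge (eq. 1.6) only for forward/backward scattering
    conserved = np.allclose(tensor_charge(p1, p2), tensor_charge(p3, p4))
    assert conserved == expect
```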
Gauge symmetries

Local gauge symmetries are continuous symmetries generated by generators that are space-time point dependent. The Standard Model is a structure based on the gauge group SU(3)c ⊗ SU(2)L ⊗ U(1)Y. Here U(n) is the unitary group of all n×n unitary matrices; the special unitary group SU(n) is the subgroup of U(n) containing all n×n unitary matrices with determinant 1. The subscript “L” means that only left-handed fields, e.g. (νe, e)L, feel the weak interactions. (For antiparticles it is the right-handed component that feels the weak interaction.) The c denotes color and Y denotes hypercharge. There is no deep understanding of why exactly this gauge group gives a good theoretical description of nature.
Invariance of the matter Lagrangian under local gauge symmetries makes it necessary to introduce new gauge fields. These force fields are included to compensate for extra terms in the Lagrangian after the gauge transformation. The extra terms come from the derivative acting on the space-time dependent generator of the transformed matter field. The new spin 1 boson fields are coupled to the matter field through the replacement of the ordinary derivative with the covariant derivative, ∂_μ → D_μ, see table 1.1.
Chirality

The chirality or handedness of a particle is an important quantity. Parity is not conserved in weak processes, and this is explained by only letting the left handed component of a particle feel the weak interaction. The left and right handed projections are obtained with the chirality operators P_L and P_R:

\psi_{L(R)} = P_{L(R)}\psi = \tfrac{1}{2}\left(1 \mp \gamma^5\right)\psi \qquad (1.8)

where γ^5 is given in equation A.8. For massless fermions helicity and handedness are the same, while for massive particles helicity is not a Lorentz invariant quantity; chirality, on the other hand, is defined to be Lorentz invariant. Helicity tells whether spin and momentum have the same direction, and it is clear that this will depend on the reference frame. Chirality labels the two possible states of a relativistic fermion, either right handed (R) or left handed (L).
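The projector algebra of equation 1.8 can be checked numerically; the sketch below (added here as an illustration) builds γ^5 in the Dirac representation and verifies that P_L and P_R are complete, idempotent and orthogonal.

```python
import numpy as np

I2 = np.eye(2)
# gamma^5 in the Dirac representation: off-diagonal 2x2 identity blocks
gamma5 = np.block([[np.zeros((2, 2)), I2],
                   [I2, np.zeros((2, 2))]])

# Chirality projectors, equation (1.8)
PL = 0.5 * (np.eye(4) - gamma5)
PR = 0.5 * (np.eye(4) + gamma5)

assert np.allclose(gamma5 @ gamma5, np.eye(4))   # (gamma^5)^2 = 1
assert np.allclose(PL + PR, np.eye(4))           # complete:   P_L + P_R = 1
assert np.allclose(PL @ PL, PL)                  # idempotent: P_L^2 = P_L
assert np.allclose(PR @ PR, PR)                  # idempotent: P_R^2 = P_R
assert np.allclose(PL @ PR, np.zeros((4, 4)))    # orthogonal: P_L P_R = 0
```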
1.1.2 Creating the Lagrangian From the Gauge Principle

The Dirac Lagrangian describing a free spin-1/2 particle is written

\mathcal{L}_0 = \bar{\Psi}(x)(i\gamma^\mu \partial_\mu - m)\Psi(x) \qquad (1.9)

The field transforms under the unitary transformation as

\Psi(x) \to \Psi'(x) = U\Psi(x) \qquad (1.10)
= e^{i\omega^a(x) T_a}\,\Psi(x) \qquad (1.11)
\approx (1 + i\omega^a(x) T_a)\Psi(x) = \Psi(x) + \delta\Psi(x) \qquad (1.12)
Procedure to couple matter fields to spin-1 fields:

1. Invariance for a symmetry transformation: δψ(x) = iω^a(x) T_a ψ
2. Commutation relation for the generators: [T_a, T_b] = i f^c_{ab} T_c
3. Associate the generators T_a with a spin-1 particle A^a_μ
4. Covariant derivative: ∂_μ → D_μ = ∂_μ − i A^a_μ T_a
5. Add the gauge boson Lagrangian: L_g = −(1/4) F^a_{μν} F^{aμν}, with F^a_{μν} = ∂_μ A^a_ν − ∂_ν A^a_μ + f^a_{bc} A^b_μ A^c_ν
6. The gauge field transforms like: δA^a_μ(x) = ∂_μ ω^a(x) − f^a_{bc} ω^b(x) A^c_μ(x)

Table 1.1: More details of the procedure can be found in [15].
The unitary transformations U for the different groups are listed in table 1.2, and the corresponding multiplets are

U(1)_Y singlets: u_{L,R}, d_{L,R}, e_{L,R}
SU(2)_L doublets: (u, d)_L, (ν_e, e)_L
SU(3)_c triplets: (u_r, u_b, u_g), (d_r, d_b, d_g)

It becomes clear that the U(1) transformations must be rotations of singlet fields, which are the three families of quarks and the leptons. These fields carry charge, and the physical interaction vertex consists of one photon and two leptons or quarks of the same type. The SU(2) transformation is a rotation of an up type state to a down type state. It is only the left handed components of the fields that feel the weak interaction, and so only the left handed components are organized in doublets. The SU(3) rotates the quarks from one color state to another. These interactions are the strong interactions, and the fields are written as triplets.
Gauge group | Spin-1 field A^a_μ | Generator T_a | Unitary transformation matrix U
U(1)_Y      | B_μ                | Y             | e^{i g_1 Y ω(x)}
SU(2)_L     | 3 W^a_μ            | τ_a/2         | e^{i g_2 τ_a ω^a(x)/2}, a = 1..3
SU(3)_c     | 8 G^α_μ            | λ_α/2         | e^{i g_3 λ_α ω^α(x)/2}, α = 1..8

Table 1.2: Gauge groups of the Standard Model
Here λ_α are the eight 3×3 Gell-Mann matrices shown in appendix A.2, and τ_a are the three 2×2 Pauli matrices shown in appendix A.1. These matrices do not commute among themselves, and this is the reason why we get gluon and W, Z boson self-interaction terms; see point 2 in table 1.1. We say that they carry color and weak charge, respectively. Photons couple to charge, but do not carry charge themselves, so there is no photon self-interaction. Seen from the equations, no photon self-interaction term occurs in the Lagrangian, since the generator of U(1) is a scalar function, and this scalar function commutes with itself. In table 1.1 the f^c_{ab} are the structure constants of the respective groups, numbers characteristic of the algebra involved.
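This non-commutativity is easy to verify directly; the sketch below (an added illustration) checks that the SU(2) generators T_a = τ_a/2 satisfy [T_a, T_b] = i ε_abc T_c, with the Levi-Civita symbol ε_abc playing the role of the structure constants f^c_{ab}.

```python
import numpy as np

# Pauli matrices (appendix A.1)
tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
T = [t / 2 for t in tau]  # SU(2) generators

def eps(a, b, c):
    # Levi-Civita symbol for indices in {0, 1, 2}: the SU(2) structure constants
    return ((a - b) * (b - c) * (c - a)) / 2

for a in range(3):
    for b in range(3):
        commutator = T[a] @ T[b] - T[b] @ T[a]
        rhs = sum(1j * eps(a, b, c) * T[c] for c in range(3))
        assert np.allclose(commutator, rhs)  # [T_a, T_b] = i f_abc T_c
```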
The G^α_μ are the eight massless gluons. W^a_μ and B_μ are related to the physical bosons of the weak interaction, W^±, Z, and the photon of QED. Equation 1.13 shows how the mass eigenstates are written as linear combinations of the gauge fields:

\begin{pmatrix} A_\mu \\ Z_\mu \end{pmatrix} =
\begin{pmatrix} \cos\theta_W & \sin\theta_W \\ -\sin\theta_W & \cos\theta_W \end{pmatrix}
\begin{pmatrix} B_\mu \\ W^3_\mu \end{pmatrix} \qquad (1.13)
The Weinberg angle θ_W is determined by how the Noether currents of the different groups are related to each other (U(1)_em, U(1)_Y and the third component of the conserved current from SU(2)_L):

J^µ_Y(x) = s^µ(x)/e − J^µ_3(x)        (1.14)
By combining equations 1.13 and 1.14 and requiring that the gauge field A_µ(x) is the electromagnetic field coupling only to the electric charges, we get g_2 sin θ_W = g_1 cos θ_W = e, where e is the electromagnetic coupling constant, g_1 is the coupling constant for hypercharge and g_2 is the coupling constant for the weak interactions. We can rewrite the relation between the coupling constants as

e = g_1 g_2 / √(g_1² + g_2²)        (1.15)
From this we write an SU(2)_L ⊗ U(1)_Y gauge invariant interaction. Electromagnetism and the weak interaction are unified into one gauge theory. From experiment we have sin²θ_W = 0.2326 ± 0.0008, so the weak and electromagnetic interactions cannot be decoupled; decoupling would mean sin θ_W = 0.
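The relations above can be checked numerically. A minimal sketch (with illustrative, not measured, values for g_1 and g_2) verifies that g_2 sin θ_W = g_1 cos θ_W coincides with equation 1.15:

```python
import math

# Illustrative (assumed) values for the hypercharge and weak couplings;
# the mixing angle is defined through tan(theta_W) = g1/g2.
g1, g2 = 0.36, 0.65
theta_w = math.atan(g1 / g2)

e_from_angle = g2 * math.sin(theta_w)                # g2 * sin(theta_W)
e_from_formula = g1 * g2 / math.sqrt(g1**2 + g2**2)  # eq. 1.15

# Both expressions give the same electromagnetic coupling e, and
# sin^2(theta_W) = g1^2/(g1^2 + g2^2) comes out near the measured 0.23.
print(e_from_angle, e_from_formula, math.sin(theta_w) ** 2)
```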
The Simplest Example: Quantum Electrodynamics
Here is a small example to illustrate how the rules in table 1.1 are used. It is straightforward to generalize from the U(1) example to SU(2) or SU(3). In the QED case the Dirac field undergoes the transformation ψ(x) → ψ′(x) = e^{−iqf(x)}ψ(x), which is called a local phase transformation since the phase factor qf(x) depends on the space-time point x. The free field Lagrangian L_0 given in equation 1.9 is not invariant under this transformation,
L_0 → L′_0 = L_0 + q ψ̄(x)γ^µ ψ(x) ∂_µ f(x)        (1.16)
because the derivative ∂_µ(ψ(x)e^{−iqf(x)}) gives two terms. The trick, called minimal substitution, is to add a term to the derivative to obtain the covariant derivative, ∂_µ → D_µ = [∂_µ + iqA_µ(x)]. With the covariant derivative the Lagrangian will then consist of the free field Lagrangian plus an interaction term L_g:

L = L_0 − q ψ̄(x)γ^µ ψ(x)A_µ(x)
  = L_0 + L_g
where A_µ(x) is defined to transform in such a manner that the unwanted term in equation 1.16 is canceled out. That means A_µ(x) → A′_µ(x) = A_µ(x) + δA_µ(x) = A_µ(x) + ∂_µ f(x). The covariant derivative undergoes the same transformation as the field itself, D_µψ(x) → e^{−iqf(x)} D_µψ(x). There are no self-interaction terms for the photon (A_µ(x)) because the generator f(x) commutes with itself. The photon does not carry charge, and it only couples to particles carrying electric charge. For SU(2) gauge fields, where the generators do not commute, the variation of the field gets extra terms, δW^µ ∼ −∂^µ ω + ig[ω, W^µ], leading to self-interaction vertices [23].
1.1.3 Higgs Mechanism
It is not possible to put mass terms into the Lagrangian by hand and at the same time keep the electroweak gauge invariance. Either the world is gauge symmetric or the particles have mass, but the Higgs mechanism provides a solution that keeps both features. Masses for the fields appear by considering spontaneously broken symmetries. Within the electroweak framework an additional weak isospin doublet with non-zero vacuum expectation value is introduced,
Φ(x) = (φ_a(x), φ_b(x))^T        (1.17)
The fields φ_{a,b}(x) are complex spin-0 fields transforming according to the unitary transformations of SU(2) and U(1) given in table 1.2. When the vacuum expectation value of the field is non-zero, ⟨0|Φ(x)|0⟩ ≠ 0, the Higgs potential
V(Φ) = µ²Φ†Φ + λ(Φ†Φ)²        (1.18)
must have µ² < 0, so that the potential energy surface has a circle of minima given by

Φ₀†Φ₀ = |φ⁰_a|² + |φ⁰_b|² = −µ²/(2λ)        (1.19)
Since the state of lowest energy must be related to the minima of the potential, λ > 0 is required so that the potential is bounded from below. The state of lowest energy is not unique, since any phase angle θ giving the direction in the complex φ-plane provides the same potential, Φ₀ = √(−µ²/2λ) e^{iθ}. Choosing one particular ground state gives spontaneous symmetry breaking. The electroweak symmetry is broken, but the electromagnetic gauge transformation remains exact, ensuring that the photon is massless. Leptons get masses through Yukawa terms in the Lagrangian, that is, terms connecting the matter fields and the Higgs field. From the Higgs mechanism four real fields are added to the field content. One is taken to be the Higgs particle, and three are transformed away in the unitary gauge. The degrees of freedom of these three fields are eaten by the W^± and Z bosons, so they acquire mass. Two examples of masses:
m_W = (1/2) v g,    m_l = (1/√2) v λ_l        (1.20)
where λ_l is a dimensionless Yukawa coupling constant and g is the SU(2) coupling constant. A value for v can be obtained experimentally. Starting from a complex scalar field and massless real vector fields, the Higgs mechanism provides a real scalar field and massive real vector fields.
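As a rough numerical illustration of equation 1.20, the following sketch uses v = 246 GeV and an approximate value g ≈ 0.65 for the SU(2) coupling (both assumed here, not taken from the text):

```python
import math

v = 246.0           # Higgs vacuum expectation value, GeV
g = 0.65            # approximate SU(2) coupling (assumed value)
m_e = 0.000511      # electron mass, GeV

m_w = 0.5 * v * g                 # m_W = v*g/2, comes out near 80 GeV
lam_e = math.sqrt(2) * m_e / v    # Yukawa from m_l = v*lambda_l/sqrt(2)

print(m_w, lam_e)   # the tiny Yukawa shows why lepton masses are so small
```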
1.2 Why Beyond the Standard Model?
An obvious question to ask is: if there are no real observed problems with the Standard Model, why do we need to go beyond it?
Many parameters are put into the Standard Model ad hoc. Particle masses and mixing patterns are not understood, nor is the choice of gauge group and particle representations.
1.2.1 The Hierarchy Problem
In the Standard Model the most serious problem of the theory is the "hierarchy problem". This is not a problem with the Standard Model itself, because it is a renormalizable theory: when we go from tree level to include loop corrections we always obtain finite results. Even if we extend the virtual momenta in the loop integrals to infinity, we still have a well-defined theory [30]. We calculate loop integrals up to a cut-off parameter Λ, ∫^Λ d⁴k f(k). The problem is that physically we do not think that the cut-off parameter should go to infinity. This is motivated by the fact that we expect new physics at higher energies: at the Planck scale gravity should become important, and gravity is not included in the Standard Model.
There is a factor of 10^17 between the electroweak scale and the Planck scale. When quantum gravity becomes important at the Planck scale, there must be some new physics. The scale of the electroweak theory is set by the vacuum expectation value of the Higgs field, v. All masses in the theory are set by this parameter. The Planck scale is given by the Planck mass, M_p.

v ≈ 246 GeV        (1.21)

M_p = (G_N)^{−1/2} ≃ 1.2 × 10^19 GeV        (1.22)

The symmetry-breaking vacuum expectation value is related to the W boson mass, M_W = g_2 v/2, and M_W can be measured. The Planck scale we get from measuring the strength of the gravitational force, G_N. When there exists a scale for new physics that differs by several orders of magnitude from the scale set by v, problems occur when we go beyond tree level.
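The two scales can be reproduced from the measured coupling strengths; a minimal sketch, using standard values of G_F and G_N in natural units (assumed inputs, not quoted from the text):

```python
import math

G_F = 1.1664e-5    # Fermi constant, GeV^-2
G_N = 6.7086e-39   # Newton constant in natural units, GeV^-2

v = (math.sqrt(2) * G_F) ** -0.5   # electroweak scale, ~246 GeV
M_p = G_N ** -0.5                  # Planck scale, ~1.2e19 GeV

print(v, M_p, M_p / v)   # the ratio exhibits the ~10^17 hierarchy
```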
There is always a cut-off dependence in mass terms of renormalizable theories. For the matter fields it is a mild logarithmic dependence, and this is not viewed as a problem. The problem shows up when scalar fields are included in the Lagrangian: here we get a quadratic cut-off dependence, δm² ∼ Λ². Loop contributions from fermions and gauge bosons give contributions of opposite sign [30]. This is an important observation that we will make use of when arguing for Supersymmetry. The total loop correction is written in [8] in the form

(λ − g_f²) Λ² φ†φ + λ(M_H² − m_f²) ln Λ        (1.23)

We see that the fermion-antifermion loop contribution, g_f², has the opposite sign from the boson self-interaction contribution, λ.
Also from the theoretical point of view there are a few more points. We would always like to justify all assumptions in our theory. We do not have a motivation for the choice of gauge group, and we do not have an understanding of the mixing parameters. The Higgs scalar field is put in by hand, and we have no understanding of why the squared mass parameter must be negative.
1.2.2
Experimentally Motivated Questions
At Super-Kamiokande neutrino oscillations have been observed, but neutrinos are massless in the Standard Model. Oscillation means that a neutrino created in one of the lepton families can later be observed in another family state. Such a mixing between the states is only allowed with non-zero neutrino masses. Gravity is not included in the model, even though gravity
certainly exists. Cosmological observations suggest that most of the universe consists of dark matter and dark energy. Dark matter does not interact electromagnetically, since it has not been seen directly, but only observed through its gravitational effects on other objects. Observations of redshifts in supernovae and measurements of the cosmic microwave background suggest that the expansion of the universe is accelerating; to explain this, dark energy is needed. The Standard Model has no good candidates to explain these phenomena [30].
Type        | Percent of Universe
SM matter   | 5%
Dark matter | 25%
Dark energy | 70%

Table 1.3: Energy composition of the universe
It is clear that the Standard Model is not perfect, but it is the best we have, still remembering that it has indeed had great success in explaining and predicting experimental data.
1.3
Supersymmetry
Supersymmetry is yet another proposed symmetry of nature: a symmetry between fermions and bosons. Having a symmetry between two such fundamentally different classes of particles is in itself very interesting. It can provide an understanding of the origin of the difference between these particles, which are distinguished by their spin.
1.3.1
Additional Conserved Operator
In section 1.1.1 it was mentioned that the Coleman-Mandula theorem states that no larger space-time symmetry is possible, and hence no new conserved operators that are not Lorentz scalars (operators that transform non-trivially under Lorentz transformations) can exist. All these operators were of such a character that when they act on a state of spin J the product is also a state of spin J. However, anti-commuting spinor charges were not taken into account. The generators of Supersymmetry are exactly such anti-commuting spinors Q_a. This extension of the allowed operators generalizes the special theory of relativity, and the new fermionic freedoms reveal themselves as superpartner particles. The new operator Q_a acting on a state of spin J must return a state of spin J ± 1/2 (in the simplest version). Mass and all other quantum numbers stay the same under this transformation.
Q_a |J⟩ = |J ± 1/2⟩        (1.24)
Hamilton’s equation 1.25 tells that the symmetry operator must commute with the Hamiltonian of the system.
i~
dQa
= [Qa , H] = 0
dt
(1.25)
Supersymmetry is the maximal extension of the Lorentz group, and it has fermionic generators that satisfy the algebra in equation 1.26 [8].

{Q_a, Q†_b} = 2σ^µ_{ab} P_µ        (1.26)

The anti-commutation relations are proportional to the momentum operator P^µ, and the σ^µ are spin matrices defined in appendix A.3, equation A.7.
From this relation it becomes clear that the Supersymmetry generators are related to the generators of space-time displacements. Commutators of the energy-momentum generators (P) are zero, and this is also the case for the commutator of Q and P.
From this, an intuition of what Supersymmetry really is can be deduced. Since the anti-commutator is proportional to P, performing two Supersymmetry transformations has the same effect as one energy-momentum transformation on a state |i⟩: {Q_a, Q_b}|i⟩ = (Q_a Q_b + Q_b Q_a)|i⟩ ∼ P|i⟩. This means that Q_a itself is some kind of square root of a derivative, since P_µ ∼ ∂_µ. As the Dirac equation is viewed as the square root of the Klein-Gordon equation, this is like doing one better than the Dirac equation. New fermionic freedoms are achieved, which is reminiscent of how taking the square root of −1 adds the complex axis to the number plane. Supersymmetry enlarges space-time to a more general super space-time.
1.3.2
The Minimal Supersymmetric Standard Model, MSSM
The MSSM is a direct supersymmetrization of the Standard Model. The gauge symmetry group is the same as in the Standard Model, and each particle gets one Supersymmetric partner, providing a minimal particle content. However, two Higgs doublet fields have to be introduced instead of the one in the Standard Model. One of the Higgs doublets carries weak hypercharge Y = 1 and the other carries weak hypercharge Y = −1. Unlike in the Standard Model, the doublet with Y = 1 can only give mass to up-type quarks, because a scalar component of the right-chiral superfield is not allowed [30].
R-parity And New Interactions
Supersymmetric interactions are exactly the same as the interactions allowed in the Standard Model, and are obtained by replacing two Standard Model particles with their Supersymmetric partners. There must always be an even number of sparticles in each vertex when R-parity conservation is assumed,
R = (−1)^{3(B−L)+2S}        (1.27)
where B is the baryon number, L is the lepton number and S is the spin. It is easy to check that Standard Model particles have R = +1 and Supersymmetric particles have R = −1. R-parity conservation is assumed in order to conserve B and L. In the Standard Model this is done automatically by requiring gauge invariance, while this is not the case in a general Supersymmetric model. With R-parity also comes the necessity of a Lightest Supersymmetric Particle (LSP) that is stable and non-interacting: a sparticle must always decay into a lighter sparticle and a particle.
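Equation 1.27 is easy to check directly; a small sketch:

```python
def r_parity(B, L, S):
    """R = (-1)^(3(B-L)+2S), eq. 1.27; the spin S may be half-integer."""
    return (-1) ** round(3 * (B - L) + 2 * S)

# Standard Model particles come out with R = +1:
assert r_parity(0, 1, 0.5) == +1     # electron (B=0, L=1, spin 1/2)
assert r_parity(1/3, 0, 0.5) == +1   # quark
assert r_parity(0, 0, 1.0) == +1     # photon, gluon, W, Z
# Superpartners come out with R = -1:
assert r_parity(0, 1, 0.0) == -1     # selectron (spin 0)
assert r_parity(0, 0, 0.5) == -1     # neutralino, gluino (spin 1/2)
```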
Particle Content
Table 1.4 shows the particle content of the MSSM. The subscripts L and R on the scalar fields correspond to the chirality of the corresponding Standard Model particle, since the spin-0 sparticles cannot have chirality. The superpotential is a function of left-chiral superfields only.
Like in the Standard Model, the mass eigenstates that can be observed in experiment are not the same as the gauge eigenstates. Particles with the same quantum numbers mix when symmetries are broken. The gauginos and Higgsinos mix to form two charginos χ̃^±_{1,2} and four neutralinos χ̃⁰_{1,2,3,4}. The two lightest neutralinos have a dominant gaugino component, and the two heavier states tend to be more dominated by the Higgsino component [12]. The scalar partners of the fermions have different masses for f̃_L and f̃_R, and for the third generation the mixing between the L and R states becomes important. The mass eigenstates of the third generation are therefore called t̃_{1,2}, b̃_{1,2} and τ̃_{1,2}.
Name            | Spin 0                                      | Spin 1/2                                        | Spin 1    | SU(3) × SU(2) × U(1)
quark, squark   | Q̃ = (ũ_L, d̃_L); ũ*_R; d̃*_R              | Q = (u_L, d_L); ū_R; d̄_R                       |           | (3, 2, 1/6); (3̄, 1, −2/3); (3̄, 1, 1/3)
lepton, slepton | L̃ = (ν̃, ẽ_L); ẽ*_R                      | L = (ν, e_L); ē_R                               |           | (1, 2, −1/2); (1, 1, 1)
Higgs, Higgsino | H_u = (H_u^+, H_u^0); H_d = (H_d^0, H_d^−)  | H̃_u = (H̃_u^+, H̃_u^0); H̃_d = (H̃_d^0, H̃_d^−) |           | (1, 2, 1/2); (1, 2, −1/2)
gluon, gluino   |                                             | g̃                                              | g         | (8, 1, 0)
W's, wino       |                                             | W̃^±, W̃^0                                      | W^±, W^0  | (1, 3, 0)
B, bino         |                                             | B̃                                              | B         | (1, 1, 0)

Table 1.4: Particle content of the MSSM.
1.4
Supersymmetry Breaking
Supersymmetry is nothing more than a nice mathematical model unless experimental evidence is observed. So far we have not seen any sign of Supersymmetry at any collider. We would expect to have seen a sparticle with the same properties as the electron, only with different spin, but that has never been seen. This means that Supersymmetry cannot be an exact symmetry, but rather a broken symmetry, if it exists in nature.
The MSSM has 105 free parameters, which makes it impossible to do phenomenology. Further assumptions have to be imposed, and a problem is that there is no theoretical principle to determine the soft Supersymmetry breaking parameters in the MSSM. We do not know what underlying dynamics are responsible for spontaneous Supersymmetry breaking. The MSSM is taken to be the low energy approximation to something more fundamental. Experimental data are needed to reveal what the underlying dynamics is; meanwhile different possibilities are studied.
Supersymmetric models based on global Supersymmetry run into trouble with the tree-level mass sum rule when the Supersymmetry is broken at the weak scale. In models where Supersymmetry is unbroken, there is a degeneracy of fermion and boson masses and the fermionic and bosonic degrees of freedom are the same. This implies that the supertrace must vanish [30]. But the trace is zero even if the Supersymmetry is spontaneously broken, STr M² = 0. Phenomenologically this gives an inexplicable situation where scalars are lighter than fermions and so would have been seen in experiment.
The mass sum rule problem is avoided if Supersymmetry is taken to be a local symmetry, or a global Supersymmetry where sparticles only get masses at loop level. In realistic Supersymmetric models a hidden sector is assumed. Supersymmetry is then broken in the hidden sector, and the breaking is communicated to the MSSM sector through some messenger interaction that couples to both. The most common models are gravity mediated (SUGRA), gauge mediated (GMSB) and anomaly mediated (AMSB) models, which are characterized by the agent that communicates the breaking effects to the observable world. In GMSB the MSSM particles get their masses through the usual SU(3) × SU(2) × U(1) Standard Model gauge interactions at a messenger scale M_m ≪ M_p, and the gravitino (G̃) is the LSP. SUGRA is the model considered in this thesis and will be explained briefly below. AMSB takes into account an additional one-loop contribution to the Supersymmetry breaking parameters; this correction is most important in models where there is no Standard Model gauge singlet superfield that can acquire a vacuum expectation value at the Planck scale.
1.4.1
Minimal Gravity Mediated Supersymmetry Breaking, mSUGRA
A requirement in gravity mediated Supersymmetry is that the symmetry is local. The hidden sector is introduced, and gravity is the messenger interaction that couples to both the hidden sector and the MSSM sector. In the minimal version of SUGRA (mSUGRA) all scalars have a common mass m_0 and all gauginos have a common mass m_1/2 at the grand unification (GUT) scale. The universality hypothesis is assumed at a scale Q = M_GUT ∼ 2 × 10^16 GeV:
• g1 = g2 = g3 ≡ gGU T
• M1 = M2 = M3 ≡ m1/2
• m2Qi = m2Ui = m2Di = m2Li = m2Ei = m2Hu = m2Hd ≡ m20
• At = Ab = Aτ ≡ A0
It is assumed that the MSSM is valid between the weak scale and the GUT scale. The m in mSUGRA means minimal and refers to the choice of metric: a flat Kähler metric makes it possible to get common scalar masses. Common gaugino masses can come from assuming grand unification of the gauge interactions, as shown in the GUT relation for the gaugino masses, equation 1.28. Since α_{1,2,3} must have a common value at grand unification, so do the gaugino masses M_{1,2,3}:
α_1/M_1 = α_2/M_2 = α_3/M_3        (1.28)
Here the coupling α_1 is related to the weak hypercharge coupling α′ by α_1 = (5/3)α′, while α_2 and α_3 are the couplings of SU(2) and SU(3).
The mSUGRA parameter space is given by five parameters
m0 , m1/2 , A0 , tan β, sign(µ)
(1.29)
which makes it much more manageable compared to the 105 parameters we started out with. m_0 represents the mass of all scalar particles at the GUT scale, m_1/2 is the mass of all gauginos at the GUT scale, A_0 is the common scale of the trilinear couplings at the GUT scale, tan β = v_2/v_1 is the ratio of the vacuum expectation values of the two Higgs fields at the electroweak scale, and µ is the Higgsino mass term. Radiative electroweak symmetry breaking fixes the value of µ², so only the sign of this parameter is free.
1.5 Phenomenological Effects
1.5.1 Quadratic corrections
One of the single most important features of Supersymmetric models is that they present a natural solution to the quadratic divergences associated with the scalar field. There is a problem with the radiative corrections to the scalar boson mass δm²_{H_SM}. We will have loop contributions from
1.5.2
Cosmology
It is also emphasized that Supersymmetry provides a candidate for cold dark matter, CDM. As we have learned from experiments such as WMAP, it is clear that the Universe consists to a large part of unknown matter and energy. Supersymmetric theories where conservation of R-parity is assumed have a lightest Supersymmetric particle, LSP, that is weakly interacting. Which particle is the LSP is strongly model dependent. If we wish to provide a candidate for the CDM, we need the properties of our candidate to agree with cosmological data. This constrains the Supersymmetry parameter space.
In this light, the LSP cannot carry color or electric charge; if it did, we would already have observed the particle. Many experiments also exist to detect a weakly interacting massive particle, WIMP, both direct and indirect. There must be a gas of WIMPs in the Universe. In the direct experiments, the detectors, being stationary on earth, travel through the gas of WIMPs. These will collide with the nuclei in the detector material and deposit a small amount of energy (of order keV). This small change in energy can be detected by methods such as ionization or by observing phase transitions in superconducting materials. No particles have yet been observed this way. This non-observation provides important information about which models we can allow. The sneutrino ν̃ is now excluded as LSP: a sneutrino with mass greater than 25 GeV would have been detected by these direct experiments, while m_ν̃ < 25 GeV is excluded by precise measurements of the Z width. The most common candidates for the LSP in the models are the lightest neutralino χ̃⁰ or the gravitino. It should be possible to see the neutralino in the direct experiments, while the gravitino would be almost impossible to see, since the cross section for the gravitino to interact is too small.
1.5.3
Coupling constants
This seems to be the most popular argument to include when arguing for Supersymmetry. When the coupling constants are run with the Supersymmetric renormalization group equations, RGE, the slopes of the three different couplings seem to meet at a common scale (∼ 2 × 10^16 GeV). This is indeed not the case when the SM RGE are applied. The Callan-Symanzik beta function tells how the coupling constants run, β(g) = Q ∂g/∂Q. At one-loop level we have β_i = g_i³ b_i /(16π²), where

SM:   (b_1, b_2, b_3) = (0, −22/3, −11) + n_g (4/3, 4/3, 4/3) + n_H (1/10, 1/6, 0)        (1.30)

MSSM: (b_1, b_2, b_3) = (0, −6, −9) + n_g (2, 2, 2) + n_H (3/10, 1/2, 0)        (1.31)
Here n_g is the number of fermion generations, n_H is the number of complex scalar (Higgs) doublets, and g_1 is the rescaled quantity g_1 = √(5/3) g′. For values of n_g and n_H such that β_i is negative, we have asymptotic freedom. For the MSSM case we run with the SM evolution of the couplings until the expected scale where new particles can enter the loop contributions. Following the naturalness argument, we assume that this happens at 1 TeV. So from 1 TeV there is a bend in the plots, where the evolution is run taking gauginos, matter scalars, Higgs scalars and higgsinos into account, see figures 1.2 and 1.3.
Figure 1.2: Running of the coupling constants in the Standard Model
Figure 1.3: Running of the coupling constants with Supersymmetry
We use the experimental values measured at Q_0 = M_Z to run the couplings to an arbitrary scale Q. The fact that the couplings meet at this common scale for a certain choice of parameters triggers the imagination for a grand unified theory, GUT.
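The one-loop running can be sketched in a few lines. Using approximate inverse couplings at M_Z (assumed input values) and the b_i coefficients of equations 1.30 and 1.31, the evolution follows α_i⁻¹(Q) = α_i⁻¹(Q_0) − b_i/(2π) ln(Q/Q_0), with SM coefficients up to 1 TeV and MSSM coefficients above, as described in the text:

```python
import math

MZ, TEV, GUT = 91.2, 1.0e3, 2.0e16      # scales in GeV
alpha_inv_mz = [59.0, 29.6, 8.5]        # approx. alpha_{1,2,3}^{-1}(M_Z)
b_sm = [41 / 10, -19 / 6, -7]           # SM one-loop coefficients
b_mssm = [33 / 5, 1, -3]                # MSSM one-loop coefficients

def run(alpha_inv, b, q0, q):
    """One-loop evolution of the inverse couplings from q0 to q."""
    t = math.log(q / q0)
    return [a - bi / (2 * math.pi) * t for a, bi in zip(alpha_inv, b)]

at_tev = run(alpha_inv_mz, b_sm, MZ, TEV)   # SM running below 1 TeV
at_gut = run(at_tev, b_mssm, TEV, GUT)      # MSSM running above 1 TeV

# The three inverse couplings come out within about one unit of each
# other near 2e16 GeV: the near-unification seen in figure 1.3.
print(at_gut)
```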
Chapter 2
Observing Particles In An
Experiment
In order not to be constrained to metaphysics, it is essential in physics to do experiments. The theory must be tested and related to the observed world. Modern experimental particle physics depends on advanced accelerator technology, detectors, triggers, data acquisition and offline computing.
Figure 2.1: Experiments at the CERN LHC ring.
Only e⁻, u, d, neutrinos and force particles are needed to explain the current matter and its possible transformations; the two other generations of these particles are produced in experiments by creating high energy conditions. The common way to obtain such conditions in a laboratory is to build accelerators, where particles are accelerated close to the speed of light before they collide. Together with the detectors and a grid of computers, the accelerator acts as a gigantic microscope capable of observing phenomena at a scale of 10⁻¹⁸ m. The conditions in such a collision are comparable to the environment in the universe 1 ps after
the Big Bang. In the collision new particles are created, which quickly decay into long lived
particles. A brand new accelerator, the Large Hadron Collider (LHC), and several detectors
built around it, have just started operation at CERN, see figure 2.1. In addition to the more
extreme conditions provided by the accelerator the new computing grid is also driving particle
science forward. The amount of data that can be stored and processed has reached new levels.
Since particles are such small objects, it is not trivial to observe what happens after a collision.
The key is to have good knowledge about how particles interact in matter, and from this create
an advanced detector around the interaction point. Lots of fine engineering and material
research is essential. This chapter gives an overview of the basic idea behind detection of
different particle types, and shows how the ATLAS detector was constructed to make use of
this knowledge to recognize particles and measure their properties.
2.1
Arranging For High Energy Collisions, The LHC
The Large Hadron Collider (LHC) is an accelerator that will collide protons on protons at a center-of-mass energy of 14 TeV at the design luminosity of 10³⁴ cm⁻²s⁻¹. The machine will also collide lead ions with a peak luminosity of 10²⁷ cm⁻²s⁻¹ [20]. The luminosity, energy and particle type are the three important parameters of an accelerator; the luminosity gives the number of particles per unit area per unit time. The LHC ring is now the most cutting-edge particle accelerator in the world, located at CERN near Geneva, Switzerland. The 27 km long ring is placed 100 m underground. It needs to be big to reach high energies and minimize synchrotron radiation, and underground to keep the collision environment as stable as possible. The LHC is now the most powerful tool for particle physics research, since parameters like center-of-mass energy and luminosity are higher, by a factor of 7 for energy and a factor of 100 for luminosity, than in any other experiment currently available.
The reason for colliding particles is to create an extreme environment where, subsequently,
energy is transformed into particles that are no longer present in the low energy universe.
These particles will not live long enough to reach the active parts of the detector. Only a
few of the particles mentioned in the previous chapter, such as electrons, muons, photons,
protons, neutrons, pions and kaons, will be detected.
2.2
Behavior of Particles in Matter
Charged particles interact electromagnetically and lose energy to the atomic electrons in the material they penetrate. This gives rise to ionization and excitation. The Bethe-Bloch formula (eq. 2.1) is used to describe the energy loss of heavy charged particles in a material. If the energy loss to one electron is known, the total loss can be obtained by multiplying by the number of electrons within all reasonable distances from the trajectory. Correction effects, such as the fact that electrons far away from the trajectory are shielded because the electric field around the particle polarizes the atoms, are also accounted for in the formula.
⟨dE/dx⟩ = −4π N_A r_e² m_e c² z² (Z/A) (1/β²) [ (1/2) ln( 2 m_e c² γ² β² T_max / I² ) − β² − δ/2 ]        (2.1)
Here I is the ionization potential, T_max is the maximum energy transferred in one collision, δ corrects for shielding effects, Z is the atomic number, A is the mass number, γ is the Lorentz factor and β = v/c. I, Z and A are properties of the target, while z is the charge of the particle going through the material. This gives an idea of what has to be taken into account when calculating the energy loss of heavy charged particles. Radiative losses are not taken into account in this formula.
For lighter charged particles, such as electrons, the formula gets corrections, among others from the fact that the penetrating particle has the same mass as the atomic electrons. For light particles the Bremsstrahlung effect also becomes important. This process has a cross section going as 1/m², so the effect is bigger for electrons than for muons. This observation will be seen in section 4, where lepton isolation is studied.
Different materials have different characteristic thicknesses required to stop a certain particle. The thickness of a material that causes an electron to reduce its energy by a factor 1/e is called a radiation length, X_0. For atomic number Z > 2 it can be estimated with formula 2.2 [29], with an uncertainty of ∼ 2.5%:

X_0 = 716.4 g cm⁻² · A / ( Z(Z + 1) ln(287/√Z) )        (2.2)
The atomic number Z is the number of protons in the nucleus, and the mass number A is the total number of protons and neutrons. This formula motivates the use of lead as absorption plates between the active layers in the ATLAS electromagnetic calorimeter, stopping electrons without using too much space, X_0^lead = 0.56 cm.
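Formula 2.2 can be checked against the quoted number for lead; a small sketch (the density value is assumed, not taken from the text):

```python
import math

def radiation_length(Z, A):
    """X0 in g/cm^2 from formula 2.2; good to ~2.5% for Z > 2."""
    return 716.4 * A / (Z * (Z + 1) * math.log(287 / math.sqrt(Z)))

# Lead: Z = 82, A = 207.2, density ~11.35 g/cm^3 (assumed values)
x0_lead_cm = radiation_length(82, 207.2) / 11.35
print(x0_lead_cm)   # ~0.56 cm, as quoted for the ATLAS absorber plates
```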
Photons transfer almost all their energy to electrons and positrons in the material, and the electrons in turn lose energy as described above. In the detector we deal with photons from the collision itself, from Bremsstrahlung and from de-excitations. For photons it is the mean free path in the material that is considered. The process that dominates photon loss at high energies is e⁺e⁻ pair production. Neglecting the photoelectric effect and Compton scattering, the mean free path of the photon is λ_γ ≅ (9/7) X_0 [29].
Hadrons feel the strong nuclear force. The dominant mechanism of energy loss for neutral hadrons under 1 GeV is elastic scattering off the nucleons. When the energy is below 20 MeV they get absorbed in the material, and above 1 GeV hadronic cascades occur. All hadrons with energy above 1 GeV have similar cross sections for inelastic scattering, and this is the most important process at higher energies.
Neutrinos are inferred by making use of the principle of momentum and energy conservation: after all visible energy and momentum have been summed up, the missing values are assigned to the neutrinos. This is because neutrinos interact only weakly with matter. The Lightest Supersymmetric Particle (LSP) is also assumed to be non-interacting, and will in a Supersymmetry scenario also carry away energy.
The important features for particle identification are to measure the particle trajectories and to observe which detector material absorbs their energy. Magnets are installed to bend the trajectories of charged particles; knowledge of the bending radius of a particle provides a momentum measurement. The magnetic force works perpendicular to the direction of motion, F_m = q v × B, the sign of the charge determines whether the track bends right or left, and the momentum component perpendicular to the field is given by p⊥ = |q|Br. Stronger magnets provide more accurate charge determination and momentum resolution. A volume of 12000 m³ of the ATLAS detector will have a magnetic field exceeding 50 mT.
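In convenient units the relation p⊥ = |q|Br becomes p⊥[GeV/c] ≈ 0.3 B[T] r[m] for a unit charge; a small sketch (the field value is chosen only for illustration):

```python
def bending_radius(pt_gev, b_tesla, charge=1.0):
    """Radius of curvature in meters for a track of transverse
    momentum pt_gev (GeV/c) in a magnetic field b_tesla (Tesla)."""
    return pt_gev / (0.3 * b_tesla * abs(charge))

# A 10 GeV/c track in a 2 T field bends with a radius of ~16.7 m,
# which is why precise sagitta measurements are needed at high momentum.
r = bending_radius(10.0, 2.0)
print(r)
```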
To identify a particle it is normal to combine information from different detector types. A tracker records the trajectory of charged particles without too much energy loss. The calorimeter measures the energy carried by the particle by completely stopping it: electrons and photons on one hand, and hadrons on the other, need different materials and material thicknesses to be fully stopped.
Particles created in the interaction that decay before they reach the detector can be recognized by measuring secondary vertices. With accurate tracking it is possible to reconstruct the path and decide whether a particle came from the primary or a secondary vertex. This is the way to obtain information about, for example, c- and b-quark systems and tau leptons.
2.3 ATLAS, A Toroidal LHC ApparatuS
The Experimental Particle Physics group in Oslo is involved in the ATLAS detector at CERN. Here follows an overview of the various sub-detectors in the detector configuration. All information given here about the ATLAS detector can be found in ref. [32]. In addition to particle identification, the detector must provide precise information about momentum, energy and vertex measurements.
When there is a collision it is important to register the products of the collision in a precise manner. ATLAS is one of four main detector systems at the LHC ring. There is also the Compact Muon Solenoid (CMS) [33], built with the same goals as ATLAS but constructed differently; the LHCb experiment [34], dedicated to precision measurements of CP violation and rare decays of B hadrons; and A Large Ion Collider Experiment (ALICE) [31], which studies the physics of strongly interacting matter in the hope of investigating the quark-gluon plasma.
Figure 2.1 shows how the detector systems are located at the LHC ring and figure 2.2 gives
a better overview of the ATLAS detector.
Collisions as energetic as those 10^-10 s after the Big Bang are reached, equivalent to probing distances down to 10^-18 m. Bunches of approximately 10^11 protons collide 40 million times per second [20]. With an inelastic proton-proton cross-section of 80 mb, the LHC will produce a total rate of 10^9 inelastic events per second at design luminosity. Heavy particles decay before they reach the detector: their lifetime τ is inversely proportional to the decay width Γ. The width is in general larger for heavier particles, since more light particles are available for them to decay into.
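The lifetime-width relation is τ = ħ/Γ. A minimal numerical sketch (the 2.5 GeV width is an assumed round value, of the order of the Z boson's):

```python
HBAR_GEV_S = 6.582e-25  # reduced Planck constant in GeV*s

def lifetime_s(width_gev):
    # tau = hbar / Gamma
    return HBAR_GEV_S / width_gev

# A heavy particle with Gamma ~ 2.5 GeV decays essentially at the
# interaction point; only its decay products reach the detector:
print(lifetime_s(2.5))  # ≈ 2.6e-25 s
```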
After a collision, a pattern of particles with long lifetimes is registered in the detector. This pattern makes it possible to reconstruct the decay chains from the collision down to what is detected. Theory predicts phenomena to occur with certain probabilities, which correspond to cross sections and branching ratios. Kinematics reveals properties of mother particles; invariant-mass distributions, for example, can exhibit a peak or an edge at the mass of the mother particle. If there are deviations from theoretical expectations, something is not fully understood: either the detector must be understood better, or there is new physics producing new phenomena. Supersymmetry is one such exciting possibility to look for.
Figure 2.2: Cut-away view of the ATLAS detector.
High-precision measurements of the properties of the particles reaching the detector are required in order to compare with theoretical predictions. High luminosity and increased cross sections make the LHC well suited for precision tests of QCD, electroweak interactions and flavor physics. The top quark will be produced at the LHC at a rate of a few tens of Hz, providing the opportunity to measure its mass, couplings and spin. At the TeV scale there is also the possibility of reaching regimes where new physics can be discovered.
In the following is a brief description of the main components of the ATLAS detector. Table 2.1 introduces the coordinate system and kinematic variables used. The forward direction is along the beam pipe at high |η|, and |η| = 0 corresponds to θ = 90°. The difference in the rapidity of two particles is independent of Lorentz boosts along the beam axis.
2.3.1 Magnet system
Strong magnets and precise knowledge of the magnetic field throughout the detector are very important for the tracking systems. In ATLAS one solenoid and three toroids are installed. The solenoid provides a 2 T magnetic field for the inner detector, and the muon spectrometer works together with the toroids.
A barrel toroid and two end-cap toroids each consist of eight coils. The end-cap toroids are installed to give a radial overlap with the barrel coils, and are therefore rotated by an angle of
Origin: The interaction point
Positive x-axis: Pointing from the interaction point to the center of the LHC ring
Positive y-axis: Pointing upwards
z-axis: Defined by the beam direction
Azimuthal angle φ: Measured around the beam axis
Polar angle θ: Angle from the beam axis
Pseudorapidity η: η = − ln tan(θ/2)
∆R: Distance in the pseudorapidity-azimuthal angle space, ∆R = √(∆η² + ∆φ²)
Table 2.1: Coordinate system and kinematic variables used to describe the ATLAS detector. For massive objects the rapidity y = (1/2) ln[(E + pz)/(E − pz)] is used, where pz is the momentum component along the beam axis.
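The definitions of η and ∆R in table 2.1 translate directly into code; a small self-contained sketch (my own illustration, not ATLAS software):

```python
import math

def pseudorapidity(theta):
    # eta = -ln tan(theta/2); theta is the polar angle from the beam axis
    return -math.log(math.tan(theta / 2.0))

def delta_r(eta1, phi1, eta2, phi2):
    # Distance in eta-phi space, with delta(phi) wrapped into [-pi, pi]
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

print(pseudorapidity(math.pi / 2.0))  # ≈ 0: perpendicular to the beam
print(delta_r(0.0, 0.0, 1.0, 1.0))    # ≈ 1.414
```

Note that for massive objects it is the rapidity y from the table caption whose differences are exactly invariant under boosts along the beam axis; η coincides with y in the massless limit.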
22.5° with respect to the barrel. This optimizes the bending power acting on particles in the region between the two toroids. The barrel toroid has a peak magnetic field of 3.9 T and the end-cap toroids have a peak magnetic field of 4.1 T.
2.3.2 Inner Detector
A hermetic and robust charged-track pattern recognition is performed in the inner detector (or inner tracker). Knowing the trajectory very accurately means that good momentum and vertex resolution can be obtained. Three independent sub-detectors collect complementary information in the inner tracker: the silicon Pixel detector, the SemiConductor Tracker (SCT) and the Transition Radiation Tracker (TRT), see figure 2.3 (left). The overall momentum resolution of the tracking system is σ_pT/pT = 0.05% × pT ⊕ 1% (pT in GeV) in the pseudorapidity region |η| < 2.5. The pT thresholds are approximately 0.5 GeV, but as low as 0.1 GeV in some regions. A strong uniform magnetic field is essential to achieve such high precision.
Approximately 1000 particles will enter the inner detector every 25 ns within the pseudorapidity region |η| < 2.5. With such a high track density, fine detector granularity is needed. The track density in the detector material is proportional to 1/r², where r is the distance from the interaction point. The detector will be exposed to a high radiation level, so it is important in the design that the sensors, electronics, material and services can cope with this environment.
At inner radii the Pixel detector and the SCT provide high-resolution pattern recognition, and at larger radii the TRT provides continuous tracking to enhance the pattern recognition. In the barrel region the overall envelope of the Pixel detector is 4.4 cm < r < 24.2 cm, and the corresponding radial extensions for the SCT and the TRT are 25.5 cm < r < 54.9 cm and 55.4 cm < r < 108.2 cm. The overall Inner Detector envelope in the beam direction is 0 cm < |z| < 351.2 cm.
The silicon Pixel detector
The Pixel detector is located closest to the interaction point and uses the latest technology
with its oxygenated n-type silicon wafers. It has high granularity to provide accurate vertex
measurements and covers the pseudorapidity region |η| < 2.5. In the barrel region the pixels are mounted in concentric circles, and in the end-caps they are mounted on disks perpendicular to the beam axis. A particle will typically cross three layers of pixels, with three cylindrical barrel layers and three end-cap disks on each side. The number of layers is limited by cost and the wish to keep the material thickness as small as possible; more layers would of course give higher precision. A total of 61400 identical pixel elements each have a minimum size of 50 × 400 µm². The intrinsic accuracy is 10 µm in the Rφ plane and 115 µm in the z direction. There are 80.4 million readout channels.
Accurate vertex measurements give information about short-lived particles such as B hadrons and τ leptons. The silicon sensors are operated at low temperature, −5 to −10 °C, to ensure adequate noise performance after radiation damage. Because of the high radiation dose when operating at design luminosity, the innermost layer must be replaced after three years.
The SemiConductor Tracker
The SCT is located just outside the Pixel detector. From eight silicon strip layers, four space points per track are obtained. The micro-strips are made of more traditional single-sided p-in-n sensors, and they are mounted on a barrel and two sets of end-cap disks. There are 15912 such sensors, and they operate much like a classical solar cell in reverse. Each strip has a mean pitch of 80 µm, leading to an intrinsic accuracy, in both barrel and end-caps, of 17 µm in the R-φ direction and 580 µm in the z direction. Both Pixel and SCT are operated at low temperature to minimize the impact of radiation damage; the effective doping concentration also grows over time in a temperature-dependent way. The SCT has 6.3 million readout channels and covers the region |η| < 2.5.
The Transition Radiation Tracker
The outer layer of the inner detector is the TRT. This detector eases pattern recognition, since it yields a large number of hits per track (typically 36). It is made up of gaseous straw-tube elements with a material between them that produces transition radiation. The gas is a Xe/CO2/O2 mixture. Each straw has a diameter of 4 mm. The TRT provides electron identification complementary to the calorimeter, over a large range of energies. The intrinsic accuracy is 130 µm per straw, and only the R-φ information is obtained. The barrel straws are organized in layers of 73 straws parallel to the beam axis, and in the end-caps 160 straw layers lie radially in the wheels. The pseudorapidity coverage of the TRT is |η| < 2.0.
The lower precision per hit compared to the silicon detectors is compensated by a larger number of measurement points and a longer measured track length. The TRT operates at room temperature and has 351000 readout channels.
When information from the inner-detector system is combined, there is high precision in both the R-φ and z coordinates (σ_pT/pT ∼ 0.05% × pT). Precision tracks close to the interaction point are combined with a larger number of points in the TRT at larger radius. This gives a robust pattern recognition.
Figure 2.3: Cut-away view of the Inner Detector to the left and the Calorimeter system to
the right.
2.3.3 Calorimeters
The calorimeters are made up of materials that absorb the energy of the particles they are designed to measure. The goal is to determine the energy carried off by the particle entering the material. A good missing transverse energy (ETmiss) measurement is very important for SUSY searches, where R-parity conservation, resulting in a non-interacting LSP, is assumed. Different materials are used to pick up the physics processes of interest. The calorimeter depth is important, since the calorimeter shall register the energy in electromagnetic and hadronic showers. The optimum is to absorb all the energy of electrons, positrons, photons and hadrons. This also limits the punch-through to the muon spectrometer, so that hits there are likely to come from muons only. The variables of consideration are the radiation length (X0) and the interaction length (λ). Space and cost are limiting factors. The calorimeter consists of four parts, shown in the right picture of figure 2.3.
The electromagnetic calorimeter has a relative resolution σE/E = 10%/√E ⊕ 0.7%. The corresponding value for the hadronic calorimeter is σE/E = 50%/√E ⊕ 3% for the barrel and the end-caps, and σE/E = 100%/√E ⊕ 10% in the forward direction, the range 3.1 < |η| < 4.9. Notice that the relative resolution improves with energy, while the required depth to absorb the shower increases only logarithmically with energy.
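Here ⊕ denotes addition in quadrature. As a numerical illustration (the 100 GeV energy is chosen arbitrarily, not taken from the text), the barrel electromagnetic resolution evaluates to roughly 1.2%:

```python
import math

def relative_resolution(e_gev, stochastic, constant):
    # sigma_E / E = (stochastic / sqrt(E)) (+) constant, added in quadrature
    return math.hypot(stochastic / math.sqrt(e_gev), constant)

# Electromagnetic calorimeter, 10%/sqrt(E) (+) 0.7%, at E = 100 GeV:
print(relative_resolution(100.0, 0.10, 0.007))  # ≈ 0.0122, i.e. ~1.2%
```

At high energies the constant term dominates, which is why it must be kept small.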
LAr electromagnetic calorimeter
This is a lead-LAr detector, meaning lead absorption plates between the active LAr layers. The electromagnetic calorimeter covers |η| < 3.2. The pseudorapidity region matching the inner detector, |η| < 2.5, is used for precision measurements of electrons and photons. Here very fine granularity is needed, and the calorimeter is segmented into three sections in depth. Coarser granularity in the rest is sufficient for jet reconstruction and ETmiss measurements. Consult figure 2.4 for these values. The thickness of the LAr electromagnetic calorimeter is ∼22 X0 in the barrel and ∼24 X0 in the end-caps.
Figure 2.4: Main parameters for the calorimeter system [32].
Hadronic calorimeters
The hadronic calorimeter consists of the Tile calorimeter, LAr hadronic end-cap calorimeter
and LAr forward calorimeter. Granularity and pseudorapidity coverage are given in figure 2.4.
The Tile calorimeter is the innermost hadronic calorimeter, placed directly outside the electromagnetic calorimeter. Here the sampling medium is made of scintillator tiles and the
absorber medium is steel. The total detector thickness is 9.7 λ at η = 0.
The LAr hadronic end-cap calorimeter is made of 25 and 50 mm thick copper plates interleaved with 8.5 mm LAr gaps, the liquid argon being the active medium of this sampling calorimeter.
The LAr forward calorimeter (FCal) is 10 λ deep and covers the region closest to the beam. It consists of three modules: first copper plates, optimized for electromagnetic measurements, then tungsten, to measure hadronic interactions. The sensitive medium between the rods and tubes is LAr. The geometry of the calorimeter is designed to control the gaps, which are 0.25 mm wide in the first section; these small gaps avoid problems due to ion build-up.
LAr means that the active detector medium is liquid argon. Liquid argon is used because it has a stable response over time, it is radiation hard and it behaves linearly.
The resolution is worse than in the electromagnetic calorimeter, because some of the energy is lost in various reactions. Among the reasons for the energy loss are: hadronic energy is used to break up nuclear bindings; muons and neutrinos created in the showers carry energy away; and there are fluctuations between the electromagnetic and hadronic components of the shower (π0 → 2γ versus charged pions).
2.3.4 Muon Spectrometer
A system of four detector types allows identification and measurement of muons. It is made up of separate trigger and high-precision tracking chambers, designed to measure the momentum of charged particles that are not stopped by the calorimeter. In the region |η| < 1.4 the particles are bent by the magnetic field from the large barrel toroid, in the region 1.6 < |η| < 2.7 they are bent by the two smaller end-cap magnets, and in between by a combination of the two fields. It is important to know the distribution of the magnetic field at each point. The goal is to determine the bending power to a few parts in a thousand, so the field is monitored by 1800 Hall sensors.
Due to multiple scattering there will be a degradation of the resolution, which must be minimized. The B-field is mostly perpendicular to the muon trajectory, and muons with momenta from a few GeV (∼3 GeV) to a few TeV will be detected.
The Monitored Drift Tubes (MDT) and the Cathode Strip Chambers (CSC) allow precision measurements. The MDT is robust since each sensor drift tube is mechanically isolated. The CSC has higher granularity; this multi-wire proportional chamber can handle the background conditions in the innermost plane. The trigger system consists of the Resistive Plate Chambers (RPC) in the barrel and the Thin Gap Chambers (TGC) in the end-caps. This system shall provide bunch-crossing identification, provide well-defined pT thresholds, and measure the muon coordinate in the direction orthogonal to that determined by the precision-tracking chambers. Overall, the region |η| < 2.7 is covered by the muon system. The spatial resolutions of the muon spectrometer are given in table 2.2, and the required momentum resolution is σ_pT/pT = 10% at pT = 1 TeV for |η| < 2.7.
Type  Function  Chamber resolution (z/R | φ | time)   Measurements per track (barrel | end-cap)
MDT   tracking  35 µm (z)  | -      | -        20 | 20
CSC   tracking  40 µm (R)  | 5 mm   | 7 ns     -  | 4
RPC   trigger   10 mm (z)  | 10 mm  | 1.5 ns   6  | -
TGC   trigger   2-6 mm (R) | 3-7 mm | 4 ns     -  | 9
Table 2.2: Parameters of the muon detectors. The listed spatial resolution does not include chamber-alignment uncertainties.
2.3.5 Forward Detectors
The ATLAS forward region is covered by three smaller detectors. LUCID (LUminosity measurement using Cerenkov Integrating Detector) and ALFA (Absolute Luminosity For ATLAS) have as their main function to determine the luminosity. LUCID is located at ±17 m and ALFA at ±240 m. The third detector, at ±140 m, is the ZDC (Zero-Degree Calorimeter), which is important for determining the centrality of heavy-ion collisions.
2.3.6 Trigger and Data Acquisition
With the enormous amount of data it is physically impossible to store everything, and not all events are interesting to physicists. For instance, one interesting Higgs event is expected per 10^9-10^10 minimum-bias events. In addition to storage space, the time it takes to read out an event makes it impossible to process all events. An effective system to pick out interesting event candidates is needed. The trigger system selects about 100 interesting events per second out of 1000 million others, and the data-acquisition system channels the data from the detectors to storage. Here is a quick overview of how the events are selected.
The trigger system formally consists of two parts: the first level trigger (L1) and the high
level trigger (HLT). Practically the HLT is again divided into the Level 2 (L2) trigger and
the event filter (EF). The data acquisition system (DAQ) receives and buffers the event data
from the detector-readout electronics at the L1 trigger rate. The main difference between L1
and HLT is that L1 is a hardware trigger that bases its decisions on pure electronics, while the HLT is a software trigger and can therefore use reconstruction algorithms and take information from more detector parts into account. This makes the L1 trigger very fast: the total decision takes less than 2.5 µs. The event rate is reduced to 75 kHz, and Regions-of-Interest (RoI) with interesting features are defined for the next stages. L1 searches for signatures from high-pT muons, electrons/photons, jets, τ leptons that decay into hadrons, large missing transverse energy, ETmiss, and large total transverse energy, ET.
The L2 trigger is seeded by the RoI information from the L1 trigger. The trigger rate is now reduced to 3.5 kHz; since software processes are running, the decision takes more time, about 40 ms. Finally, the event filter reduces the event rate to ∼200 Hz. Its event processing takes of the order of four seconds, because complex algorithms are executed. It connects to databases to obtain the complete geometry and detector conditions. The HLT algorithms use the full granularity and precision of the calorimeter, muon chambers and inner detector. Filtered events are moved to permanent storage, and are ready for a first reconstruction at the Tier-0 center at CERN. Each year the computing system will analyze 1000 million recorded events.
Figure 2.5: Trigger DAQ, event rate and decision stages at ATLAS
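The successive rate reductions can be summarized in a short sketch (the numbers are the approximate design rates quoted in the text):

```python
# Approximate design rates for each trigger stage, as quoted in the text.
stages = [
    ("bunch crossings", 40_000_000),      # 40 MHz
    ("L1 (hardware)", 75_000),            # decision in < 2.5 us
    ("L2 (RoI-seeded software)", 3_500),  # ~40 ms per event
    ("event filter", 200),                # ~4 s per event, then storage
]
# Rejection factor achieved by each stage:
for (src, r_in), (dst, r_out) in zip(stages, stages[1:]):
    print(f"{src} -> {dst}: 1 event kept per {r_in // r_out}")
```

The pattern is typical of staged triggers: each level gets more time per event and more detector information, but sees far fewer events.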
2.3.7 Detector Control System
Another important job is to ensure safe operation of the ATLAS detector hardware. The Detector Control System (DCS) supervises gas systems, power-supply voltages, cooling systems, magnetic fields, temperatures and humidity, and handles communication between sub-detectors.
2.3.8 GRID
The LHC will produce 15 petabytes of data every year. Compared to the amount of information printed worldwide in book format, the information from the LHC is 1000 times greater, so there is obviously a big need for processing capacity.
The solution has been not to store the data in one place, but to distribute it to many sites around the globe. It must also be possible for the thousands of scientists around the world to access the data.
The Worldwide LHC Computing Grid provides distributed data storage, data management and an analysis interface for all experiments at the LHC. The data is distributed around the world in a four-tier model. Tier-0 performs a first reconstruction of the data and writes the data to backup tape before it is distributed to the Tier-1 sites, a series of large computing centers where further reprocessing of the data is performed. Monte Carlo simulation is performed by the Tier-2 sites, smaller units that are also used to store real data for specific analyses. Scientists access the data through Tier-3 resources, which can be local university computer clusters.
2.4 Simulated Data
Before real data exist, Monte Carlo simulated data are used to investigate detector and physics performance at ATLAS. The complex experimental setup and the broad variety of physical processes produced at the LHC require an accurate simulation procedure. In ATLAS this routine is divided into event generation, detector simulation, digitalization and reconstruction [12].
2.4.1 Event Generation
The event generator produces events containing the physical processes of the theoretical model of interest. Components from different event generators are combined to give an accurate theoretical prediction. Minimum-bias events, signal events and background events are generated with different event generators, see table 3.3, because the various programs perform better or worse for different processes. Each process will in general have input from several generators: one program generates the hard process, another adds a parton-shower algorithm, and a third hadronizes the partons in the shower.
Two types of generators are used to obtain the simulated data for this thesis: general-purpose showering and hadronization event generators (pythia [28], herwig [22] and isajet [13]) and a program using next-to-leading-order (NLO) matrix elements with showering (MC@NLO [21]). Another widely used generator is the matrix-element generator for specific processes (alpgen [24]), where a specific final state is calculated to lowest order in perturbation theory.
The showering and hadronization generators can simulate a broad spectrum of initial and final states [19]. Based on the hard-scattering process, the parton showers are evolved to add higher-order effects. The events are produced with the frequency predicted by the theory, and the hard subprocess is the only theory-dependent part; other effects are obtained by the same algorithm for all partons. herwig and isajet are general-purpose Monte Carlo event generators for lepton-lepton, lepton-hadron and hadron-hadron collisions. Both can take input parameters, such as Supersymmetry parameters and masses, from isajet, which generates 2→2 subprocesses. The herwig generator can also work with jimmy, an external package that uses a multiple-scattering model for the underlying event, the part of the interaction that does not take part in the hard process.
2.4.2 Detector Simulation
The simulation of the Inner Detector, LAr and Tile Calorimeters and Muon System is done with the GEANT [2] package. A detailed simulation is required, with many interactions per beam crossing and high background noise, with the goal of including as broad a range of energies and process types as possible. Ionization of the gas in the detectors can occur at energies as low as 10 eV, while catastrophic energy losses of muons in the calorimeter can reach a few TeV. It is however impossible to simulate the complete bunch crossing (23 interactions on average for the LHC) because of the CPU time required. Instead the approach is to simulate single pile-up events and add them together in the end.
The geometry and layout of the detector are accurately taken into account so that systematic effects caused by the detector can be studied. To make the program flexible, the geometry parameters are stored in detector-description banks and can be altered.
2.4.3 Digitalization
The output from the digitalization is in a format similar to the readout signal from the real detector. In this step the physical information is re-processed to mimic the detector output, so that readout characteristics can be changed; it is, for example, possible to vary parameters such as granularity and strip pitch. This step is fast, and is done separately in order to save CPU time in the detector-simulation step. It is however possible to add pile-up effects here, which makes the procedure more time consuming.
Pile-up is the minimum-bias events superimposed on an interesting event selected by the trigger. The detector has to deal with multiple events at the same time, since the LHC, with a design luminosity of 10^34 cm^-2 s^-1 and bunches separated by 25 ns, has a mean number of 23 collisions per non-empty bunch crossing.
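The quoted mean can be checked roughly from the design luminosity and the 80 mb inelastic cross-section; the LHC filling numbers below (2808 filled bunches, 11.245 kHz revolution frequency) are my assumed round values, not from the text:

```python
# Mean pile-up mu = (inelastic event rate) / (non-empty bunch-crossing rate).
LUMI = 1e34                    # design luminosity, cm^-2 s^-1
SIGMA_INEL = 80e-27            # 80 mb in cm^2
CROSSING_RATE = 2808 * 11245   # assumed: filled bunches * revolution frequency

event_rate = LUMI * SIGMA_INEL   # ≈ 8e8 inelastic events per second
mu = event_rate / CROSSING_RATE  # ≈ 25, in fair agreement with the 23 quoted
print(event_rate, round(mu, 1))
```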
2.4.4 Reconstruction
Reconstruction must be performed whether it is real or simulated data that is being dealt with. First the physics events are reconstructed based on each detector alone. Then information from all detector parts is combined to obtain an accurate physics measurement.
2.5 Work At CERN
During the master's programme we had several opportunities to go to CERN to learn and contribute to the work in situ. During summer 2007 we stayed for six weeks and followed the Summer Student lectures, taught by some of the best lecturers in the field. This gave us a broad taste of particle physics: theory, detector development, software and interpretation of recent experimental results. In this period we also took part in the SemiConductor Tracker (SCT) End-Cap installation and we were assigned the duty of implementing a static web page for the SCT Detector Control System.
Later we returned for the Milestone 5 (M5) cosmic-ray test in October 2007, and again during summer 2008, when we took part in the pre-beam tests and M8.
2.5.1 SCT End-Cap Installation
In summer 2007 we took part in the SCT End-Cap installation, working on the LMT (low-mass tape) connection and testing. This work, coordinated by Alessandra Ciocio, was related to the power supply (PS) system of the SCT detector.
Each detector module has two independent power supplies: the high voltage (HV) and the low voltage (LV). For each part of the detector to be tested on a given day, we checked which crate corresponded to its modules. In total 88 PS crates for the SCT barrel and end-cap modules were tested. A PS crate is a structure that holds 12 LV cards and six HV cards [7]. The LV cards have four output channels while the HV cards have eight. The high-voltage power supply provides a normal bias voltage of 150-480 V, which is needed to deplete a sensor. The low-voltage supply provides digital and analogue voltages to the systems used for readout of the optical links. In addition, the PS system provides the control voltage for the p-i-n diodes.
The LMT connecting and testing is important for two reasons: firstly, it ensures that the cables themselves perform correctly, and secondly, it confirms that the mapping is correct. The mapping provides an overview of which readout channel in the crate corresponds to which module in the detector. The testing equipment we used included Agilent and Keithley devices: the Agilent data-acquisition unit measures the resistance between the power lines, and the Keithley source meter tests the high-voltage circuit [16].
The LMTs are connected to PS racks located in the USA15 and US15 areas of the ATLAS pit, where we were stationed. We were put in contact with a colleague positioned inside the cryostat: this person called us when a part had been connected, and we went through the testing procedure. The testing device was connected to the LV and HV channels in the test card one by one, and by clicking the "test" button on a laptop the status of the channel was read off. In case of anomalies, an expert was called to fix the problem, which was later reported in the elog.
The work gave an impression of all the engineering details that have to be taken into account in the experiment.
2.5.2 Static Web Page for the SCT DCS
The other duty was a programming job. The task was to display the status of the SCT Detector Control System (DCS) on a static web page. Originally a cron job, used by the server to perform functions automatically, sent a summary by e-mail to people subscribed to this update. The mail contained a list of error states in the detector. We made extensions such that the cron job generates a web page containing summary plots of the detector status, as well as plots of channel values outside some allowed range. This work was done together with Eirik Gramstad, master student at the University of Oslo.
Saverio D'Auria, from the University of Glasgow, provided a perl script with the database query, retrieving information on bias voltage, low voltage, module temperature and dew points inside the SCT thermal enclosure. The dew point is a function of the temperature inside the detector. We made a perl script extracting the information about which variable, crate and channel the anomalies occurred in, and generated an HTML file displaying the information. The plots were made in ROOT, and the web page was used during M5, the cosmic-run milestone. Figure 2.6 shows an example of the plots displayed by the web page.
The main goal of the SCT DCS is to ensure safe operation of the SCT detector. It protects the detector from failure conditions and controls the power supplies and cooling [7].
Figure 2.6: Static Web Page: Example of a histogram that was displayed
2.5.3 Data Quality-shift at M5, pre-beam test and M8
We took part in the Milestone M5 run from 22 October to 5 November 2007, some pre-beam runs during June and July, and the Milestone M8 combined run from 10 to 20 July 2008.
Online DQ shifts are done in Point 1, the ATLAS control room, see figure 2.7. A DQ shifter is responsible for checking that the data from all detectors are collected and stored correctly, and for continually checking that plots and values are reasonable. We have four screens at our disposal to run various applications.
During a run we open the Data Quality Monitoring Display (DQM Display) and click through the hierarchical structure to check that the histograms are being filled. The online status flag in the display is updated automatically to the DQ Status database, and we check that this is done correctly.
Figure 2.7: Data Quality shift in Point 1
Figure 2.8: Cabling work in USA15
In case anomalies are detected in the system, the monitoring shifter for the relevant sub-detector is informed. For instance, if we see sudden jumps in the LVL1, LVL2 or EF trigger rates in the TriggerPresenter application, we notify the shift leader. The Atlantis event display on the projector is controlled from the DQ desk. We make sure that we actually see hits from the current run in the detector parts that are active during the run. There is also a run summary page as well as offline histograms to check. For long runs, lasting more than one hour, we updated an offline database by hand during M8; this will be automated later. At the end of a shift we submit a summary of the findings and problems to an electronic log book.
Chapter 3
Data Sets and Strategy
The main goal of this study is to investigate search strategies for discovering Supersymmetry at the LHC. If Supersymmetry is realized in nature, it is important to be prepared to recognize its manifestation in the detector. Kinematic variables showing the properties of the particles, such as energy and momentum, are reconstructed and plotted as histogram distributions. The goal is to find distributions that significantly distinguish new physics from Standard Model physics. Especially for early data it is desirable to have robust search channels and methods, both because the detector will not yet be operating at full luminosity, and in particular because the rules of nature are not known. The signatures searched for should for this reason be as model-independent as possible. If new physics is discovered, the next task is to determine which model best describes the data. This involves precision measurements of parameters, and will take place later on in the experiment.
In this thesis the search channels include leptons, and the search is performed for six mSUGRA
points. As long as R-parity conservation is assumed, a large amount of energy is carried off by the
lightest Supersymmetric particle. Multi-lepton channels are also potentially important, but
the size of the signal is sensitive to which sparticle pair is dominantly produced and to the
corresponding branching ratios to the various decay modes. The Same Sign di-lepton channel
and the one lepton channel will be investigated in chapter 5; later in this chapter the
advantages and disadvantages of these channels are discussed.
3.1 Cut And Count Method
After picking out good leptons in chapter 4, a more signature-based study can start, aiming
directly at separating signal from background. The search method used is a cut and count
method, which is a robust method for an early ATLAS Supersymmetry search. The idea is
to take a set of variables that are observable in the experiment and perform one-dimensional
cuts on each of them (the variables are described in chapter 5.1). There is no specific shape
to search for, as there is in searches for specific particles, for instance by looking at invariant
mass plots. It is important to predict the background very precisely, because the goal is to
recognize that the observed count for a variable exceeds the expectation. Each of the
one-dimensional distributions can be used to check experimental data against simulated data,
but it is usual to pick one variable to look
at the effect for the various cuts.
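The cut and count selection described above can be sketched in a few lines. This is only an illustration: the event records, variable names and thresholds below are invented for the example and are not the cut values used in this analysis.

```python
# Minimal cut-and-count sketch: an event passes if it survives every
# one-dimensional cut (all names and thresholds are hypothetical).
def passes_cuts(event, cuts):
    """Apply independent one-dimensional lower cuts; all must pass."""
    return all(event[var] > threshold for var, threshold in cuts.items())

cuts = {"met": 100.0, "pt_jet1": 100.0}    # GeV, example thresholds
events = [
    {"met": 150.0, "pt_jet1": 210.0},      # passes both cuts
    {"met": 80.0,  "pt_jet1": 210.0},      # fails the met cut
]
n_selected = sum(passes_cuts(ev, cuts) for ev in events)
print(n_selected)  # number of events surviving all cuts
```

In a real analysis the surviving counts for signal and background samples would then be compared, which is the "count" part of the method.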
3.2 Optimizing The Cut According To The Significance
Statistics is essential for the understanding of the experimental results. After candidate events
have been selected by applying cuts, the signal must be extracted from the histogram. Since
it is known what pattern is searched for in the detector, the events that contain this phenomenon
are selected. The histogram distribution shows some kinematic variable such as the missing
transverse energy (E/T). It is important to keep in mind that the selection efficiency will never
be 100%, so the events in the histograms are in most cases a combination of signal events and
background events.
In a counting experiment, significance testing for a possible signal must be done in order to
say whether the signal can be observed or not. Significance is a quantity that tests the probability
that an observed count in the signal region could have been produced by fluctuations in the
background sources alone. From the Standard Model there are predictions for how many
background counts are expected in a certain area, or for the shape of the background
distribution. In the experiments, or from Monte Carlo simulations, data is collected.
If the counts or the shapes of the distributions in the data sets exceed or differ significantly
from the background expectations, discovery of new physics can be claimed [14]. The numbers
of counts can also be comparable to the background predictions. In that case the experiment
can quote upper limits for the new theory. However, there is a floating boundary between
counts comparable to the prediction and counts that clearly exceed the expectations. A limit
for discovery has to be defined: the region where the probability under the null hypothesis¹ is
so small that it is interesting, but not so large that it can be neglected. In particle physics it is
common to say that above five σ there is a discovery. This is equivalent to saying that if the
number of signal counts is five times higher than the estimated background error, the signal is
trusted to be observed. There are different ways to define significance, Z. Usually it is given as
the ratio of the estimated signal to its standard deviation. One frequently used form is the
signal-to-noise ratio (Z_sb)
Z_sb = S/√b    (3.1)
where S is the signal count in the area and b is the background count. Under the null
hypothesis this statistic is approximately Gaussian for sufficient data [27]. With this expression
the uncertainty in the background estimation is completely ignored. The result is that the
significance gets overestimated, but it is possible to add an ad-hoc correction for the background
uncertainty; b → b + δb.
The significance Z_sb is frequently used for optimizing selections in high energy particle physics
because of its simplicity. Observed significances and predicted significances are different: the
prediction problem involves the probability of making an observation at a given significance
level. In measurements one uses the power of the analysis, with Z = S/√(S + b) [26].
¹ The null hypothesis is that the signal is produced only by background fluctuations. This is assumed true
until there are other statistical indications.
Searching for Supersymmetry involves observing events that cannot be described by the
background predictions. The Same Sign channel is a selection criterion that is predicted
to give more signal counts than background counts. Further selection criteria can also be
applied, such as high expected E/T and a high number of jets in the signal final state. The
significance in a histogram can be obtained from equation 3.2.
Z = (⟨−2 ln Q⟩_b − ⟨−2 ln Q⟩_{S+b}) / σ_b    (3.2)
where Q is the sum of likelihood ratios in all bins, which for each bin follows Poisson statistics.
For a single bin (a counting experiment), Z is equal to Z_sb in equation 3.1 [26]. The
numbers for S and b are read off the histogram distribution. Let N_tot be the number of
observed events and let b be the average rate of estimated background events; then the number
of signal events is S = N_tot − b. The mean of the background rate is equal to the
variance for a single bin, as we see by comparing equations 3.1 and 3.2.
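For a single bin the signal extraction S = N_tot − b and the significance of equation 3.1 can be sketched directly; the counts below are invented for illustration.

```python
import math

# Single-bin counting sketch: S = N_tot - b, then Z_sb = S / sqrt(b)
# (equation 3.1). The numbers are made up for illustration.
def z_sb(n_tot, b):
    s = n_tot - b            # estimated signal count
    return s / math.sqrt(b)  # significance, background uncertainty ignored

print(z_sb(n_tot=140.0, b=100.0))  # S = 40, Z = 40/10 = 4.0
```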
To decide whether the signal is well separated from the background, we use significance. This
measure can be defined in various ways. In chapter 5 we will consider three definitions of
significance that will be quoted when the results are presented:
S/√b ,    S/√(b + σ²_syst) ,    S/√(S + b)    (3.3)
S/√b is the significance for discovery under the assumption that the background b can
be very accurately predicted. Then only the statistical uncertainty on the background,
σ_stat = √b, enters the formula. If there is also a systematic uncertainty on the background
estimate, σ_syst, then the two uncertainties must be added in quadrature, σ_tot =
√(σ²_stat + σ²_syst) = √(b + σ²_syst). This is incorporated in the second formula, which is
therefore more realistic. For comparison a third definition, S/√(S + b), appropriate for
measurements rather than discovery, will also be shown in the tables. In ATLAS the
systematic uncertainties on the Standard Model background measurements were estimated in
SUSY CSC Notes 1-3 [10] [11]. For W, Z and top background the estimate is 20%, while for
QCD a 50% uncertainty is expected.
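The three definitions of equation 3.3 can be sketched as follows; the signal and background counts are invented, and the 20% relative systematic corresponds to the W/Z/top estimate quoted above.

```python
import math

# The three significance definitions of equation 3.3 (sketch).
def z_discovery(s, b):
    """S/sqrt(b): discovery, background perfectly predicted."""
    return s / math.sqrt(b)

def z_discovery_syst(s, b, rel_syst):
    """S/sqrt(b + sigma_syst^2), with sigma_syst = rel_syst * b."""
    sigma_syst = rel_syst * b
    return s / math.sqrt(b + sigma_syst**2)

def z_measurement(s, b):
    """S/sqrt(S + b): appropriate for measurements."""
    return s / math.sqrt(s + b)

s, b = 50.0, 25.0                   # invented counts
print(z_discovery(s, b))            # 50/5 = 10.0
print(z_discovery_syst(s, b, 0.2))  # with a 20% systematic on b
print(z_measurement(s, b))          # 50/sqrt(75)
```

Note how the systematic term reduces the quoted significance, which is why the second definition is called the more realistic one.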
3.3 Characteristics Of The mSUGRA Points
Preparations for a large experiment like the LHC involve production of simulated data. These
are run through detector reconstruction algorithms and later analyzed. This way the software
gets developed, physicists get trained, search strategies are worked out and predictions can be
made. In this chapter the characteristics of six mSUGRA signal data sets and the Standard
Model background data sets will be discussed.
All samples were simulated for the Computing System Challenge (CSC), using athena
release 12.0.6. The data sets are fully simulated using Geant 4 [2], and the mSUGRA spectra
were calculated by Isajet [13] and generated by herwig [22]. There are four regions with
a relic abundance consistent with the measured energy density of dark matter, and the six
mSUGRA points are chosen to represent regions roughly consistent with this allowed space.
Each region has its distinct process for effective neutralino annihilation, and so the most
probable production modes and decay chains differ for each point. All mSUGRA points
have a gluino mass less than 1 TeV, which makes g̃g̃ production available for all points at the LHC.
By studying points with a wide range of event topologies, one can optimize the search methods
so that ideally all points can be seen. The knowledge gained from studying a few typical
points in mSUGRA parameter space can be used to scan a wider range of points. Such scans
are based on fast parameterized simulation, since the number of signal points is so large.
Point   Region          m0 [GeV]  m1/2 [GeV]  A0    tan(β)  sign(µ)
SU1     Coannihilation  70        350         0     10      +
SU8.1   Coannihilation  210       360         0     40      +
SU2     Focus           3550      300         0     10      +
SU3     Bulk            100       300         -300  6       +
SU4     Low mass        200       160         -400  10      +
SU6     Funnel          320       375         0     50      +

Table 3.1: Values for the six mSUGRA points investigated in this thesis.
In section 1.4.1 it was explained that five parameters are enough to specify an mSUGRA point.
The masses of the particles are roughly set by the values of the scalar masses m0 and the
gaugino masses m1/2 at the GUT scale. Knowledge of the mass spectrum is important to build
the event topology for each point. Table 3.1 specifies the six mSUGRA points that will be
studied: SU1, SU2, SU3, SU4, SU6 and SU8.1. Each point will be considered in some more
detail at the end of this section. A typical spectrum [9], at the energy of the LHC:
m(g̃) ∼ 2.5 m1/2    (3.4)
m(χ̃0_2) ∼ m(χ̃±_1) ∼ 0.8 m1/2    (3.5)
m(χ̃0_1) ∼ 0.4 m1/2    (3.6)
m(l̃±_R) ∼ √(m0² + 0.15 m1/2²)    (3.7)
m(l̃±_L) ∼ √(m0² + 0.5 m1/2²)    (3.8)
m(q̃_L,R) ∼ √(m0² + 6 m1/2²)    (3.9)
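As a rough cross-check, the rule-of-thumb spectrum of equations 3.4-3.9 can be evaluated for the SU3 point (m0 = 100 GeV, m1/2 = 300 GeV from table 3.1). This is only the approximate spectrum, not the Isajet calculation behind table 3.2; the function name is invented.

```python
import math

# Approximate mSUGRA masses from equations 3.4-3.9 (sketch).
def spectrum(m0, m12):
    return {
        "gluino":    2.5 * m12,                            # eq. 3.4
        "chi2":      0.8 * m12,                            # eq. 3.5 (also chargino_1)
        "chi1":      0.4 * m12,                            # eq. 3.6
        "slepton_R": math.sqrt(m0**2 + 0.15 * m12**2),     # eq. 3.7
        "slepton_L": math.sqrt(m0**2 + 0.5 * m12**2),      # eq. 3.8
        "squark":    math.sqrt(m0**2 + 6.0 * m12**2),      # eq. 3.9
    }

for name, mass in spectrum(100.0, 300.0).items():   # SU3: m0=100, m1/2=300
    print(f"{name}: {mass:.0f} GeV")
# e.g. gluino ~ 750 GeV and chi1 ~ 120 GeV, in rough agreement with the
# Isajet values 717.46 GeV and 117.91 GeV listed in table 3.2
```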
Yukawa interactions reduce the scalar masses as the energies run from the high scale down to
the weak scale. This is the opposite of the effect from gauge interactions. Since Yukawa
interactions are important for the third generation, the third-generation squarks are lighter
than those of the first and second generations. Supersymmetric particles of the third generation
mix left- and right-handed states. The lowest mass eigenstates (t̃1, b̃1 and τ̃1) will be lighter,
see table 3.2. The mixing depends on the values of A0 and tan(β).
In the extreme cases where m1/2 >> m0 and m1/2 << m0 we see from equations 3.5-3.9 that
the squarks are much heavier than the sleptons in the first case, and that they are degenerate
in the second case. This is explained by the fact that when m1/2 >> m0, the radiative corrections
Particle  SU1      SU2       SU3      SU4      SU6      SU8.1
d̃L       764.90   3564.13   636.27   419.84   870.79   801.16
ũL       760.42   3563.24   631.51   412.25   866.84   797.09
b̃1       697.90   2924.80   575.23   358.49   716.83   690.31
t̃1       572.96   2131.11   424.12   206.04   641.61   603.65
d̃R       733.53   3576.13   610.69   406.22   840.21   771.91
ũR       735.41   3574.18   611.81   404.92   842.16   773.69
b̃2       722.87   3500.55   610.73   399.18   779.42   743.09
t̃2       749.46   2935.36   650.50   445.00   797.99   766.21
ẽL       255.13   3547.50   230.45   231.94   411.89   325.44
ν̃e       238.31   3546.32   216.96   217.92   401.89   315.29
τ̃1       146.50   3519.62   149.99   200.50   181.31   151.90
ν̃τ       237.56   3532.27   216.29   215.53   358.26   296.98
ẽR       154.06   3547.46   155.45   212.88   351.10   253.35
τ̃2       256.98   3533.69   232.17   236.04   392.58   331.34
g̃        832.33   856.59    717.46   413.37   894.70   856.45
χ̃0_1     136.98   103.35    117.91   59.84    149.57   142.45
χ̃0_2     263.64   160.37    218.60   113.48   287.97   273.95
χ̃0_3     466.44   179.76    463.99   308.94   477.23   463.55
χ̃0_4     483.30   294.90    480.59   327.76   492.23   479.01
χ̃+_1     262.06   149.42    218.33   113.22   288.29   274.30
χ̃+_2     483.62   286.81    480.16   326.59   492.42   479.22
h0       115.81   119.01    114.83   113.98   116.85   116.69
H0       515.99   3529.74   512.86   370.47   388.92   430.49
A0       512.39   3506.62   511.53   368.18   386.47   427.74
H+       521.90   3530.61   518.15   378.90   401.15   440.23
t        175.00   175.00    175.00   175.00   175.00   175.00

Table 3.2: Masses for the fully simulated SUSY samples. All masses are given in GeV.
to the sfermions will be large, since the masses are determined by the contribution from m1/2.
The first two generations of squarks will also be heavier than the gluino.
In the other extreme, most of the masses at the LHC scale will come from the m0 contribution.
Sfermions will be more or less degenerate, since they are proportional to the same factor of
m0², and much heavier than the gluino. In this regime the lightest neutralino states are Bino-
and Wino-like (χ̃0_1 ∼ B̃0, χ̃0_2 ∼ W̃0), and the light chargino will also be Wino-like
(χ̃±_1 ∼ W̃±) [30].
The squarks will never be much lighter than the gluino, but the gluino may be much lighter
than the squarks. In table 3.2 all sparticle masses have been accurately calculated using Isajet.
At the LHC, sparticles are dominantly produced via the strong interaction. It is natural to look for
SUSY in final states from the processes pp → g̃g̃, pp → g̃q̃ and pp → q̃q̃. To know which
final states are possible we must know the most probable decay chain for the various
production processes. It turns out that the production cross section σ is most sensitive to
the masses mq̃ and mg̃; the dependence on the other SUSY parameters is weaker.
A problem is that since colored sparticles are assumed to be the heaviest of all sparticles,
their production may be kinematically suppressed. The gluon luminosity falls off rapidly
with the centre-of-mass energy squared ŝ [30]. So as a rule of thumb, low mass gluinos will
dominantly be produced from a gg initial state, and if the gluino is heavier it will be produced
from qq̄.
From the gluino pair we will see a high number of jets from its cascade decay, ending in
an LSP that carries off energy. Therefore we require a high number of jets and a large
amount of missing energy in our detectors when we look for SUSY. This is a very general and
model independent signal of SUSY. Later we will look at more specific final states. These
can be used to gather information about the masses of the different Supersymmetric particles.
The different squarks produced also have their own decay patterns; the cross section and final
state vary with which squark pair is produced. Figures 3.1 to 3.3 show which particles the
leptons decay from in the SU1, SU2, SU3, SU4, SU6 and SU8 samples, and figures 3.4 and 3.5
show the two mothers in a Same Sign selection (χ̃±_1, W and τ are important in this case).
Co-annihilation region SU1,SU8.1
This is the left region in the m0 − m1/2 plane, with low m0 and higher m1/2 values. The name
reflects the fact that χ̃0_1 τ̃1 and τ̃1 τ̃1 co-annihilate. Moving further to the left in the m0 − m1/2
plane one enters a space that is excluded by experiments, because mτ̃ < mχ̃0 so that the τ̃ would
be the lightest Supersymmetric particle [30]. If this were the case in nature, a charged
LSP would already have been observed by experiments.
For SU1, χ̃0_1 and l̃ are nearly degenerate, while for SU8.1 all slepton masses except that of
the τ̃1 are moved away from the χ̃0_1 mass. See table 3.2.
Figure 3.1: Decay mothers of electrons and muons in SU1 and SU8. In the bin named ”0” the
lepton had no truth match, and in the bin named ”-” the leptons came from quarks or mesons
(mostly D- and B-mesons).
Focus point, SU2
This point has large m0 and small m1/2. (Larger m1/2 is called the hyperbolic branch.)
The region with very high values of m0 is excluded since electroweak symmetry is not broken
properly there. Just to the left of this forbidden area, where electroweak symmetry breaking is
restored, the focus point is located. The value of µ² is therefore small, and so
|µ| ∼ mχ̃±_1 ∼ mχ̃0_1. The LSP χ̃0_1 becomes higgsino-like, and annihilation to WW, ZZ
and Zh becomes large [30].
Figure 3.2: Decay mothers of electrons and muons in SU2 and SU3. In the bin named ”0” the
lepton had no truth match, and in the bin named ”-” the leptons came from quarks or mesons
(mostly D- and B-mesons).
Table 3.1 shows that m0 >> m1/2 for SU2. By applying equations 3.5-3.9, it becomes clear
that the squarks and sleptons are heavy and degenerate. The gluino is lighter than the scalars, so
a final state with leptons from gluino production is suppressed, since such two-body decay
modes are not possible. Direct gaugino production will instead be the source of leptons.
Bulk region, SU3
In this region both m0 and m1/2 are small. The most important contribution to neutralino
annihilation is χ̃01 χ̃01 → l¯l.
Low mass region, SU4
This is a low mass point close to the Tevatron bound.
Funnel region, SU6
Point where LSP annihilation is enhanced due to the resonance condition 2mχ̃01 ≈ mA .
Figure 3.3: Decay mothers of electrons and muons in SU4 and SU6. In the bin named ”0” the
lepton had no truth match, and in the bin named ”-” the leptons came from quarks or mesons
(mostly D- and B-mesons).
3.4 List of Data Sets And Cross Sections
Table 3.3 gives the full overview of the data sets used in this thesis, with the corresponding
precuts. The effective cross section σ_eff given in table 3.3 is calculated from the production
cross section σ by the relation σ_eff = σ × ε_EF, where ε_EF is the event filter efficiency.
These numbers can be found on the Wiki page [3], which also gives the K-factors for
recalculating the cross sections from Leading Order to Next-to-Leading Order.
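The normalization chain can be sketched as follows; the function names, the cross section, the filter efficiency and the luminosity value below are all invented for illustration.

```python
# Effective cross section and expected yield (sketch).
# sigma_eff = sigma * EF_efficiency (times an optional NLO K-factor),
# and the expected event count for integrated luminosity L is N = sigma_eff * L.
def effective_cross_section(sigma_pb, ef_efficiency, k_factor=1.0):
    # k_factor rescales a Leading Order cross section to Next-to-Leading Order
    return sigma_pb * ef_efficiency * k_factor

def expected_events(sigma_eff_pb, lumi_fb):
    return sigma_eff_pb * lumi_fb * 1000.0  # 1 pb = 1000 fb

sigma_eff = effective_cross_section(sigma_pb=40.0, ef_efficiency=0.5)
print(sigma_eff)                          # 20.0 pb
print(expected_events(sigma_eff, 0.001))  # yield for 1 pb^-1 of data
```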
Two generators are commonly used for the background production: pythia and alpgen.
Generally alpgen is said to be the most appropriate for backgrounds in Supersymmetry studies [6].
However, the pythia samples have higher statistics and are used for W → lν and Z → ll in this
thesis, instead of the alpgen samples used in the CSC study. In appendix B we have studied
how differently these two samples perform, and we saw that they differ by a factor of five
(alpgen having more events left after some baseline cuts defined in appendix B).
Precut
Some of the data sets have precuts at generator level, see the rightmost columns in table 3.3.
This has to be adjusted for in all data sets that are used, in order to compare contributions
from the various physics processes to a distribution. In the W and Z samples E/T is larger
than 80 GeV, pT of the hardest jet is larger than 80 GeV, and pT of the second hardest jet
is larger than 40 GeV.
The cuts have been applied at truth level, which is before detector simulation and
reconstruction. With detector simulation there will be a smearing effect due to uncertainties
in the energy measurements. An E/T of 80 GeV at generator level will correspond to
65 GeV < E/T < 95 GeV after detector simulation. To account for this, the precut that will be
applied to all samples
Figure 3.4: Mother combination in Same Sign events after lepton isolation for SU1, SU2,
SU3 and SU4.
in all plots in this thesis is (unless otherwise stated)

E_T^miss > 100 GeV ,    p_T^jet1 > 100 GeV ,    p_T^jet2 > 50 GeV    (3.10)
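Applied to an event record, the common precut of equation 3.10 is a simple conjunction of thresholds; the dictionary keys are an assumption for the sketch.

```python
# The common precut of equation 3.10, applied to every sample (sketch).
def passes_precut(event):
    return (event["met"] > 100.0 and      # E_T^miss > 100 GeV
            event["pt_jet1"] > 100.0 and  # hardest jet pT > 100 GeV
            event["pt_jet2"] > 50.0)      # second hardest jet pT > 50 GeV

events = [
    {"met": 120.0, "pt_jet1": 150.0, "pt_jet2": 60.0},  # passes
    {"met": 120.0, "pt_jet1": 150.0, "pt_jet2": 40.0},  # fails second-jet cut
]
print([passes_precut(ev) for ev in events])  # [True, False]
```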
Figure 3.5: Mother combination in Same Sign events, after lepton isolation, for SU6 and SU8.
The T1 Sample
The semileptonic top sample T1 is generated with MC@NLO, and must be treated a bit
differently from the other samples in the code, since it contains both negative and positive
event weights. MC@NLO uses a parton density library to combine the Monte Carlo event
generator with Next-to-Leading-Order (NLO) calculations of rates for QCD processes. The
parton distributions are weighted negative and positive in order to apply the NLO correction
in a correct manner. The parton configurations are input to another event generator such as
herwig. The total number of events in the sample, used in the normalization factor, is the
number of events with positive event weight minus the number of events with negative event
weight, shown in equation 3.11. In the data set used here, 320064 of the 1194200 events had
negative weight. The right way to treat this with respect to the histograms and detector
simulation is to let positive-weight events count as real events and negative-weight events as
counter events. The counter events are real events in the detector simulation, but they are
subtracted from the histogram bins in all distributions.
N_tot(T1) = N_tot(eventWeight > 0) − N_tot(eventWeight < 0)    (3.11)
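The counter-event bookkeeping of equation 3.11 can be sketched as follows; the tiny histogram helper is invented for illustration and stands in for whatever histogramming tool is actually used.

```python
# Normalization with MC@NLO event weights (equation 3.11): negative-weight
# events enter histograms as counter events and subtract from bin contents.
def effective_event_count(weights):
    n_pos = sum(1 for w in weights if w > 0)
    n_neg = sum(1 for w in weights if w < 0)
    return n_pos - n_neg

def fill_histogram(bins, values, weights):
    """Fill with +1 for positive-weight events and -1 for counter events."""
    hist = [0] * (len(bins) - 1)
    for v, w in zip(values, weights):
        for i in range(len(bins) - 1):
            if bins[i] <= v < bins[i + 1]:
                hist[i] += 1 if w > 0 else -1
    return hist

weights = [1.0, 1.0, -1.0, 1.0]
print(effective_event_count(weights))                           # 3 - 1 = 2
print(fill_histogram([0, 50, 100], [10, 20, 30, 70], weights))  # [1, 1]
```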
Sample  Process                   Generator  σ_eff (pb)  E/T    p1T  p2T
8090    J4 (140-280 GeV)          pythia     916.40      E100   80   40
8091    J5 (280-560 GeV)          pythia     655         E100   80   40
8092    J6 (560-1120 GeV)         pythia     67.42       E100   80   40
8093    J7 (1120-2240 GeV)        pythia     5.3         E100   80   40
8094    J8 (> 2240 GeV)           pythia     0.0221      E100   80   40
5985    WW                        herwig     39.05       -      -    -
5986    ZZ                        herwig     2.83        -      -    -
5987    WZ                        herwig     14.06       -      -    -
8190    Z → νν + n jets           pythia     41.33       80     80   40
8191    Z → ττ + n jets           pythia     4.50        80     80   40
8194    Z → ee + n jets           pythia     46.20       80     80   40
8195    Z → µµ + n jets           pythia     9.61        80     80   40
8270    W → eν + n jets           pythia     49.05       80     80   40
8271    W → µν + n jets           pythia     28.64       80     80   40
8272    W → τν + n jets           pythia     55.91       80     80   40
5200    T1 tt̄ → e,µ,τ + jets     MC@NLO     450         -      -    -
5204    tt̄ full hadronic         MC@NLO     383         -      -    -
5401    SU1                       jimmy      10.86       -      -    -
5402    SU2                       jimmy      7.18        -      -    -
5403    SU3                       jimmy      27.68       -      -    -
6400    SU4                       jimmy      402.19      -      -    -
5404    SU6                       jimmy      6.07        -      -    -
5406    SU8                       jimmy      8.70        -      -    -

Table 3.3: Samples used in this study. The quoted cross sections σ_eff [1] include the filter
efficiency (EF) and the Next-to-Leading-Order K-factor. Generator-level cuts (in GeV) are
given in the three rightmost columns.
Chapter 4
Lepton Isolation
This chapter is an investigation of how pure the leptons in the samples are. The aim of this
study is not in the first place to distinguish signal from background, but rather to know with
what confidence the selected isolated leptons really stem from the hard interaction in the event.
This is the same for signal and background. It does not make sense to keep events with
secondary or fake leptons and interpret them as di-lepton events. The CSC 5 [6] selection
will be referred to as the standard selection of the leptons. This study aims at investigating
the possibility of obtaining purer samples by performing more optimized cuts.
4.1 Standard Object Definition: Electrons, Muons, Jets
Object selection in the ntuples was done so that the possibility to study lepton isolation
was left open. This means that the standard object definition had not yet been applied, leaving
the freedom to study and optimize the cuts. The standard object selection is summarized in
table 4.1, and so an important step consists of reproducing the CSC results (done in chapters
5.2 and 5.3 for the specific analyses there).
When performing object selections, it is important to follow an order corresponding to the
initial object construction. Two points are important for leptons: isolation in calorimeter
energy, and overlap with jets. When a lepton is part of a jet there will be more activity around
it, resulting in more active cells in the calorimeter. Electrons and muons are treated separately
because they are detected and reconstructed differently, since their interactions with matter are
slightly different. Bremsstrahlung effects are more important for electrons than for the much
heavier muons, and muons, as minimum ionizing particles, will have hits in the muon
spectrometer in addition to the inner detector and the calorimeters.
Overlap removal of jets is only done when checking the angular distance between electrons and
jets. Isolation cuts are done before overlap removal, to avoid a scenario where a non-isolated
electron, probably part of a jet, is responsible for removing the jet itself; the electron would
then be removed afterwards, so that both the electron and the jet are lost. Since the jets will
be used in the analysis, it is of interest to keep the jet in the event. When it comes to
overlap removal, electrons have priority. Working with EventView¹, electron candidates are
first picked from the electron container. These are selected by setting some isolation cuts.
1
An ATLAS analysis framework.
Then jets are selected from the jet container. Some of the jets will be the same physical
objects as the electrons, only reconstructed with a jet algorithm instead. Ideally one object
should only appear once in an event. So if a jet occurs within angular distance (see equation
4.1) ∆R < 0.2 of an electron that is already isolated, the jet is removed while the electron is
kept. In the end the angular distance from each lepton to the nearest jet is checked: it is
required to have ∆R(e,µ−jet) > 0.4. If the distance is smaller, we remove the lepton (both
electrons and muons) [6].
1. Veto events with electrons with 1.37 < |η| < 1.52
2. Leptons: pT > 10 GeV, |η| < 2.5; Jets: pT > 20 GeV, |η| < 2.5
3. Isolation: etcone20 < 10 GeV for electrons and muons
4. Overlap removal (∆R(e−jet) < 0.2 → remove jet)
5. Lepton angular distance to jet (∆R(e,µ−jet) < 0.4 → remove lepton)

Table 4.1: Order of object selections, with the standard CSC values.
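Steps 2-5 of table 4.1 can be sketched for electrons and jets as follows. This is an illustration only: the object attributes are assumed names, the crack veto (step 1) and muons are omitted for brevity, and the real selection runs inside the EventView framework.

```python
import math

# Sketch of the selection order in table 4.1 (electrons and jets only).
def delta_r(a, b):
    dphi = abs(a["phi"] - b["phi"])
    if dphi > math.pi:                   # wrap the azimuthal difference
        dphi = 2.0 * math.pi - dphi
    return math.hypot(dphi, a["eta"] - b["eta"])

def select(electrons, jets):
    # steps 2-3: kinematic and calorimeter-isolation cuts
    ele = [e for e in electrons
           if e["pt"] > 10.0 and abs(e["eta"]) < 2.5 and e["etcone20"] < 10.0]
    jets = [j for j in jets if j["pt"] > 20.0 and abs(j["eta"]) < 2.5]
    # step 4: remove jets overlapping an isolated electron (dR < 0.2)
    jets = [j for j in jets if all(delta_r(e, j) >= 0.2 for e in ele)]
    # step 5: remove leptons still within dR < 0.4 of a surviving jet
    ele = [e for e in ele if all(delta_r(e, j) >= 0.4 for j in jets)]
    return ele, jets

electrons = [{"pt": 25.0, "eta": 0.1, "phi": 0.0, "etcone20": 3.0}]
jets = [{"pt": 40.0, "eta": 0.15, "phi": 0.05},   # same object as the electron
        {"pt": 60.0, "eta": 2.0, "phi": 2.0}]     # well separated, kept
ele, jets = select(electrons, jets)
print(len(ele), len(jets))  # 1 1
```

The ordering matters: isolating the electron before step 4 prevents a jet-like electron from removing the jet it sits inside, as discussed above.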
The reason for vetoing events with |η| ∼ 1.45 is a bug in the reconstruction algorithm that was
used for these samples, giving decreased energy resolution in this area. When an event is
vetoed, the whole event is removed from the analysis. Pseudorapidity η is a function of the
angle θ relative to the beam axis: when θ goes to zero, η goes to infinity, and when θ goes to
90°, η goes to zero. Etcone holds the summed energy within a distance ∆R around the lepton,
with the energy of the lepton itself subtracted, and pT is the transverse momentum of the
lepton.
∆R = √((φ1 − φ2)² + (η1 − η2)²) ,    η = − ln[tan(θ/2)]    (4.1)
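Equation 4.1 translates directly into code; the function names are invented, and this is the raw formula of the equation (without wrapping the azimuthal difference into [0, π]).

```python
import math

# Pseudorapidity and angular distance from equation 4.1 (sketch).
def pseudorapidity(theta):
    return -math.log(math.tan(theta / 2.0))

def delta_r(phi1, eta1, phi2, eta2):
    return math.sqrt((phi1 - phi2)**2 + (eta1 - eta2)**2)

print(pseudorapidity(math.pi / 2))   # theta = 90 deg gives eta = 0
print(delta_r(0.0, 0.0, 0.3, 0.4))   # approximately 0.5
```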
4.2 Study Of Lepton Isolation
In this thesis, lepton channels for discovering Supersymmetry are the object of investigation.
Therefore it is important to know that the lepton objects are good leptons. A reconstructed
lepton is called good, or primary, if it comes from the hard collision. That means it stems from
gauge boson or SUSY particle decays. A primary tau-lepton decaying leptonically is also
counted as a mother of primary leptons, as in the definition used in the CSC study. Only
electrons and muons are included in the term “lepton”.
Secondary leptons are not interesting for the analysis. They will typically come from B-mesons,
D-mesons, light mesons or baryons. There will also be fake leptons, often a jet misidentified as
a lepton. In this analysis fake leptons will also be labeled as “secondary” leptons, since these
too are leptons that we are not interested in. The fake leptons are leptons that do not have a
matching truth lepton within a distance ∆R < 0.02; this matching was already done in the
ntuples. Usually non-matched leptons will match with a jet or a tau instead [6]. A typical
source of fake muons is punch-through of a heavy flavor jet to the muon chambers. Fake
electrons are often misidentified light flavor- and τ-jets. These leptons are expected to have a
higher number of tracks around their trajectory, and hence poorer isolation in energy in the
electromagnetic calorimeter. The definitions of primary and secondary leptons are summarized
in table 4.2.
Lepton type          Decay mother
Primary (prompt)     Z, W, SUSY, τ_prompt
Secondary (other)    all other (meson, baryon or fake lepton)

Table 4.2: Definition of primary and secondary leptons.
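The classification of table 4.2 can be sketched as a small lookup. The string labels for the truth mothers are a simplification for illustration; in the real ntuples the mother is identified through the truth record, and fakes are the leptons with no truth match within ∆R < 0.02.

```python
# Sketch of the truth classification in table 4.2.
PRIMARY_MOTHERS = {"Z", "W", "SUSY", "tau_prompt"}

def lepton_type(truth_mother):
    # truth_mother is None for fake leptons (no truth match)
    if truth_mother in PRIMARY_MOTHERS:
        return "primary"
    return "secondary"   # mesons, baryons and fakes

print(lepton_type("W"))        # primary
print(lepton_type("B-meson"))  # secondary
print(lepton_type(None))       # secondary (fake)
```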
The mSUGRA samples have a high multiplicity of leptons in the final state. If we succeed
in rejecting secondary leptons, we will effectively reduce the Standard Model background,
since it has a smaller multiplicity of primary leptons.
One advantage of simulated data is that it is possible to go back and check where the
reconstructed objects actually came from. In the ntuples the reconstructed data has been
matched with the truth data. For the lepton objects that have this match, the information on
the complete decay chain can be obtained. The lepton will often undergo Bremsstrahlung,
and this is also recorded in the decay chain, so the first particle that is not the lepton
itself will be called the mother of the lepton. When the primary and secondary leptons have
been divided into two groups, properties such as the transverse momentum and the energy
inside a cone along the trajectory can be compared for the two groups. This is useful
information that can be applied to real data, where there is no possibility of checking where
the leptons originated. The goal is to find a selection criterion for the lepton candidates such
that primary leptons are kept and secondary leptons are rejected. One point to keep in mind,
however, is that this is Monte Carlo simulated data, and the distributions for real data could
be quite different.
The detector collects information about the leptons from the tracking, the calorimeters and the
muon chambers (for the muons). Primary leptons are usually well isolated in calorimeter
energy. Primaries and secondaries are plotted separately for SU3, with tt̄ (T1) as a control
data set. The first column of table 4.7 shows how many primaries and secondaries the data
sets contain originally. For SU3, 16% of the electrons are secondary and 50% of the muons
are secondary, and for the semileptonic tt̄ set there are 8% secondary electrons and 38%
secondary muons. These percentages can be improved a lot. Muons arise more frequently
inside jets.
Figure 4.1 shows the energy within a distance ∆R < 0.2 of the lepton. The primary leptons
are peaked at lower values of etcone20 than the secondaries. This is one property that will
be used to distinguish primary from secondary leptons. The variable etcone20 is the sum of
the energy deposits within an angular distance ∆R < 0.2 of the lepton. The mean energy loss
due to ionization is subtracted for muons, and cells associated with the electromagnetic
cluster are excluded for electrons. This means that a low value corresponds to low activity
around the track, meaning good spatial isolation from neighboring objects. A secondary
lepton within a jet will have more activity around it than a primary lepton.
It is possible to use either track or calorimeter based isolation, or a combination of the two.
Here calorimeter isolation is investigated.
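To make the meaning of etcone20 concrete, the cone sum just described can be sketched in a few lines of Python. This is a minimal illustration only, assuming leptons and calorimeter cells are plain dictionaries with eta, phi and et keys (a hypothetical representation; the real analysis reads ATLAS ntuples and applies the ionization and cluster corrections mentioned above):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance between two detector objects."""
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi  # wrap phi difference into [0, pi]
    return math.hypot(eta1 - eta2, dphi)

def etcone(lepton, calo_cells, cone=0.2):
    """Sum the transverse energy of calorimeter cells within
    Delta R < cone of the lepton, excluding the lepton's own deposit."""
    total = 0.0
    for cell in calo_cells:
        close = delta_r(lepton["eta"], lepton["phi"],
                        cell["eta"], cell["phi"]) < cone
        if close and not cell.get("belongs_to_lepton", False):
            total += cell["et"]
    return total
```

A well isolated primary lepton would then return a small (or, once the lepton's own energy is subtracted, even negative) etcone20 value, while a secondary lepton inside a jet picks up the surrounding jet activity.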
In the ntuples used for this analysis, lepton isolation has not yet been applied, so the effect of different isolation cuts will be studied. The standard object definitions are given in table 4.1 [6].

Figure 4.1: The EtCone variable for primary and secondary leptons, with tau counted as primary mother. Primaries peak at low etcone values; secondaries exceed primaries at etcone20 ∼ 8 GeV. Negative etcone20 values are present because the energy of the lepton itself is subtracted from the sum.

Figure 4.2: The EtCone variable for primary and secondary leptons, with tau counted as secondary mother. The peak for primaries at low etcone values is decreased, especially in the mSUGRA case where a large fraction of stau decays to tau is expected. Instead the secondaries acquire a peak, and exceed the primaries at etcone ∼ 7 GeV.
Looking at figure 4.1, it becomes clear that it is a good idea to demand that the energy
around the lepton be smaller than a given value. The plot shows that primaries have a
peak at etcone around 2 GeV, while the secondary distribution starts at a higher value, and at high
energy the secondaries have more events than the primaries. If the tau lepton is considered a secondary
mother, the secondaries also get their peak at low etcone20 values, just like the primaries.
Leptons from a primary τ mother seem to have good calorimeter isolation, which is natural
since a tau decaying leptonically (τ → lνl) does not include a jet. A decision whether to
count τ as primary or secondary mother has to be taken. Primary τ's decaying hadronically
will not be taken into account here, but leptons from a primary τ are kept as interesting
leptons: they do come from the hard collision, only via a one step longer chain. The
primary interest is to know whether there was a SUSY particle in the decay chain, not to
select a particular chain.
The standard object definition is etcone20 < 10 GeV. To obtain an estimate of the
best cut value, the procedure is to scan over different cut values and look at
how the cut affects the two categories of leptons. The quality of a cut is judged by its
efficiency for keeping primary leptons (ǫp) and its capability to reject secondary leptons (rs).
The goal is of course to get the primary efficiency and secondary rejection as high as possible.
The two quantities are defined as

ǫp = (Σ primary with isolation cut) / (Σ primary without isolation cut)

rs = 1 − (Σ secondary with isolation cut) / (Σ secondary without isolation cut)    (4.2)
The denominator is taken to be the number of leptons when all the standard object definitions
have been applied except calorimeter isolation; the numerator additionally includes the isolation cut.
The quantities ǫp and rs are defined such that the best case is to have both as close to
1 as possible, in which case close to 100% of the primaries are kept and nearly 100% of the
secondaries are rejected.
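As a sketch, the two quantities of equation 4.2 can be computed directly from the four lepton counts; the counts used below are hypothetical examples, not numbers from the analysis:

```python
def efficiency_and_rejection(n_prim_cut, n_prim_all, n_sec_cut, n_sec_all):
    """Primary efficiency eps_p and secondary rejection r_s (eq. 4.2).
    The 'all' counts apply every standard object definition except the
    calorimeter isolation; the 'cut' counts include the isolation cut."""
    eps_p = n_prim_cut / n_prim_all
    r_s = 1.0 - n_sec_cut / n_sec_all
    return eps_p, r_s

# Hypothetical counts: 90 of 100 primaries and 10 of 50 secondaries survive.
eps_p, r_s = efficiency_and_rejection(90, 100, 10, 50)  # -> (0.9, 0.8)
```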
Correlation of etcone20 with lepton pT and with ∆R

Before performing the scan, two important observations should be made: the relation
between etcone20 and the pT of the lepton, and the relation between etcone20 and the angular
distance (∆R) between a jet and a lepton.
Figure 4.3: Scatter and profile plots of etcone20 versus lepton pT for SU3 before the ∆R selection. Electrons in the left plot and muons in the right plot.

Figure 4.4: Scatter and profile plots of etcone20 versus lepton pT for the semileptonic tt̄ sample before the ∆R selection. Electrons in the left plot and muons in the right plot.
Figures 4.3 and 4.4 show the relation between etcone and pT of electrons and muons for
SU3 and tt̄. The light dots represent primaries and the dark dots secondaries. The
red circles show the mean value of etcone at a given pT for primaries, and the blue
triangles show the same relation for secondaries. The tendency is that primary leptons lie
in a slice of the plot with low etcone for all pT, while the secondaries lie in the slice with
low pT for all etcone. Muons especially are distributed this way, but the electrons show the
same tendency. When choosing a cut in this distribution, one possibility is two separate
one-dimensional cuts, leaving a rectangular box as the accepted lepton region. Alternatively
the cut can be placed on a function of the two variables. In this thesis the cut will be made on a
linear function of the two variables (etcone = a × pT). The optimal value for the slope a will
be investigated, and for that purpose the normalized etcone is defined:
NORMetcone = etcone / pT (= a)    (4.3)

where etcone is the etcone20 of the lepton and pT is the transverse momentum of the same lepton.
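Equation 4.3 amounts to a pT-dependent isolation requirement; a minimal sketch, with the lepton represented as a hypothetical dictionary holding etcone20 and pt in GeV:

```python
def norm_etcone(etcone20, pt):
    """Normalized calorimeter isolation of eq. 4.3."""
    return etcone20 / pt

def passes_norm_cut(lepton, a):
    """Accept the lepton if etcone20 < a * pT, i.e. if the
    normalized etcone lies below the slope a."""
    return norm_etcone(lepton["etcone20"], lepton["pt"]) < a
```

A fixed etcone cut treats a 20 GeV and a 400 GeV lepton identically, whereas this linear cut tolerates more cone activity around harder leptons.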
Figure 4.5: Scatter plots of etcone20 versus lepton pT for SU3 after the ∆R selection. Electrons in the left plot and muons in the right plot.

Figure 4.6: Scatter plots of etcone20 versus lepton pT for the semileptonic tt̄ (T1) sample after the ∆R selection. Electrons in the left plot and muons in the right plot.
Figures 4.5 and 4.6 show the etcone-pT scatter plots after the ∆R selection defined in points
4 and 5 of table 4.1 has been applied. Compared to the distributions without the angular cut,
it becomes clear that the etcone and ∆R cuts are correlated. For the muons only a small area
of secondary leptons is left, most of them with etcone less than 10 GeV, which is the standard
isolation cut. This is to be expected, since leptons that are too close to a jet are also the
leptons with high energy activity around them, causing energy deposits in the surrounding
calorimeter cells.
To investigate this correlation and figure out the most efficient variable to cut on would be
a study of its own. Here the ∆R cut is held fixed while scanning over NORMetcone, and the
result is compared with an ordinary etcone scan.
4.2.1 Scan Isolation Cut On EtCone20 Variable
Calorimeter isolation is varied in the range 1-14 GeV in steps of 0.5 GeV. Figures 4.7 and 4.8
show ǫp × rs for electrons (left) and muons (right) for etcone20 and NORMetcone, respectively.
Figure 4.7: Scan over the etcone20 variable. Electrons are shown in the left plot and muons in the right.

Figure 4.8: Scan over etcone20 normalized to the pT of the lepton. The left plot shows electrons and the right plot shows muons.
Assuming that the best cut is the one maximizing ǫp × rs, the plots show that a NORMetcone
cut can perform better than a single etcone cut. For electrons, SU3 peaks at 0.63 for etcone
and 0.70 for NORMetcone, while tt̄ (T1) peaks at 0.64 for etcone and 0.76 for NORMetcone.
The muons do not show the same effect, but it should be kept in mind that muons are removed
by the ∆R selection, and even though the cut is performed before this selection, the numbers
are read out at the very end of the procedure. One also has to consider how many primaries
can be tolerated to cut away. The recommended cut of etcone < 10 GeV is far from the peak
value, see figure 4.7. Another idea could be to assume that the signal is proportional to ǫp
and the background to 1 − rs, so that by a significance argument, S/√B, the optimization
should be done with respect to ǫp/√(1 − rs) instead of ǫp × rs.

Here the focus will be on keeping the signal efficiency high (approximately as with the
definition in table 4.1) and improving the background rejection with the NORMetcone variable.
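The scan and the two figures of merit discussed above can be sketched as follows, assuming lists of the isolation-variable values for truth-matched primary and secondary leptons (hypothetical inputs, for illustration only):

```python
import math

def fom_product(eps_p, r_s):
    """Figure of merit eps_p * r_s."""
    return eps_p * r_s

def fom_significance(eps_p, r_s):
    """Alternative figure of merit eps_p / sqrt(1 - r_s), assuming
    signal ~ eps_p and background ~ (1 - r_s)."""
    return eps_p / math.sqrt(1.0 - r_s)

def scan_cut(primaries, secondaries, cut_values, fom):
    """Return (cut, figure of merit) maximizing fom. The input lists
    hold the isolation-variable value of each lepton."""
    best = None
    for cut in cut_values:
        eps_p = sum(v < cut for v in primaries) / len(primaries)
        r_s = 1.0 - sum(v < cut for v in secondaries) / len(secondaries)
        if best is None or fom(eps_p, r_s) > best[1]:
            best = (cut, fom(eps_p, r_s))
    return best
```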
Figure 4.9: Scan of the primary efficiency ǫp for NORMetcone20. Electrons in the left plot and muons in the right plot.

Figure 4.10: Scan of the secondary rejection rs for NORMetcone20. Electrons in the left plot and muons in the right plot.
Figures 4.9 and 4.10 show primary efficiency and secondary rejection for a NORMetcone20
scan. The corresponding plots for the single etcone20 variable are added in appendix C for
completeness.

Tables 4.3 and 4.4 summarize the efficiencies and rejection factors for SU1, SU2, SU3 and
tt̄ (T1) for etcone < 10 GeV, etcone < 5 GeV and NORMetcone < 0.14, for electrons and muons
respectively. A NORMetcone cut was chosen such that both the rejection and the efficiency are
higher than for the 5 GeV etcone cut. NORMetcone clearly performs better than etcone alone:
to get an equally high efficiency with etcone the cut has to be looser, and to get an equally
high rejection the cut has to be harder, and no cut is both harder and looser at the same time.
A harder cut rejects more of both secondaries and primaries, while a looser cut keeps more of both.

NORMetcone < 0.14 fulfills the criteria above for electrons, where the efficiency for keeping
primaries is ∼90% for the different data-sets, except SU2, which has a lower efficiency of 83%.
The secondary rejection for this cut is ∼66-80%, somewhat higher than for the standard
cut (etcone20 < 10 GeV), where only ∼40% is rejected, while the efficiency is similar to the
standard cut. The performance for muons does not improve as much compared to the standard
criteria; a loose cut of NORMetcone20 < 0.26 is chosen for the muons.
ELECTRONS        etcone20 < 10   etcone20 < 5   NORMetcone20 < 0.14
SU1  ǫp :        0.92            0.90           0.91
     rs :        0.45            0.67           0.74
SU2  ǫp :        0.84            0.81           0.83
     rs :        0.40            0.66           0.81
SU3  ǫp :        0.91            0.88           0.89
     rs :        0.40            0.60           0.67
T1   ǫp :        0.92            0.90           0.91
     rs :        0.34            0.56           0.74

Table 4.3: Comparison of the standard etcone20 cut with the new choice. The etcone20 values are given in GeV.

MUONS            etcone20 < 10   etcone20 < 5   NORMetcone20 < 0.26
SU1  ǫp :        0.89            0.88           0.88
     rs :        0.97            0.97           0.98
SU2  ǫp :        0.78            0.78           0.80
     rs :        0.95            0.96           0.96
SU3  ǫp :        0.88            0.87           0.88
     rs :        0.97            0.98           0.98
T1   ǫp :        0.89            0.88           0.89
     rs :        0.93            0.94           0.95

Table 4.4: Comparison between the etcone20 cuts and the normalized etcone20 cut. The etcone20 values are given in GeV.
For the rest of this thesis the isolation criteria will be as follows
• electrons: NORMetcone20 < 0.14
• muons: NORMetcone20 < 0.26
4.2.2 Scan pT Cut For Leptons

Figure 4.11: pT of primary and secondary leptons in SU3 and tt̄ (T1).
By using NORMetcone20, the transverse momentum of the lepton has already been taken into
account. It is therefore interesting to investigate what effect different cuts on the lepton pT
have on the ability to keep primary leptons and reject secondary leptons. This time the
standard object definitions for leptons and jets given in table 4.1 are applied, but with the
calorimeter isolation criteria found in the previous section: for electrons normalized etcone20 < 0.14
and for muons normalized etcone20 < 0.26. The additional cut on the lepton pT is applied
after this procedure. Figure 4.11 shows the distribution of primary and secondary lepton pT
before any cut on pT.

The scan runs over pT cut values from 10 to 35 GeV in steps of 1 GeV. Note the difference from
last time: now the cut is applied after the object selection, not inside it as for the etcone20
cut. Figures 4.12 to 4.17 show signal efficiency and background rejection for electrons and
muons in the data-sets SU1, SU2, SU3 and T1.

The muons in all four samples behave in a quite similar way. The signal efficiency falls off
roughly linearly with the pT cut value, while the background rejection has a steeper slope at
low pT and flattens out at higher pT. The cut is placed close to where the slope of the rejection
equals the slope of the efficiency; a harder cut would just reject equal amounts of primaries
and secondaries. By requiring the muons to have transverse momentum above 15 GeV, ∼88% of
the primary muons are kept compared to the standard choice pT > 10 GeV, while ∼42% more
secondaries are rejected than with the standard cut. Table 4.6 summarizes the effect of pT cuts
at 15, 20 and 25 GeV relative to the 10 GeV cut. For electrons, approximately 90% of the
primaries are kept and 20% of the secondaries are rejected; table 4.5 summarizes the effect of
pT cuts at 15, 20 and 25 GeV relative to the 10 GeV cut.
Figure 4.12: e: Efficiency vs pT cut value.
Figure 4.13: µ: Efficiency vs pT cut value.
Figure 4.14: e: Rejection vs pT cut value.
Figure 4.15: µ: Rejection vs pT cut value.
Figure 4.16: e: Signal efficiency vs background rejection for the pT cut.
Figure 4.17: µ: Signal efficiency vs background rejection for the pT cut.

Table 4.7 shows an overview of how many leptons of each kind are left after the various
definitions. All the SUSY data-sets are included, in addition to semileptonic tt̄ (T1), which
will turn out to be the most important background. The first column shows the number of
leptons before any selection, the second column the number after the standard definitions,
and the third column after the normalized etcone20 cut. Columns four and five apply the
extra pT cut in addition to the NORMetcone20 cut.
pT e      SU1 ǫp : rs     SU2 ǫp : rs     SU3 ǫp : rs     T1 ǫp : rs
>15       0.90 : 0.120    0.93 : 0.27     0.90 : 0.15     0.92 : 0.34
>20       0.81 : 0.31     0.86 : 0.41     0.80 : 0.27     0.82 : 0.48
>25       0.72 : 0.39     0.78 : 0.55     0.72 : 0.36     0.73 : 0.59

Table 4.5: Effect of the additional pT cut on electrons. Cut values are in GeV. The values are relative to pT > 10 GeV.

pT µ      SU1 ǫp : rs     SU2 ǫp : rs     SU3 ǫp : rs     T1 ǫp : rs
>15       0.88 : 0.45     0.91 : 0.45     0.86 : 0.51     0.90 : 0.51
>20       0.77 : 0.62     0.81 : 0.72     0.76 : 0.72     0.79 : 0.76
>25       0.68 : 0.73     0.73 : 0.84     0.67 : 0.83     0.69 : 0.88

Table 4.6: Effect of the additional pT cut on muons. The ratios show how much of the primary/secondary muons are kept/rejected compared to the choice pT > 10 GeV.
Table 4.7 shows that there is not much to gain in going from a pT cut of 15 GeV to 20 GeV:
approximately equal fractions of primaries and secondaries are cut away. For the rest of the
analysis the NORMetcone20 cut will be used with an additional cut of pT > 15 GeV on the leptons.

With this isolation choice we are left with 2.2% secondary electrons and 1.1% secondary
muons in the tt̄ (T1) sample, compared to 3.1% secondary electrons and 2.0% secondary
muons left after the NORMetcone20 cut alone. The standard object selections leave 7.8%
secondary electrons and 2.8% secondary muons. For SU3, 10.7% secondary electrons and 1.6%
secondary muons are left after the standard definitions; with the new selection, 5.6% secondary
electrons and 0.1% secondary muons remain. The trend is the same for all samples.

From now on an isolated lepton will have

pT > 15 GeV
NORMetcone20 < 0.14 (electrons)
NORMetcone20 < 0.26 (muons)    (4.4)

together with the angular distance criteria and overlap removal of table 4.1.
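The final isolation choice of equation 4.4 can be expressed as a small predicate; a sketch assuming a hypothetical dictionary representation of a lepton (pt and etcone20 in GeV, flavour "e" or "mu"):

```python
def is_isolated(lepton):
    """Isolation choice of eq. 4.4: pT > 15 GeV plus a flavour-dependent
    NORMetcone20 cut (0.14 for electrons, 0.26 for muons). The angular
    distance criteria and overlap removal of table 4.1 are assumed to
    have been applied already."""
    if lepton["pt"] <= 15.0:
        return False
    max_norm = 0.14 if lepton["flavour"] == "e" else 0.26
    return lepton["etcone20"] / lepton["pt"] < max_norm
```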
4.3 Lepton Purity
It is interesting to find some correlation between the lepton purity, i.e. the fraction of prompt
or primary leptons, and the lepton pT hardness. The results are summarized in table 4.8 as
a function of the lepton multiplicity for the standard cuts. Both SU3 and tt̄ are studied. For
events with one lepton, the purity is 94% for SU3 and 97% for tt̄. For events with
more than one lepton, the hardest-pT lepton has the highest purity. tt̄ and SU3 behave
somewhat differently, with purities in general higher for SU3, and purities are in general
higher for the NORMetcone20-based selection, as demonstrated in table 4.10.

Figures 4.18 and 4.19 show the pT distribution of the hardest lepton for prompt (primary)
and other (secondary) leptons when the total number of selected leptons in the event is
exactly one. Electrons and muons are plotted separately to see whether they behave differently
for the signal sample SU3 and for the semileptonic background sample tt̄.
Figure 4.18: pT of the hardest lepton in SU3 (exactly one lepton in the event).

Figure 4.19: pT of the hardest lepton in tt̄ (exactly one lepton in the event).
Figures 4.20 to 4.23 show the pT distributions of the hardest and second hardest leptons
when there are exactly 2 leptons in the reconstructed event.
Figure 4.20: pT of the hardest lepton in SU3 (exactly two leptons in the event).
Figure 4.21: pT of the second hardest lepton in SU3 (exactly two leptons in the event).
Figure 4.22: pT of the hardest lepton in tt̄ (exactly two leptons in the event).
Figure 4.23: pT of the second hardest lepton in tt̄ (exactly two leptons in the event).
Finally, the pT distributions of the three hardest leptons are plotted when there are exactly
three leptons in the event, see figures 4.24 to 4.29. The goal of this study is to check whether
the third hardest lepton has a pT distribution that differs depending on whether it is a prompt
or a secondary lepton. In case of such a finding, it would be possible to enhance the number
of di-lepton Same Sign events by including those three-lepton events where the softest lepton
is likely to be secondary.
Figure 4.24: pT of the hardest lepton in SU3 (≥ 3 leptons in the event).
Figure 4.25: pT of the second hardest lepton in SU3 (≥ 3 leptons in the event).
Figure 4.26: pT of the third hardest lepton in SU3 (≥ 3 leptons in the event).
Figure 4.27: pT of the hardest lepton in tt̄ (≥ 3 leptons in the event).
Figure 4.28: pT of the second hardest lepton in tt̄ (≥ 3 leptons in the event).
Figure 4.29: pT of the third hardest lepton in tt̄ (≥ 3 leptons in the event).
In table 4.10 the numbers from the distributions above are extracted. For the SU3 signal
sample the fraction of reconstructed events with 3 selected leptons relative to those with 2
is about 0.1, while the same fraction is 0.005 for the tt̄ background sample. This means that
including the 3-lepton events would at least increase the signal. However, with three leptons
there will always be at least one Same Sign combination. Since correlation with the 3-lepton
study is unwanted, it is desirable to know whether the third lepton really comes from the hard
interaction or not. The Same Sign study aims at picking events with exactly one lepton in
each leg, both having the same sign, not two leptons in one leg and one in the other. For SU3,
table 4.10 shows that both the hardest and second hardest leptons tend to come from the
hard interaction (∼99% of the cases). The third hardest lepton also has a high probability
(95%) of being prompt. This means that only 5% of these cases would be interesting to
fish out. If it were possible to recover all of these events without adding any new unwanted
combinations, this would, in numbers scaled to 1 fb−1, mean adding 3 new events to the 638
already selected Same Sign events. The pT distribution in figure 4.26 also shows no clear
trend on where to cut on the third lepton to be convinced that the correct events are
recovered, even though the third, non-prompt lepton in general has lower transverse momentum.

The conclusion is that when only the pT of the leptons is taken into account, it is a good
idea to veto events with three or more leptons in the Same Sign di-lepton study. For SU3
there are only 0.5% more events to collect anyway.
Sample        start     standard   NORMetcone   pT>15    pT>20
SU1 e prime    11597     10686      10525        9535     8527
SU1 e sec       1177       639        308         248      214
SU1 µ prime    15074     13391      13349       11759    10330
SU1 µ sec       6267       199        150          82       56
SU2 e prime     1890      1590       1578        1463     1355
SU2 e sec        284       169         54          40       32
SU2 µ prime     2384      1900       1899        1725     1542
SU2 µ sec       1567        78         55          30       15
SU3 e prime    23057     21004      20625       18476    16555
SU3 e sec       3779      2256       1241        1045      897
SU3 µ prime    30185     26526      26438       22960    20206
SU3 µ sec      15050       421        304         149       84
SU4 e prime   237073    213524     208140      186780   165462
SU4 e sec      51022     32445      11127        7326     5315
SU4 µ prime   320170    274173     273182      238547   208085
SU4 µ sec     220561     16126      11554        5274     2575
SU6 e prime     6201      5612       5474        4729     4121
SU6 e sec       1252       756        495         424      364
SU6 µ prime     7968      6801       6789        5690     4861
SU6 µ sec       5682       158        116          71       47
SU8 e prime     5542      5015       4947        4347     3912
SU8 e sec       1416       837        492         396      343
SU8 µ prime     7059      6059       6045        5062     4463
SU8 µ sec       6231       157        122          72       42
T1 e prime    191132    176285     173151      159083   142445
T1 e sec       20814     13812       5412        3574     2787
T1 µ prime    261197    234259     233518      210017   184461
T1 µ sec       98515      6577       4663        2269     1104

Table 4.7: Number of leptons left after each cut, normalized to 10 fb−1. "Start" refers to the number of leptons in the datasets before any selection criteria have been applied; "standard" applies the object selection of table 4.1. The pT cut numbers are given with the cut applied in addition to the NORMetcone cut.
Leptons   SU3 unscaled   SU3 scaled   tt̄ unscaled   tt̄ scaled
0         3.68e+05       1.38e+04     545214         210472
1         9.89e+04       3.7e+03      573310         221316
2         2.63e+04       983          7.38e+04       2.85e+04
3         3.08e+03       115          1.86e+03       717

           hardest lepton     2nd hardest        3rd hardest
           prompt   other     prompt   other     prompt   other
SU3, 1     0.94     0.0596
SU3, 2     0.984    0.0161    0.943    0.0568
SU3, 3     0.985    0.0146    0.969    0.0312    0.903    0.0968
tt̄, 1      0.971    0.0293
tt̄, 2      0.947    0.0532    0.816    0.184
tt̄, 3      0.87     0.13      0.617    0.383     0.304    0.696

Table 4.8: CSC standard isolation. The scaled numbers are scaled to 1 fb−1. Four leptons occur 14 times (scaled) in SU3 and 8 times in tt̄.
        SU3                              tt̄
#Lep    0 pr    1 pr    2 pr    3 pr     0 pr     1 pr    2 pr    3 pr
1       0.060   0.94    -       -        0.0297   0.97    -       -
2       0.003   0.066   0.93    -        0.005    0.197   0.798   -
3       0.000   0.012   0.118   0.87     0.001    0.185   0.662   0.152

Table 4.9: CSC standard isolation. Ratios showing how pure the electrons are for exactly one, exactly two and exactly three leptons.
Leptons   SU3 unscaled   SU3 scaled   tt̄ unscaled   tt̄ scaled
0         4.06e+05       1.52e+04     1.0839e+06     418424
1         7.18e+04       2.69e+03     101044         39006
2         1.71e+04       638          9210           3550
3         1.69e+03       63           46             17

           hardest lepton     2nd hardest        3rd hardest
           prompt   other     prompt   other     prompt   other
SU3, 1     0.966    0.033
SU3, 2     0.991    0.008     0.976    0.024
SU3, 3     0.992    0.008     0.989    0.011     0.957    0.043
tt̄, 1      0.988    0.012
tt̄, 2      0.983    0.0174    0.959    0.0405
tt̄, 3      1        0         0.421    0.579     0.526    0.474

Table 4.10: The table summarizes the distributions of prompt (primary) and other (secondary) leptons in figures 4.18 to 4.29. The scaled numbers are scaled to 1 fb−1. Four leptons occur 7 times (scaled) in SU3 and 0 times in tt̄.
        SU3                              tt̄
#Lep    0 pr    1 pr    2 pr    3 pr     0 pr    1 pr    2 pr    3 pr
1       0.034   0.966   -       -        0.012   0.988   -       -
2       0.001   0.031   0.968   -        0       0.052   0.948   -
3       0       0.003   0.056   0.941    0       0.043   0.826   0.13

Table 4.11: Ratios showing how pure the electrons are for exactly one, exactly two and exactly three leptons.
Chapter 5

Prospects for Supersymmetry in Same Sign di-Lepton and One-Lepton Final States
In the previous chapter, ways to distinguish interesting lepton objects (primary leptons from
decays of gauge bosons, Supersymmetric particles and tau-leptons) from uninteresting ones
(secondary leptons) were studied. This is a feature common to all data-sets, and good objects
are a prerequisite for doing physics. The next step is to distinguish different physical
processes from each other, and for that one has to look into the characteristics of each type
of process.

It is important to know how the physical processes in the Standard Model behave in order to
go beyond them and recognize new phenomena. Finding Standard Model particles, measuring their
properties and calibrating the detectors have priority during the first months at the LHC.

In section 5.1 the Standard Model backgrounds will be investigated. The goal is to find good
methods, leading to optimized cuts, to remove the background events when searching for
Supersymmetry in the following sections. Section 5.2 gives details on the Same Sign (SS)
di-lepton final state analysis, together with the expectations for 1 fb−1. The procedure worked
out there is also applied to the one lepton final state: section 5.3 summarizes the prospects
for Supersymmetry with one lepton final states, with more details given in appendix D.
5.1 Important Variables and Backgrounds
First we introduce some variables that allow us to distinguish signal from background. The
mSUGRA point SU3 will be taken as guidance in the effective mass distributions, and SU2
and SU4 will be included in plots of other variables. These are chosen because they represent
extremes, in the sense that SU3 and SU4 are often easy to distinguish from the Standard Model
background while SU2 tends to be more buried. The effective mass variable, Meff, will later be
the most important reference plot to see how well signal is separated from background. All
plots are normalized to 1 fb−1, and the precuts defined in equation 3.10 have been applied.
5.1.1 Effective Mass, Meff

Figure 5.1: Meff for the total Standard Model background and the signal points SU2, SU3 and SU4. Only the standard precut defined in eq. 3.10 is applied.
The effective mass, given in equation 5.1, is the sum of the missing transverse energy and the
transverse momenta of the jets and leptons in the event. Since Supersymmetric events typically
have large missing transverse energy (E̸T) together with many hard jets and possibly leptons,
this variable is expected to be a good discriminator against Standard Model events, which
usually have lower Meff values. The variable also gives a first estimate of the sparticle mass
scale, since gluino and squark pair production is the dominant Supersymmetry cross section
at the LHC: the peak of the Meff distribution from Supersymmetry is correlated with
MSUSY = min(mg̃, mq̃) [30]. Supersymmetric particles are assumed to be heavier than
Standard Model particles, so the mass scale indicated by the variable should be higher than
for the Standard Model when suitable cuts are applied. Figure 5.1 shows that the mSUGRA
points are buried in the Standard Model background when no further event selection has been
done. The task is then to apply cuts such that Meff for the signal points exceeds the background.

Meff = E̸T + Σjet pT + Σlepton pT    (5.1)
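A sketch of equation 5.1, assuming the missing transverse energy, jets and leptons are given as a plain number and lists of dictionaries (a hypothetical representation, for illustration only):

```python
def effective_mass(met, jets, leptons):
    """Effective mass of eq. 5.1: missing transverse energy plus the
    scalar sum of jet and lepton transverse momenta (all in GeV)."""
    return met + sum(j["pt"] for j in jets) + sum(l["pt"] for l in leptons)

# Hypothetical event: 100 GeV missing ET, two jets, one lepton.
meff = effective_mass(100.0, [{"pt": 200.0}, {"pt": 150.0}], [{"pt": 50.0}])  # -> 500.0
```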
Lepton Selection
In chapter 4 lepton isolation criteria were studied. Here we will see how requiring leptons
according to equation 4.4 affects the effective mass distributions. Figure 5.2 shows the
distribution with only the precut included; clearly the SU3 signal is buried in a huge
background of different origins. The signal and background contributions were described in
chapter 3.

Requiring at least one isolated lepton, according to the definition described in chapter 4
(equation 4.4), removes the QCD background (figure 5.4) and leaves the SU3 signal of the
same order as the Standard Model background at high enough effective mass. The situation is
even clearer, and the SU3 signal more apparent, when at least 2 isolated leptons are required
(figure 5.6), compared to the situation without isolation (figure 5.5). As stressed in chapter
4, good isolation criteria are essential to extract final states with primary leptons.
Figure 5.2: Effective mass, with precut as in equation 3.10.
Figure 5.3: Effective mass with precut and at least 1 lepton. No isolation applied.

Figure 5.4: Effective mass with precut and at least 1 isolated lepton.
[Figure 5.5 plot: Events/1 fb−1/100 GeV vs Meff [GeV], at least 2 leptons; same legend as figure 5.2, ATLAS Preliminary]
Figure 5.5: Effective mass with precut and requiring at least 2 leptons. No isolation applied.
[Figure 5.6 plot: Events/1 fb−1/100 GeV vs Meff [GeV], at least 2 isolated leptons; same legend as figure 5.2, ATLAS Preliminary]
Figure 5.6: Effective mass with precut and at least 2 isolated leptons.
5.7 (Same Sign isolated di-leptons) and figure 5.8 (Opposite Sign di-leptons), we already learn that the SM background is negligible in the SS case (see Section 5.2 for more details) and non-negligible in the OS case [25]. Notice that tt̄ is the dominant Standard Model background that survives the Same Sign selection.
[Figure 5.7 plot: Events/1 fb−1/100 GeV vs SS: Meff [GeV]; same legend as figure 5.2, ATLAS Preliminary]
Figure 5.7: Effective mass with precut and a Same Sign lepton pair.
[Figure 5.8 plot: Events/1 fb−1/100 GeV vs OS: Meff [GeV]; same legend as figure 5.2, ATLAS Preliminary]
Figure 5.8: Effective mass with precut and an Opposite Sign lepton pair.
[Figure 5.9 plot: Events/1 fb−1/100 GeV vs Meff [GeV], exactly one isolated lepton; same legend as figure 5.2, ATLAS Preliminary]
Figure 5.9: Effective mass with precut and requiring exactly one isolated lepton. Effectively this figure shows figure 5.4 with figure 5.6 subtracted.
The one lepton case, shown in figure 5.9, needs more work to bring out the signal, and this case will be studied in Section 5.3. In the next subsection we define further cuts that can be used to reduce the background, and discuss some features of the main backgrounds.
5.1.2 Further Discriminating Variables
It is important to know the properties of the background and of the Supersymmetry signal in order to get a good separation between them. As we have seen, the detector can only detect particles that live long enough to reach it. We see tracks from charged particles, electromagnetic and hadronic showers, missing transverse energy and muon hits in the muon chambers. Knowledge of how the different backgrounds, i.e. the different primary particles, decay helps us understand how to get rid of them. By defining a set of smart cuts on the variables, we can cut away a great part of the background while keeping as much signal as possible.
The QCD background is already strongly reduced by requiring isolated leptons, as seen in the effective mass distributions (figures 5.3 to 5.9). We will now define additional useful variables.
Missing Transverse Energy, E/T
This vector has the same magnitude but opposite direction as the vector sum of the momenta of all the particles in the event in the x-y plane. The initial momentum of the LHC beams in the plane perpendicular to the beam axis is zero, and by momentum conservation the E/T in the final state can be calculated.
In the Standard Model the only prompt sources of E/T are neutrinos. With R-parity conserving Supersymmetry, two LSPs will also carry off energy, in addition to the neutrinos responsible for E/T in the Standard Model. This makes E/T a good variable to distinguish Supersymmetry events from Standard Model events. Figure 5.10 shows the E/T of the three mSUGRA points and the Standard Model background.
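The construction described above can be sketched as follows, taking minus the vector sum of all visible transverse momenta; the particle list is illustrative:

```python
import math

# Sketch of the missing-E_T construction: minus the vector sum of all
# visible transverse momenta in the event (illustrative particle list).

def missing_et(particles):
    """particles: list of (px, py) in GeV; returns (E/T magnitude, phi)."""
    mpx = -sum(p[0] for p in particles)
    mpy = -sum(p[1] for p in particles)
    return math.hypot(mpx, mpy), math.atan2(mpy, mpx)

# Two visible objects whose transverse momenta do not balance:
met, phi = missing_et([(50.0, 0.0), (-20.0, 30.0)])
```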
It is clear for SU3 and SU4 that the E/T in the signal points exceeds the Standard Model, and so they can be distinguished by this variable alone. The SU2 point is however buried, and additional requirements would be needed to see it.
[Figure 5.10 plot: Events/1 fb−1/20 GeV vs Missing ET [GeV] for SU2, SU3, SU4 and summed SM background; ATLAS Preliminary]
Figure 5.10: E/T for Standard Model background and signal points SU2, SU3 and SU4, with the precuts.
Number Of Jets And pT Of The Jets
Squarks and gluinos are generally heavy, but they still make up the dominant Supersymmetry production for most points, since at LHC energies, with proton-proton collisions, the strong interaction is the most important. Typically squarks and gluinos decay in a cascade to the LSP, and in the cascade several hard jets are usually emitted. Requiring a certain number of jets could therefore be a good idea. Figure 5.11 shows the number of jets for the Standard Model and the signal points. As we see, the peak of the Standard Model distribution is at three jets, while the peak for SU3 and SU4 is at around five jets.

[Figure 5.11 plot: Events/1 fb−1 vs number of jets for SU2, SU3, SU4 and summed SM background; ATLAS Preliminary]

Figure 5.11: Number of jets for Standard Model background and signal points SU2, SU3 and SU4.

Sparticles are heavier than Standard Model particles, and so the decay chains involving sparticles will lead to harder jets. Figures 5.12 and 5.13 show the pT distributions of the four hardest jets. Except for the fourth hardest jet, the mSUGRA pT distributions are peaked at slightly higher values. For SU2 it seems particularly interesting to cut on the third and fourth hardest jets in addition to the two hardest.
[Figure 5.12 plots: Events/1 fb−1/20 GeV vs pT Jet1 [GeV] (left) and pT Jet2 [GeV] (right) for SU2, SU3, SU4 and summed SM background; ATLAS Preliminary]
Figure 5.12: pT distributions of the hardest jet to the left and the second hardest jet to the right. The standard precuts of equation 3.10 are used. The non-zero entries for pT < 50 GeV in the distribution of the second hardest jet, which seem to contradict the default precut, are due to re-binning.
Number of Leptons
The lepton multiplicity is expected to be higher in mSUGRA than in the Standard Model. Figure 5.14 shows that three and four leptons occur more often in mSUGRA than in the Standard Model. Requiring leptons is also a way to get rid of Standard Model processes that do not include leptons. QCD is dominant at the LHC but does not naturally lead to a final state with leptons, and so can be dealt with by a lepton cut.
[Figure 5.13 plots: Events/1 fb−1/20 GeV vs pT Jet3 [GeV] (left) and pT Jet4 [GeV] (right) for SU2, SU3, SU4 and summed SM background; ATLAS Preliminary]
Figure 5.13: The pT distribution of the third hardest jet is shown in the left plot and that of the fourth hardest jet in the right plot.
[Figure 5.14 plot: Events/1 fb−1 vs number of leptons for SU2, SU3, SU4 and summed SM background; ATLAS Preliminary]
Figure 5.14: Number of leptons for Standard Model background and signal points SU2, SU3 and SU4.
Transverse Sphericity, ST
Transverse sphericity, ST, is a projection onto two dimensions of the measure of three-dimensional isotropy of an event, the sphericity S. While sphericity depends on the x, y and z components of the momentum vectors, transverse sphericity only takes into account the x and y components. The purpose of the variable is to distinguish back-to-back events from more isotropic events. QCD di-jet events are expected to be back-to-back, in contrast to squark production, where we expect a cascade decay through sparticles emitting jets more isotropically in space. QCD events will then have transverse sphericity close to zero, while Supersymmetry events will have transverse sphericity closer to one. Transverse sphericity is given in equation 5.2.
ST = 2λ2 / (λ1 + λ2),   with Sij = Σk pki pkj    (5.2)
λ1 and λ2 are the eigenvalues of the (2×2) matrix Sij, where k runs over the jets and i, j = (px, py). We use λ1 > λ2, so that 0 < ST < 1. This means that a back-to-back event will have λ2 ∼ 0 and an isotropic event will have λ1 ∼ λ2. If we write out the equation we see that ST equals one when Σk pkx pkx = Σk pky pky, and a totally spherical event obeys this in all Cartesian coordinate systems. For a back-to-back event we can always find a representation where Σk pky pky ∼ 0 and Σk pkx pkx > 0, so that one of the eigenvalues is approximately zero.

[Figure 5.15 plot: Events/1 fb−1 vs ST for SU2, SU3, SU4 and summed SM background; ATLAS Preliminary]

Figure 5.15: ST for Standard Model background and signal points SU2, SU3 and SU4.
Figure 5.15 shows a peak close to zero in the Standard Model ST distribution that the mSUGRA distributions do not have. The main contribution to this observed peak comes from QCD. Since QCD is mostly cut away by requiring lepton(s), this variable may not be important in lepton final states.
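A minimal sketch of the calculation in equation 5.2, assuming the jets' transverse momentum components are available as plain arrays:

```python
import numpy as np

# Sketch of the transverse-sphericity calculation of equation 5.2,
# built from the jets' (px, py) components (illustrative inputs).

def transverse_sphericity(px, py):
    """ST = 2*lambda2 / (lambda1 + lambda2) from S_ij = sum_k p_ki * p_kj."""
    px, py = np.asarray(px, float), np.asarray(py, float)
    S = np.array([[np.sum(px * px), np.sum(px * py)],
                  [np.sum(py * px), np.sum(py * py)]])
    lam2, lam1 = np.linalg.eigvalsh(S)  # eigvalsh returns eigenvalues ascending
    return 2.0 * lam2 / (lam1 + lam2)

# Back-to-back di-jet gives ST ~ 0; a symmetric four-jet event gives ST ~ 1.
st_dijet = transverse_sphericity([100.0, -100.0], [0.0, 0.0])
st_iso = transverse_sphericity([100.0, -100.0, 0.0, 0.0],
                               [0.0, 0.0, 100.0, -100.0])
```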
The Angle Between E/T And The Jets, δφ
For samples with fake E/T (QCD), the E/T mostly arises from uncertainty in the jet energy measurements. We therefore expect the direction of E/T and one of the leading jets to be parallel or anti-parallel. The angular distances in the transverse plane between the jets and E/T are defined by
δφ1 = |φj1 − φE/T|    (5.3)
δφ2 = |φj2 − φE/T|    (5.4)
Here φ is the angle in the plane perpendicular to the collision axis. For a back-to-back QCD event we expect δφ = 0 or π, corresponding to the jet energy being underestimated or overestimated respectively. For a Supersymmetry event there is no reason to assume such a relation, since the jets here come from cascade decays from the sparticles to the LSPs. The two LSPs in an R-parity conserving Supersymmetry model carry off the majority of the E/T in a Supersymmetry event.
Figure 5.16 shows the expected peak of δφ close to zero for the Standard Model background. The mSUGRA points show a flatter distribution, which means that their jets are more randomly distributed in space.
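A sketch of equations 5.3 and 5.4; here the angular difference is additionally wrapped into [0, π], a common convention that the equations above leave implicit:

```python
import math

# Sketch of the delta-phi variables of equations 5.3 and 5.4. The difference
# is wrapped into [0, pi], which is one common convention; the phi values
# used below are illustrative.

def delta_phi(phi_jet, phi_met):
    dphi = abs(phi_jet - phi_met) % (2.0 * math.pi)
    return 2.0 * math.pi - dphi if dphi > math.pi else dphi

# QCD-like event: missing E_T almost aligned with the leading jet.
dphi_small = delta_phi(0.1, 0.05)
# Wrap-around case: naive |phi1 - phi2| = 6.0 rad, true separation ~ 0.28 rad.
dphi_wrap = delta_phi(-3.0, 3.0)
```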
[Figure 5.16 plots: Events/1 fb−1 vs delφ1 [rad] (left) and delφ2 [rad] (right) for SU2, SU3, SU4 and summed SM background; ATLAS Preliminary]
Figure 5.16: The δφ distributions for the Standard Model have a peak close to zero, which is not present in the mSUGRA points.
5.1.3 Main Standard Model Background Processes
Here we will briefly comment on the main background samples leading to more leptons in the final states. If we can reconstruct some of the particles, such as the Z, W and top, this knowledge can possibly be used later to remove Standard Model background from the mSUGRA search channels. It is also interesting in itself to investigate methods applied to Z, W and top events in the early data.
Z boson
The Z boson will decay to two leptons of opposite sign, two neutrinos or a quark-antiquark pair. Since we study lepton channels, we are interested in the leptons produced here. If only two leptons of opposite charge are recorded in the event, this would not be a problem. However, there could be a third lepton recorded, most likely a misidentified jet. If one of the primary leptons is then not reconstructed, this could produce a Same Sign final state. Alternatively, when one of the leptons is not reconstructed and there is no third lepton present, Z is a background for the one lepton channel. Figure 5.7 shows that Z is a negligible background for the Same Sign channel, while figure 5.9 shows that Z is present, although not dominant, in the one lepton channel.
Around 3.4% of the Z particles produced will decay to two electrons and around 3.4% to two muons. We plot the invariant mass of two leptons, with a rather hard cut selection for the Z samples, leading to somewhat low statistics. With the precut defined in equation 3.10, the Z peak is not possible to see when all Standard Model background is included. Here we therefore apply looser cuts, which makes the measurement somewhat unrealistic, but it still illustrates the procedure for measuring the number of Z events in real data. The cuts applied are two jets, the hardest with pT larger than 100 GeV and the second hardest with pT larger than 20 GeV. There was no E/T cut. Seeing a Z peak will indicate that we have picked good leptons.
In figure 5.17 the invariant mass distribution of two Opposite Sign leptons is plotted for all Standard Model backgrounds except QCD, which we have seen is negligible after a di-lepton selection. The background was fitted with a ninth order polynomial and the signal with a Gaussian. Once the mean and standard deviation of the Gaussian peak were found, the original histogram was integrated within two standard deviations of the mean, meaning that 95% of the Z's in the peak would be included. The fitted background distribution was integrated over the same range and subtracted from the number inside the peak. This will also be a way to obtain the number of leptonically decaying Z bosons in real data. In the low-statistics histogram after the hard cuts, the mean of the Gaussian was 90 GeV and the standard deviation 3.3 GeV. The total number of events inside the peak is 2972.

[Figure 5.17 plot: Events/1 fb−1 vs Mll [GeV]; OS Mll histogram with fitted function]

Figure 5.17: The Z peak is fitted with a Gaussian and the background with a ninth order polynomial.
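The counting step of the procedure above can be sketched as follows; the fit itself is omitted, and the bin contents, background shape and fitted parameters below are illustrative stand-ins, not the thesis numbers:

```python
import numpy as np

# Sketch of the Z-counting step described above: integrate the histogram
# within two standard deviations of the fitted Gaussian mean and subtract
# the fitted-background integral over the same window. The Gaussian fit
# itself is omitted; mean, sigma, bin contents and the (here flat)
# background function are illustrative stand-ins.

def events_in_peak(bin_edges, bin_counts, background, mean, sigma, nsig=2.0):
    edges = np.asarray(bin_edges, float)
    counts = np.asarray(bin_counts, float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    window = np.abs(centers - mean) < nsig * sigma
    total = counts[window].sum()                     # events inside +-2 sigma
    bg = sum(background(c) for c in centers[window]) # fitted background there
    return total - bg

edges = np.arange(60.0, 121.0, 5.0)  # 5 GeV bins around the Z mass
counts = [20, 22, 25, 30, 80, 240, 310, 90, 35, 28, 24, 21]
flat_bg = lambda m: 25.0             # stand-in for the polynomial fit
n_z = events_in_peak(edges, counts, flat_bg, mean=90.0, sigma=3.3)
```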
W boson
The W boson mass can be obtained from the transverse mass variable. This has been done in hadron collider experiments such as UA1, UA2, CDF and D0. When the W boson decays leptonically, W → lνl, we observe a lepton and missing energy, since the neutrino does not interact. Because of the missing energy it is not straightforward to obtain the W mass from the invariant mass as done for the Z boson. The transverse mass variable is given by
mT² = 2 pTl pTν (1 − cos φlν)    (5.5)
where E/T is taken to be the pT of the neutrino and φlν is the angle between the lepton and the neutrino in the plane perpendicular to the collision axis. Equation 5.5 satisfies mT² ≤ mW², so that on an event-by-event basis we generate a lower bound for the W mass. The mT distribution should show an upper end-point at the W mass. Figure 5.18 shows the mT distribution when the precuts in equation 3.10 have been applied, and figure 5.19 shows the same plot but with a logarithmic axis. The first plot suggests an edge in the W sample, and the second plot shows that the SU3 sample is much flatter in this variable. This is a general feature of the signal samples. A typical cut mT > 100 GeV can be applied to reject W and top events in a Supersymmetry search (this cut is also quite hard on the signal). This variable will be used as a cutting variable to get rid of Standard Model events in the one lepton channel in section 5.3.
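Equation 5.5 translates directly into code; the kinematic values below are made up for illustration:

```python
import math

# Sketch of the transverse-mass variable of equation 5.5, taking the missing
# E_T as the neutrino pT (the lepton and neutrino kinematics are illustrative).

def transverse_mass(pt_lep, met, dphi):
    """mT = sqrt(2 * pT_lep * pT_nu * (1 - cos(dphi)))."""
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

# A W-like topology: lepton and neutrino back to back gives the maximum mT.
mt_max = transverse_mass(40.0, 40.0, math.pi)  # 80 GeV, i.e. near the W mass
```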
[Figures 5.18 and 5.19 plots: Events/1 fb−1/4 GeV vs transverse mass [GeV], one lepton, for SU3 and SM backgrounds; ATLAS Preliminary]
Figure 5.18: Transverse mass. The selection is one isolated lepton and the precut according to equation 3.10.
Figure 5.19: Transverse mass as in the left plot, but with a logarithmic axis to show the signal distribution.
Top quark
The top quark constitutes the main background for the SUSY searches presented in this thesis. Semileptonic tt̄ and W decays pollute especially the one lepton channel, but also the Same Sign di-lepton channel when a second lepton is found in the event. A measurement of the top mass could be used to reduce this background, and some work on reconstructing the top mass was started. In a tt̄ event one top may decay hadronically and the other semileptonically:
t → Wb, with W → lν (leptonic) or W → qq̄′ (hadronic)
[Figure 5.20 plot: Events/1 fb−1/4 GeV vs invariant mass [GeV]; mjj and mjjj distributions over tt̄, W, Z, QCD, VV backgrounds; ATLAS Preliminary]
Figure 5.20: Invariant mass of three jets, and of two jets within these three. The plotted combination is the one that minimizes χ². One lepton has been required. Note: two invariant mass distributions are plotted in the same plot.
Figure 5.20 shows two invariant mass distributions in one plot: Mqq and Mqqq. The jets inside Mqq are a subset of the jets inside Mqqq. The combination of jets used in these distributions was found by running over all possible jet combinations and picking the
DataSet    1 lepton   2 lepton   >2 jet     >3 jet     E/T>200GeV   Meff>800GeV
J4         111.0      0          79113.1    40840.2    1457.0       9242.1
J5         164.1      0          40183.2    23003.1    3454.7       24089.6
J6         18.8       0          6727.3     4923.8     1388.2       7847.1
J7         1.5        0          1017.6     620.8      237.7        1417.3
J8         0          0          8.7        4.9        5.3          12.6
SU1        2129.7     357.9      7769.4     5923.7     7377.6       7419.5
SU2        233.6      63.5       813.7      740.2      433.2        554.8
SU3        3740.9     880.6      20289.4    16309.9    16918.0      16434.9
SU4        38572.1    5134.9     189525.0   159487.0   83452.5      54437.8
SU6        1024.0     160.9      4656.5     3897.9     4275.5       4397.9
SU8        1023.8     100.8      6496.8     5148.5     6107.9       6187.9
T1         19265.6    1607.1     46136.8    36166.3    7198.8       6199.1
TTbar      7.8        0          723.3      664.7      62.5         394.9
Wenu       3878.8     2.9        5558.3     1860.9     1984.5       791.6
Wmunu      5458.7     0.9        4134.6     1314.5     2037.2       715.0
Wtaunu     1888.0     1          10169.9    3541.1     3783.3       1176.9
WW         239.7      7.0        252.2      96.8       90.5         39.0
WZ         94.3       3.0        122.0      52.4       42.5         25.6
Zee        0          0          0          0          20.5         0
Zmumu      174.7      15.3       168.9      53.7       57.6         23.0
Znunu      4.9        0          6169.2     1846.3     4383.4       770.9
Ztautau    382.5      35.1       968.4      396.0      180.0        153.9
ZZ         6.0        1.0        11.9       3.9        4.7          2.0

Table 5.1: The table shows the effect of single cuts after the standard precut selection defined in equation 3.10. The numbers are normalized to 1 fb−1.
combination that minimized a χ² function:
χ² = (Mqq − mW)²/σW² + (Mqqq − mtop)²/σtop²    (5.6)
Here mW and mtop are the fixed experimental values of the W boson and top quark masses, and the σ's are the expected experimental resolutions of the W and top. In figure 5.20 both σ's were set to 15 GeV. The distributions seem very promising, showing a nice peak at the top mass. No cut on χ² was set, which may explain the bump to the right of the W peak in the Mqq invariant mass plot; for an improvement such a cut should be implemented. Further requirements could also be added, such as including a leptonically decaying W mass Mνl in the χ² expression, or requiring b jets (t → bW) in the invariant mass of three jets (Mbqq). Time constraints limited further investigation of the top peak.
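The combinatorial minimisation of equation 5.6 can be sketched as follows; the four-vectors are illustrative and the resolutions are set to 15 GeV as in figure 5.20, but this is a sketch, not the actual analysis code:

```python
import itertools
import math

# Sketch of the combinatorial chi^2 of equation 5.6: scan all 3-jet
# combinations for the hadronic top and, within each triplet, all 2-jet
# sub-combinations for the W. Jets are represented as (E, px, py, pz)
# four-vectors in GeV; the values used below are illustrative.

M_W, M_TOP = 80.4, 172.5     # reference W and top masses (GeV)
SIGMA_W = SIGMA_TOP = 15.0   # assumed experimental resolutions (GeV)

def inv_mass(jets4):
    E = sum(j[0] for j in jets4); px = sum(j[1] for j in jets4)
    py = sum(j[2] for j in jets4); pz = sum(j[3] for j in jets4)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def best_combination(jets, mass=inv_mass):
    """Return (Mqq, Mqqq, chi2) for the jet combination minimising chi^2."""
    best = None
    for triplet in itertools.combinations(jets, 3):
        m3 = mass(triplet)
        for pair in itertools.combinations(triplet, 2):
            m2 = mass(pair)
            chi2 = ((m2 - M_W) / SIGMA_W) ** 2 + ((m3 - M_TOP) / SIGMA_TOP) ** 2
            if best is None or chi2 < best[2]:
                best = (m2, m3, chi2)
    return best

jets = [(40.0, 40.0, 0.0, 0.0), (40.0, -40.0, 0.0, 0.0),
        (100.0, 0.0, 60.0, 0.0), (20.0, 0.0, 0.0, 20.0)]
m_jj, m_jjj, chi2 = best_combination(jets)
# m_jj is 80 GeV (the W-like pair); m_jjj is ~170 GeV (the top-like triplet)
```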
5.1.4 Effect Of Cuts On Different Processes
Tables 5.1 and 5.2 show the effect of cuts on some common variables for the signal and background samples, with the numbers of events scaled to 1 fb−1. The precuts (equation 3.10) are already very hard on the Standard Model background. The last column in table 5.2 shows the number of events
DataSet    ST>0.2     Jet1>200GeV   Jet4>50GeV   E/T/Meff>0.2   δφ2>0.2    mT>100GeV   precut
J4         24048.9    61669.6       15764.3      82193.8        33415.9    111.0       111030.4
J5         8027.2     53423.9       10465.9      15679.3        13068.7    164.1       54651.0
J6         891.2      7851.3        3084.7       463.4          2587.7     18.8        7851.3
J7         48.4       1417.3        340.7        3              387.6      1.5         1417.3
J8         0.1        12.6          2.6          0              3.4        0           12.6
SU1        4801.2     8033.5        3481.3       7440.4         8460.8     2522.5      8757.4
SU2        609.7      596.3         604.5        578.8          800.9      308.7       852.6
SU3        12926.3    19062.6       9591.0       18423.2        21301.6    4716.4      22138.0
SU4        127288.6   113661.7      75493.3      166803.6       192376.8   44085.5     203445.7
SU6        2821.1     4624.3        2547.5       4196.5         4792.3     1201.4      5003.5
SU8        3932.7     6639.0        3117.9       6107.4         6936.1     1135.2      7197.3
T1         28631.8    15707.3       12383.8      40646.0        45948.4    20881.0     49048.5
TTbar      277.6      539.6         504.4        179.8          445.7      7.8         727.2
Wenu       4501.8     4030.9        297.2        10580.0        10500.6    3881.8      11159.8
Wmunu      4530.8     3311.7        173.7        9210.6         9229.7     5459.7      9631.6
Wtaunu     6625.7     7286.3        549.3        18578.6        16249.4    1889.0      19097.0
WW         182.7      106.9         9.3          387.3          401.4      246.7       420.0
WZ         82.5       59.7          12.9         159.1          165.6      98.3        178.6
Zee        0          0             0            20.5           20.5       0           20.5
Zmumu      203.5      111.3         7.6          374.4          387.8      190.0       401.2
Znunu      6856.6     4259.0        232.4        14467.9        13950.7    4.9         14513.7
Ztautau    432.0      759.6         76.5         1363.5         1026.0     417.6       1538.1
ZZ         7.2        5.2           0.6          17.8           17.2       7.1         19.4

Table 5.2: The table shows the effect of further single cuts after the precut selection. The numbers are normalized to 1 fb−1.
left when only the precut has been applied. Here semileptonic tt̄ (T1), W and QCD (J4-J8) are the most important Standard Model backgrounds left. The QCD sample has a fake E/T filter of 100 GeV; no true E/T is created, but due to the huge cross section, the low fraction of events where a jet has been considerably mismeasured still gives a large E/T contribution. Neutrinos carry away energy in the tt̄ and W samples. The Z background should not survive this E/T criterion well, and is indeed seen to have only a small number of events left.
The most effective cut to remove the QCD background is to require one or two isolated leptons. The channels studied in this thesis are exactly lepton channels, and so QCD is not an important background.
With the mT cut the W background is reduced to about half of the number of events after the precut, and this variable also aims at removing this background, as discussed in section 5.1.3. Requiring three or four jets also reduces this background dramatically. The process W → lν is present in the one lepton channel, but is negligible in the Same Sign channel, with only three events left.
Top is a difficult background to get completely rid of, since it has jets, lepton(s) and E/T. By applying cuts on several variables it can still be considerably reduced. The mT cut and an additional cut on E/T are the variables in tables 5.1 and 5.2 that best deal with this background.
5.2 Discovery Potential In The Same Sign Channel
Standard Model events have low lepton multiplicity, as seen in table 5.1. For Supersymmetry nearly every event contains two χ̃₂⁰, two χ̃₁± or one of each, so multi-lepton events are suppressed only by the branching fraction for charginos and neutralinos to decay leptonically, which usually is a significant share. The SS channel is cleaner than the OS channel, as was seen in figure 5.8.
When tt̄ decays semileptonically, a second Same Sign lepton may stem from hadronic activity. This is the most important Standard Model background. Leptons associated with jets can also pass the isolation criteria, so that it looks like an event with SS leptons. Real SS events from the Standard Model have a small cross section, and would come from two W bosons radiated from quarks with the same charge.
The cross section for SS leptons in Supersymmetry events is considerably larger than for Standard Model events. In g̃g̃ production both legs could have a chain leading to one lepton, with no preferred SS or OS, since the gluino is a self-conjugate Majorana fermion. Same Sign leptons have significant signal over background, and will be a good indication of Supersymmetry together with other features of the theory.
In figure 5.7 we saw that the Same Sign channel is very clean. Once we have a good lepton selection, we do not have to work hard to see the signal. Figures 5.21-5.26 show the mSUGRA samples and background for some common variables described in the previous section. The standard precuts defined in equation 3.10 have been applied in all plots.
[Figure 5.21 plot: Events/1 fb−1/20 GeV vs Missing ET [GeV] for SU1, SU2, SU3, SU4, SU6, SU8 and summed SM background; ATLAS Preliminary]
Figure 5.21: E/T for six mSUGRA points; all except SU2 clearly exceed the Standard Model background.
[Figure 5.22 plot: Events/1 fb−1 vs ST for SU1, SU2, SU3, SU4, SU6, SU8 and summed SM background; ATLAS Preliminary]
Figure 5.22: ST for isolated Same Sign leptons.
Figure 5.21 shows that the mSUGRA points have larger E/T values than the Standard Model background, so this seems like a good variable to cut on. For the transverse sphericity shown in figure 5.22 it is less clear; a significance scan will be done to see the effect of a cut here.
[Figures 5.23 and 5.24 plots: Events/1 fb−1/20 GeV vs pT Jet1 [GeV] and pT Jet2 [GeV] for SU1, SU2, SU3, SU4, SU6, SU8 and summed SM background; ATLAS Preliminary]
Figure 5.23: pT of the hardest jet in the Same Sign event.
Figure 5.24: pT of the second hardest jet in the Same Sign event.
Figures 5.23 and 5.24 display the pT of the hardest and second hardest jet. Both could potentially be used to cut on, since the signal tends to exceed the Standard Model background at high pT values. The δφ variables are mainly intended for reducing the QCD, where the E/T comes from mismeasured jets. In the Same Sign channel the QCD background is already at zero, and so this variable is not so useful to cut on. Figures 5.25 and 5.26 show the δφ distributions. There is nothing to gain from a cut on this variable with respect to increasing the signal over background ratio.
[Figure 5.25 plot: Events/1 fb−1 vs delφ1 for SU1, SU2, SU3, SU4, SU6, SU8 and summed SM background; ATLAS Preliminary]
Figure 5.25: δφ between E/T and the hardest jet.
[Figure 5.26 plot: Events/1 fb−1 vs delφ2; same samples as figure 5.25, ATLAS Preliminary]
Figure 5.26: δφ between E/T and the second hardest jet.
To choose a few variables to perform a multidimensional scan on, a one-dimensional significance scan was performed on each of the variables, see figures 5.27 to 5.34. Point SU4 is not displayed in the plots because its significance is so large that the effects in the remaining signal points would not show. In general one should be cautious with setting cuts on the variables only according to one-dimensional scans, since there could be correlations; therefore we perform the matrix scan over cut combinations.
Scans over ST, number of jets, pT of the third jet, δφ2 and E/T/Meff (figures 5.28, 5.29, 5.32, 5.33 and 5.34) show rather flat distributions for all mSUGRA points, showing that cuts on these variables will not enhance the significance. The most promising are the E/T variable and the pT of the hardest jet (figures 5.27 and 5.30).
[Figures 5.27 and 5.28 plots: significance S/√B vs ETmiss [GeV] and vs ST for SU1, SU2, SU3, SU6, SU8]

Figure 5.27: Significance scan of E/T.
Figure 5.28: Significance scan of ST.
[Figures 5.29 and 5.30 plots: significance S/√B vs ≥ number of jets (2 jets already required) and vs hardest jet pT [GeV] for SU1, SU2, SU3, SU6, SU8]

Figure 5.29: Significance scan of number of jets.
Figure 5.30: Significance scan of hardest jet pT.
[Figures 5.31 and 5.32 plots: significance S/√B vs second and third hardest jet pT [GeV] for SU1, SU2, SU3, SU6, SU8]

Figure 5.31: Significance scan of second hardest jet pT.
Figure 5.32: Significance scan of third hardest jet pT.
[Figures 5.33 and 5.34 plots: significance S/√B vs δφ2 and vs E/T/Meff for SU1, SU2, SU3, SU6, SU8]

Figure 5.33: Significance scan of δφ2; the leftmost point is a technical plotting mistake and should be disregarded.
Figure 5.34: Significance scan of E/T/Meff.
In the multidimensional scan we will use these two variables (E/T and pT of the hardest jet), as well as the number of jets. A matrix of 8 cuts is used, see table 5.3. We choose two values for E/T (100, 130 GeV), two values for the pT of the hardest jet (100, 130 GeV) and two for the number of jets (2, 3), to keep the output small enough to be displayed in the thesis. We do not want to make too hard cuts, since the number of events is already quite low.
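The 2×2×2 grid of table 5.3 can be generated programmatically; the thresholds are taken from the text, and the loop ordering below is chosen so that the list index matches the Cut01-Cut08 labelling of the table:

```python
# Sketch of building the 8-cut matrix of table 5.3 from the chosen grid of
# thresholds. Each entry is (missing E_T cut, minimum jet count, leading-jet
# pT cut); the ordering reproduces Cut01..Cut08 of the table.

met_cuts = [100.0, 130.0]   # missing E_T thresholds (GeV)
jet1_cuts = [100.0, 130.0]  # leading-jet pT thresholds (GeV)
njet_cuts = [2, 3]          # minimum number of jets

cuts = [(met, njet, pt1)
        for met in met_cuts
        for pt1 in jet1_cuts
        for njet in njet_cuts]

# cuts[0] is Cut01 = (100, 2, 100); cuts[6] is Cut07 = (130, 2, 130).
```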
Signal   CUT    B      S       S/√B    S/√(B+σ²syst)
SU1      Cut01  42.2   88.8    13.6    8.3
SU1      Cut02  33.6   79.7    13.7    8.9
SU1      Cut03  30.8   85.7    15.4    10.3
SU1      Cut04  25.6   76.8    15.1    10.6
SU1      Cut05  20.9   84.3    18.4    13.6
SU1      Cut06  19.6   75.6    17.0    12.7
SU1      Cut07  16.6   81.6    19.9    15.4
SU1      Cut08  16.3   73.1    18.0    14.0
SU2      Cut01  42.2   20.9    3.2     1.9
SU2      Cut02  33.6   20.7    3.5     2.3
SU2      Cut03  30.8   20.0    3.6     2.4
SU2      Cut04  25.6   19.8    3.9     2.7
SU2      Cut05  20.9   18.4    4.0     2.9
SU2      Cut06  19.6   18.3    4.1     3.1
SU2      Cut07  16.6   17.7    4.3     3.3
SU2      Cut08  16.3   17.6    4.3     3.4
SU3      Cut01  42.2   142.9   21.9    13.3
SU3      Cut02  33.6   129.0   22.2    14.5
SU3      Cut03  30.8   137.2   24.7    16.5
SU3      Cut04  25.6   123.7   24.4    17.1
SU3      Cut05  20.9   129.5   28.3    20.9
SU3      Cut06  19.6   116.7   26.3    19.6
SU3      Cut07  16.6   124.7   30.5    23.6
SU3      Cut08  16.3   112.3   27.7    21.5
SU4      Cut01  42.2   904.7   139.1   84.7
SU4      Cut02  33.6   835.9   144.0   93.9
SU4      Cut03  30.8   735.3   132.4   88.6
SU4      Cut04  25.6   674.7   133.2   93.6
SU4      Cut05  20.9   654.1   143.0   105.5
SU4      Cut06  19.6   599.0   135.0   101.0
SU4      Cut07  16.6   559.1   136.8   106.0
SU4      Cut08  16.3   509.5   125.9   97.9
SU6      Cut01  42.2   47.5    7.3     4.4
SU6      Cut02  33.6   44.2    7.6     4.9
SU6      Cut03  30.8   45.4    8.1     5.4
SU6      Cut04  25.6   42.6    8.4     5.9
SU6      Cut05  20.9   45.3    9.9     7.3
SU6      Cut06  19.6   42.3    9.5     7.1
SU6      Cut07  16.6   43.6    10.6    8.8
SU6      Cut08  16.3   40.9    10.1    7.8
SU8      Cut01  42.2   32.0    4.9     2.9
SU8      Cut02  33.6   30.0    5.1     3.3
SU8      Cut03  30.8   30.6    5.5     3.6
SU8      Cut04  25.6   29.0    5.7     4.0
SU8      Cut05  20.9   29.8    6.5     4.8
SU8      Cut06  19.6   27.9    6.2     4.7
SU8      Cut07  16.6   28.5    7.0     5.4
SU8      Cut08  16.3   26.9    6.6     5.1

Cut01: E/T > 100 GeV, jets > 2, pT jet1 > 100 GeV
Cut02: E/T > 100 GeV, jets > 3, pT jet1 > 100 GeV
Cut03: E/T > 100 GeV, jets > 2, pT jet1 > 130 GeV
Cut04: E/T > 100 GeV, jets > 3, pT jet1 > 130 GeV
Cut05: E/T > 130 GeV, jets > 2, pT jet1 > 100 GeV
Cut06: E/T > 130 GeV, jets > 3, pT jet1 > 100 GeV
Cut07: E/T > 130 GeV, jets > 2, pT jet1 > 130 GeV
Cut08: E/T > 130 GeV, jets > 3, pT jet1 > 130 GeV
Table 5.3: Cuts used in the significance scan in the Same Sign channel. The precut defined in equation 3.10 is applied in all cases.
Cut01 is the loosest cut: the precut (equation 3.10) and two isolated (equation 4.4) Same Sign leptons. The effective mass distribution for this selection can be found in figure 5.7. From the significance table we see that four of the points have S/√B > 5 (SU1, SU3, SU4 and SU6), with SU8 on the limit at S/√B = 4.9 and SU2 somewhat lower at S/√B = 3.2. With systematics taken into account in the significance formula, SU6 also falls below the discovery limit of five sigma. Three of the points (SU1, SU3 and SU4) can be easily discovered.
From the table listing the significances we see that Cut07 (table 5.3) gives the overall best performance. With this harder selection both SU6 and SU8 are brought above the discovery limit. Only SU2 is undetected with 1 fb−1 of data. Figure 5.35 shows the effective mass distribution for some of the points with Cut07.
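The two estimators in the table can be sketched as follows. The form of the systematic term is an assumption here (a fractional uncertainty on the background), since the exact prescription is not spelled out in the text; a 20% background systematic does, however, approximately reproduce the last column of the table:

```python
import math

# Sketch of the significance estimators used in the scan: S/sqrt(B) and
# S/sqrt(B + sigma_syst^2), with sigma_syst assumed to be a fractional
# uncertainty on the background (an assumption, not stated in the text).

def significance(s, b, syst_frac=0.0):
    return s / math.sqrt(b + (syst_frac * b) ** 2)

# Cut01 for SU2 from the table: S = 20.9, B = 42.2 -> S/sqrt(B) ~ 3.2.
z_su2 = significance(20.9, 42.2)
# With a 20% background systematic, SU1 Cut01 gives ~8.3, as in the table.
z_su1_syst = significance(88.8, 42.2, syst_frac=0.2)
```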
The Same Sign channel has proved to be a very promising channel for the discovery of Supersymmetry. Only tt̄ gives a handful of Standard Model background events.
[Figure 5.35 plot: Events/1 fb−1/100 GeV vs Meff [GeV] for SU2, SU3, SU4 and summed SM background; ATLAS Preliminary]
Figure 5.35: Effective mass with the Cut07 selection given in table 5.3.
5.2.1 Charge Misidentification
When a lepton is detected there exists the possibility of charge misidentification. In the Same Sign sample there are few events left after the cuts, and given the large ratio of Opposite Sign over Same Sign events, charge misidentification could cause a significant number of original Opposite Sign events to pass the Same Sign event selection.
One way to get an idea of whether charge misidentification is a problem here is to plot the invariant mass (Mll) of the two leptons in a two lepton event. In the Opposite Sign selection a peak is expected at ∼90 GeV, representing the decay of the Z to two leptons (Z → ll̄). This peak should not be present in the Same Sign selection if charge reconstruction is always to be trusted.
[Figures 5.36 and 5.37 plots: Events/1 fb−1/5 GeV vs Mll [GeV]; Z → ll peak in the Same Sign (left) and Opposite Sign (right) selections]
Figure 5.36: Invariant mass in an event with Figure 5.37: Invariant mass in an event with
two Same Sign leptons. Only Z samples with two Opposite Sign leptons. Z sample with
no precuts are plotted
no precut
Figure 5.36 shows that there is a small contribution from Z after the Same Sign selection. This
must be due to charge misidentification. As a first estimate of this charge misidentification
effect, one can simply compare the number of events in the Z peak for the two selections.
Z(SS)/Z(OS) ≃ 200/14000 ≃ 0.014    (5.7)
This simple counting predicts a charge misidentification of about 1.4%. It should be remembered that this effect might be slightly different in the signal samples, where Opposite Sign leptons arise from neutralinos and sleptons. The pT distribution of these leptons should be harder due to the more massive sparticles involved in the decay chain. Since charge is determined by the bending direction in the magnetic field, the rate of charge misidentification can be expected to be somewhat higher for high-pT leptons.
Nevertheless, unless equation 5.7 is a dramatic underestimate, effects from charge misidentification can safely be neglected for the investigation of discovery significance made in this section.
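The counting estimate in equation 5.7 can also be turned into a per-lepton flip probability: if each lepton's charge is misreconstructed independently with probability ε, a genuine opposite-sign pair ends up same-sign with probability 2ε(1 − ε). A minimal sketch of this conversion, using the approximate Z-peak yields quoted above (the independence assumption is an illustration, not something established in the text):

```python
import math

def per_lepton_flip_rate(n_ss, n_os):
    """Solve r = 2*eps*(1 - eps) for eps, where r is the SS/OS event ratio.

    Assumes both leptons flip independently with the same probability eps.
    """
    r = n_ss / n_os
    # 2*eps^2 - 2*eps + r = 0  ->  take the small root of the quadratic
    return (1.0 - math.sqrt(1.0 - 2.0 * r)) / 2.0

r_event = 200 / 14000                      # event-level rate, ~1.4%
eps = per_lepton_flip_rate(200, 14000)     # per-lepton rate, ~0.7%
print(f"event-level: {r_event:.4f}, per-lepton: {eps:.4f}")
```

For small rates the per-lepton probability is roughly half the event-level ratio, since either of the two leptons can flip.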
The same method can be applied on data, typically with fits of the Z peak rather than mere counting, as done in section 5.1. Since all processes will be present in real data, the continuum under the Z peak will be larger. To estimate the precision with which the charge misidentification rate can be measured with this method, more study is needed.
5.3 Discovery Potential In The One Lepton Channel
Figure 5.38: Effective mass after 1 lepton selection. Only precut selection applied.
5.3.1 Cutting Variables After One Lepton Selection
The one lepton channel has a much higher cross section than the Same Sign channel. Lepton production is limited by the branching fraction for leptonic decay, and in the one lepton channel only one of the decay legs results in a lepton. As in the Same Sign channel, the decay chain must include a chargino (χ̃±) that for instance decays to the LSP and a W, or directly to a lepton and a sneutrino (ν̃); or it could include a leptonically decaying χ̃02 where one of the primary leptons is not detected. The Standard Model background is more prominent here than in the Same Sign channel. Real one-lepton events stem from W decay; a secondary lepton can mimic a one-lepton event, as can the QCD contribution to the background; and if one lepton from a Z decay is not reconstructed, this can also look like a one-lepton event. Effort is needed to distinguish signal from background.
The technique that will be used here is to perform a multi-dimensional matrix scan to obtain the optimal set of cuts. Cutting variables after the standard precut and one isolated lepton selection are shown in figures D.1 to D.7. For E/T and the pT of the jets it is quite clear that a cut will increase the significance, while for the other variables it is harder to read the effect off the figures. There will also be correlations between cuts on various variables, meaning
that a cut that looks good on its own might not perform so well in combination with other
cuts.
5.3.2 Optimize With Matrix Approach
To find the optimal cut selection, a matrix of 4374 cut combinations was run over. These were E/T (100, 150, 200) GeV, ST (0, 0.1, 0.2), jet1 pT (100, 150, 200) GeV, jet2 pT (50, 100, 150) GeV, jet3 pT (20, 50, 100) GeV, number of jets (>=2, >=3, >=4), δφ1 (0, 0.2, 0.4) and E/T/Meff (0, 0.2). Motivated by figure 5.18, the transverse mass cut was held constant, mT > 100 GeV.
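The matrix scan described above amounts to looping over the Cartesian product of all threshold lists and keeping the combination with the best significance. A toy sketch of the bookkeeping, reduced to two variables (the event tuples and all numbers below are invented for illustration; the real scan runs the full 4374-point grid over the ntuples):

```python
import itertools
import math

# hypothetical per-event summaries: (met, jet1_pt) in GeV
signal     = [(250, 300), (180, 220), (320, 150), (90, 400)]
background = [(110, 120), (140, 260), (60, 90), (210, 105)]

met_cuts  = (100, 150, 200)
jet1_cuts = (100, 150, 200)

def passes(event, met_cut, jet1_cut):
    met, jet1 = event
    return met > met_cut and jet1 > jet1_cut

best = None
for met_cut, jet1_cut in itertools.product(met_cuts, jet1_cuts):
    s = sum(passes(e, met_cut, jet1_cut) for e in signal)
    b = sum(passes(e, met_cut, jet1_cut) for e in background)
    if b == 0:                      # guard against an empty background
        continue
    signif = s / math.sqrt(b)       # simple S/sqrt(b) figure of merit
    if best is None or signif > best[0]:
        best = (signif, met_cut, jet1_cut)

print(best)
```

The full grid is just a longer `itertools.product` over all eight cut lists; the combinatorics (3⁷ × 2 = 4374) come for free.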
From the scan, four qualitatively different sets of cuts, all giving close to the best significances, were selected. Cut01 is the baseline four-jet cut also used in the CSC studies; Cut02 is effectively a two-jet cut where two additional soft jets are required; Cut03 is a harder two-jet cut, now with only one soft extra jet; Cut04 is a hard three-jet cut with an extra soft jet. The results for these four cuts are shown in table 5.5. SU1, SU3, SU6 and SU8 all performed best at cut combinations close to Cut03 in table 5.4, while SU4 prefers a loose set of cuts (Cut02) and SU2 has highest significance for very hard cuts (Cut04). The result will be slightly different depending on whether it is S/√b or S/√(b + σ_syst²) that is optimized with respect to. In the latter formula systematic uncertainties are taken into account. When b is large compared to the signal this correction plays an important role.
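The three significance measures used in table 5.5 are straightforward to compute; a small sketch, checked against the SU1/Cut03 row (S = 492.4, B = 50.3). The size of σ_syst is left as a free parameter, since the value used in the thesis is not restated here:

```python
import math

def s_over_sqrt_b(s, b):
    return s / math.sqrt(b)

def s_over_sqrt_b_syst(s, b, sigma_syst):
    # systematic uncertainty added in quadrature under the square root
    return s / math.sqrt(b + sigma_syst ** 2)

def s_over_sqrt_s_plus_b(s, b):
    return s / math.sqrt(s + b)

# SU1 with Cut03 (table 5.5): S = 492.4, B = 50.3
print(round(s_over_sqrt_b(492.4, 50.3), 1))         # ~69.4 (table quotes 69.3)
print(round(s_over_sqrt_s_plus_b(492.4, 50.3), 1))  # 21.1, matching the table
```

Note how S/√(S+b) is much smaller than S/√b when S dominates over b, which is why the discovery-limit conclusions depend on which formula is quoted.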
5.3.3 Conclusion
The discovery limit is 5σ, so in the one lepton channel all points can be discovered (SU2 just at the limit). Effectively being close to a two-jet selection, even if hard, Cut03 gives high statistics and a very good significance when compared to the four-jet CSC selection. In points where the branching ratio of q̃ → qχ̃±1 → qlν̃ is high, there exists a possibility to measure an end-point in the invariant mass of the lepton and the quark. This measurement would be an experimental way to constrain the masses of q̃, χ̃± and ν̃. Figure 5.39 shows the effective mass after Cut03.
Cut  Meff  mT    E/T   Jets  pT jet1  pT jet2  pT jet3  pT jet4  ST    E/T/Meff  δφ1
01   >800  >100  >100  >=4   >100     >50      >50      >50      >0.2  >0.2      -
02   >800  >100  >100  >=4   >100     >50      >20      >20      >0.1  -         -
03   >800  >100  >200  >=3   >200     >150     >20      -        >0.1  >0.2      -
04   >800  >100  >200  >=4   >200     >150     >100     >20      >0.2  -         >0.4

Table 5.4: Cut values used in table 5.5. Cut01 is the CSC cut, and Cut02-04 are obtained with the matrix optimization. Values for Meff, mT, E/T and the pT of the jets are given in GeV.
Figure 5.39: Effective mass after one isolated lepton selection and Cut03 selection (given in
table 5.4). All signal points except SU2 have a high significance, see table 5.5
Point  Cut    S       B      S/√b   S/√(b+σ_syst²)  S/√(S+b)
SU1    Cut01  241.8   42.0   37.3   23.7            14.3
       Cut02  729.8   354.7  38.7   10.6            22.1
       Cut03  492.4   50.3   69.3   40.4            21.1
       Cut04  250.5   26.7   48.4   33.6            15.0
SU2    Cut01  41.3    42.0   6.3    4.0             4.5
       Cut02  95.7    354.7  5.0    1.3             4.5
       Cut03  46.8    50.4   6.6    3.8             4.7
       Cut04  39.8    26.7   7.7    5.3             4.8
SU3    Cut01  380.1   42.0   58.6   37.3            18.5
       Cut02  1130.0  354.7  60.0   16.4            29.3
       Cut03  623.6   50.3   87.8   51.2            24.0
       Cut04  327.8   26.7   63.3   44.0            17.4
SU4    Cut01  847.7   42.0   130.7  83.3            28.4
       Cut02  3703.7  354.7  196.6  53.9            58.1
       Cut03  1092.9  50.3   153.9  89.8            32.3
       Cut04  549.6   26.6   106.3  73.8            22.8
SU6    Cut01  156.8   42.0   24.2   15.4            11.1
       Cut02  423.7   354.7  22.5   6.1             15.1
       Cut03  268.5   50.4   37.8   22.1            15.0
       Cut04  161.3   26.7   31.1   21.6            11.7
SU8    Cut01  142.5   42.0   21.9   14.0            10.4
       Cut02  417.9   354.7  22.1   6.0             15.0
       Cut03  257.8   50.3   36.3   21.1            14.6
       Cut04  143.2   26.7   27.6   19.2            10.9

Table 5.5: Significance for some chosen cuts in the one lepton channel. Cut01 is with the event selection specified in the CSC5 note. Cut03 seems to perform better with respect to significance, and more of the events are kept, giving more statistics for studying endpoints or other kinematic variables. The cuts are specified in table 5.4.
Chapter 6
Conclusion
We have studied the discovery potential for Supersymmetry in the Same Sign and one lepton channels. Official datasets from the CSC activity were used. In addition a lepton isolation study was done, where we showed that additional cuts lead to a higher purity lepton sample. For the Same Sign channel we found selection cuts that enable us to discover the mSUGRA points SU1, SU3, SU4, SU6 and SU8. For 1 fb−1 the SU2 point cannot be discovered by ATLAS in this channel. The one lepton channel is complementary, yielding better results than the Same Sign di-lepton channel for some points. In particular the SU2 point, which is not discoverable in the Same Sign channel, is just above the discovery limit in the one lepton channel.
We have shown that if Supersymmetry is realized in nature in a region of parameter space with phenomenology similar to any of the studied mSUGRA points, ATLAS will be able to make a discovery already with 1 fb−1 of data. However, we have assumed a perfect knowledge of the detector, and the work to achieve this only starts now.
Appendix A
Notation
A.1 Pauli Matrices
These are 2×2 complex Hermitian and unitary matrices.

\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
\sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
\sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad (A.1)
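The defining algebra, σᵢσⱼ = δᵢⱼ·1 + iεᵢⱼₖσₖ, can be verified by multiplying the matrices out directly; a minimal plain-Python check (added here purely as a sanity check of the listed matrices):

```python
def matmul(a, b):
    """Multiply two 2x2 complex matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scale(c, m):
    return [[c * x for x in row] for row in m]

identity = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

assert matmul(s1, s1) == identity       # each sigma squares to the identity
assert matmul(s1, s2) == scale(1j, s3)  # sigma1 * sigma2 = i * sigma3
```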
A.2 Gell-Mann Matrices
These are 3 × 3 matrices satisfying the relation [g_i, g_j] = i f^{ijk} g_k, where the g_i are the generators of SU(3), with i running from one to eight. They are a generalization of the Pauli matrices. Here the matrices are written down in the fundamental representation.

\lambda_1 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad
\lambda_2 = \begin{pmatrix} 0 & -i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad
\lambda_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad (A.2)

\lambda_4 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \quad
\lambda_5 = \begin{pmatrix} 0 & 0 & -i \\ 0 & 0 & 0 \\ i & 0 & 0 \end{pmatrix}, \quad
\lambda_6 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \qquad (A.3)

\lambda_7 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -i \\ 0 & i & 0 \end{pmatrix}, \quad
\lambda_8 = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix} \qquad (A.4)
A.3 The Dirac Matrices
The matrices are shown in the Dirac representation

\gamma^0 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}, \quad
\gamma^1 = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{pmatrix} \qquad (A.5)

\gamma^2 = \begin{pmatrix} 0 & 0 & 0 & -i \\ 0 & 0 & i & 0 \\ 0 & i & 0 & 0 \\ -i & 0 & 0 & 0 \end{pmatrix}, \quad
\gamma^3 = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} \qquad (A.6)
The 4×4 spin matrices are defined in the following way [23], as the commutator of the Dirac matrices

\sigma^{\mu\nu} = \frac{i}{2} [\gamma^\mu, \gamma^\nu] \qquad (A.7)

A fifth useful γ-matrix is defined by the product

\gamma^5 = i \gamma^0 \gamma^1 \gamma^2 \gamma^3 \qquad (A.8)
Appendix B
Note On alpgen And pythia Systematics
In the CSC study the W → lνl and Z → ll datasets were generated by alpgen, while pythia was used in this thesis. The remaining background and signal datasets are generated by the same generator in both analyses. From this situation arises an opportunity to investigate systematic differences between the two generators.
The comparison between the two samples is done by applying the exact same event selection as in the CSC note [6], only with the different datasets. All signal, QCD, di-boson and tt̄ samples should give the same numbers here in order for the method to be trusted. The selection criteria are as follows:
• Cut01: Only one lepton with pT > 20 GeV satisfying the electron or muon selection criteria in table 4.1
• Cut02:
  – Four jets satisfying the requirements listed in table 4.1
  – Hardest jet pT > 100 GeV
  – 4th jet pT > 50 GeV
• Cut03:
  – E/T > 100 GeV
  – E/T > 0.2 Meff
• Cut04: Transverse sphericity ST > 0.2
• Cut05: Transverse mass MT > 100 GeV
• Cut06: Meff > 800 GeV
Following these selection criteria, table B.1 showing the cut flow was presented in the CSC note. The same object and cut selections were applied with the thesis code and datasets, resulting in table B.2.
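A cut flow like the ones in tables B.1 and B.2 is just sequential filtering with a count recorded after each step. A toy sketch of the bookkeeping (the event fields, values and the reduced cut list are invented for illustration; the real cut flow runs the full Cut01-Cut06 chain over the datasets):

```python
# each event summarised by a dict of the quantities the cuts use (toy values)
events = [
    {"n_lep": 1, "n_jet": 4, "jet1_pt": 150, "jet4_pt": 60, "met": 250, "meff": 900},
    {"n_lep": 1, "n_jet": 4, "jet1_pt": 120, "jet4_pt": 55, "met": 80,  "meff": 700},
    {"n_lep": 2, "n_jet": 5, "jet1_pt": 200, "jet4_pt": 70, "met": 300, "meff": 1200},
]

# (name, predicate) pairs applied in order, mirroring the appendix B criteria
cuts = [
    ("Cut01", lambda e: e["n_lep"] == 1),
    ("Cut02", lambda e: e["n_jet"] >= 4 and e["jet1_pt"] > 100 and e["jet4_pt"] > 50),
    ("Cut03", lambda e: e["met"] > 100 and e["met"] > 0.2 * e["meff"]),
    ("Cut06", lambda e: e["meff"] > 800),
]

cutflow = {}
remaining = events
for name, predicate in cuts:
    remaining = [e for e in remaining if predicate(e)]
    cutflow[name] = len(remaining)      # surviving events after each cut

print(cutflow)
```

Because the cuts are applied sequentially, each count is bounded by the previous one, which is the pattern visible in the tables below.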
process  Total      Cut01     Cut02    Cut03   Cut04   Cut05   Cut06   S/√B
tt̄       450000.0   183998.1  15893.4  2028.5  1546.8  131.7   36.0
W        19198.3    5935.0    1219.5   425.2   314.8   9.9     5.4
Z        18570.4    3951.7    581.0    39.0    27.3    1.7     0.2
Diboson  55947.2    29326.7   33.6     7.3     5.1     0.8     0.0
QCD      1345396.1  4408.1    986.8    0.0     0.0     0.0     0.0
SM BG    1889112.0  227619.4  18714.3  2500.1  1894.0  144.1   41.6
SU1      10860.0    2294.0    767.1    571.7   423.0   259.9   232.3   36.0
SU2      7180.0     935.1     191.5    86.7    75.6    46.1    39.6    6.1
SU3      27680.0    3999.8    1491.6   995.7   767.9   450.5   363.6   56.4
SU4      402190.0   67093.6   18451.0  7523.6  6260.4  2974.4  895.8   138.9
SU6      6070.0     986.0     462.7    342.3   250.9   161.9   147.9   22.9
SU8.1    8700.0     1064.6    414.3    296.4   214.4   151.4   136.3   21.1

Table B.1: Summary of signal and background selection cross-section (fb) in fully simulated data samples, along with a simple S/√B calculation of the corresponding significance of an observation for the SUn SUSY models.
Cuts 01-06 give the same values, within 0.1%, for all signal points and for the tt̄ and di-boson samples. QCD differs somewhat more. Alpgen and Pythia differ by a factor of five. Part of this is due to Alpgen producing more hard jets. Another effect is, however, due to an additional generator cut imposed on the Pythia samples. More study is needed to see which gives the larger effect.
Dataset  Cut01     Cut02    Cut03   Cut04   Cut05   Cut06
tt̄T1     184053.3  15903.0  2030.3  1548.5  131.7   36.0
tt̄had    4105.7    563.0    0       0       0       0
QCD      4949.2    1266.5   0       0       0       0
Diboson  29331.9   33.5     7.3     5.1     0.8     0
W        27231.8   125.9    80.4    55.7    1.9     1.0
Z        20019.8   56.5     8.22    5.52    0       0
SM       269691.5  17948.7  2126.3  1614.8  134.5   37.0
SU1      2298.4    769.9    574.0   421.4   258.6   230.7
SU2      940.3     192.6    86.9    75.4    44.9    38.4
SU3      4000.4    1490.9   995.2   767.4   447.6   361.2
SU4      66595.5   17512.4  7077.6  5919.2  2788.3  813.9
SU6      985.9     462.5    342.3   250.9   161.1   147.3
SU8      1066.0    412.0    295.8   214.3   150.6   135.7

Table B.2: Cut flow with the CSC selection. The W and Z backgrounds are generated with pythia here.
MYevents / CSC5events:

Dataset  Cut01  Cut02  Cut03  Cut04  Cut05  Cut06
tt̄T1     1      1      1      1      1      1
QCD      1.12   1.28   0      0      0      0
Diboson  1      0.99   1      1      1      0
W        4.58   0.1    0.18   0.17   0.19   0.18
Z        5.06   0.09   0.21   0.2    0      0
SU1      1      1      1      0.99   0.99   0.99
SU2      1      1      1      0.99   0.97   0.96
SU3      1      0.99   0.99   0.99   0.99   0.99
SU4      0.99   0.94   0.94   0.94   0.93   0.9
SU6      0.99   0.99   1      1      0.99   0.99
SU8      1      0.99   0.99   0.99   0.99   0.99

Table B.3: Comparison to check the ratio between the CSC datasets and the datasets used in this thesis. The numbers given are the number of pythia events in the thesis code divided by the number of alpgen events from the CSC analysis for each cut.
Appendix C
Etcone20 Efficiency And Rejection

Figure C.1: Electrons: efficiency
Figure C.2: Muons: efficiency
Figure C.3: Electrons: rejection
Figure C.4: Muons: rejection
Appendix D
One Lepton

D.1 Variables After One Lepton Selection
Figure D.1: Missing transverse energy; Standard Model values are peaked at lower values.
Figure D.2: Transverse sphericity; it is not clear from the plot whether a cut enhances the signal.
Figure D.3: Number of jets in the one lepton selection.
Figure D.4: pT of the hardest jet in the one lepton selection.
Figure D.5: pT of the second hardest jet in the one lepton selection.
Figure D.6: Delta phi between the hardest jet and E/T.
Figure D.7: Delta phi between the second hardest jet and E/T.

D.1.1 Cut Step By Step For Optimized Cut
Cut03 in table 5.4 gave the best result with respect to significance for most of the mSUGRA points. Here follow figures of how the cut variables affect the Meff distribution. The variables are plotted on the left, and on the right follows the effective mass plot after the cut on the variable to the left has been made.
Figure D.8: Left plot shows pT of the hardest jet after the E/T > 200 GeV cut and the right plot shows pT of the second hardest jet after E/T > 200 GeV. The cut on the hardest jet will be set at 200 GeV and on the second hardest at 150 GeV.
Figure D.9: Left plot shows pT of the third hardest jet after E/T > 200 GeV and the right plot shows the effective mass after the jet cuts.
Figure D.10: Left plot shows ST after the jet and E/T cuts and the right plot shows the effective mass after an additional ST > 0.1 cut.
Appendix E
Software
E.1 ROOT
The analysis in this thesis is mainly done within the ROOT framework. ROOT is an interactive, object-oriented analysis framework written in C++ [5]. The ROOT framework can also be used for data acquisition, event reconstruction, detector simulation and event generation.
In earlier years the FORTRAN-based PAW [4] system was commonly used. When the amount of data to be handled grew and the PAW libraries started to be 20 years old, PAW became too big and old-fashioned. For the LHC we will need to process a great amount of data. The ROOT development, however, first started with the NA49 experiment at CERN (Large Acceptance Hadron Detector for an Investigation of Pb-induced Reactions at the CERN SPS). Here 10 Terabytes of raw data were generated per run, and a need arose for a system that could handle such amounts of data in an efficient way.
We here use ROOT to run through a set of data stored in ntuples. An ntuple is like a table where we have rows of hundreds of thousands of events, each event having columns of some hundred variables. These variables are the particles and their properties, like transverse momentum, charge, the eta angle, etc. The distributions of these variables correspond to some selection criteria. We can make 1- or more-dimensional projections of these variables over the events and change the selection mechanisms inside ROOT. It is easy to test the selection mechanism, binning or calculation of new variables on a smaller sample before running the system on a large number of events.
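The two ntuple operations described here (select events by a criterion, then project a variable into a binned histogram) can be sketched without ROOT itself; the rows, variable names and binning below are invented for illustration:

```python
# rows of an ntuple: one dict per event (toy values, hypothetical variables)
ntuple = [
    {"pt": 45.0, "charge": +1, "eta": 0.3},
    {"pt": 12.0, "charge": -1, "eta": 1.9},
    {"pt": 78.0, "charge": +1, "eta": -0.7},
    {"pt": 33.0, "charge": -1, "eta": 2.4},
]

# selection: a cut on transverse momentum
selected = [row for row in ntuple if row["pt"] > 20.0]

# 1-dimensional projection: histogram of eta with fixed-width bins
def histogram(values, lo, hi, width):
    nbins = int((hi - lo) / width)
    counts = [0] * nbins
    for v in values:
        if lo <= v < hi:
            counts[int((v - lo) / width)] += 1
    return counts

print(histogram([row["eta"] for row in selected], -3.0, 3.0, 1.0))
```

In ROOT the same pair of steps is a selection string plus a `Draw` of the variable; the point here is only the row/column structure of the data.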
E.1.1 Cuts In Database
ROOT is a nice tool to use for physics purposes, when there is a need to do calculations and show distributions in plots. However, if the goal is to do simple systematic cuts only to count how much signal and background is left, ROOT seemed a bit clumsy and slow to work with. (Maybe there exists some trick the candidate is not aware of?)
That is why I decided to write a tool to perform these simple cuts, the perlParticleCutter (ppc). The ppc consists of a few modules: a MySQL database, perl scripts and LaTeX functionality. The reason to use databases is that they are easy to use and extremely fast when it comes to processing data. One of the advantages I found out myself is that all your data is stored nicely in different databases. MySQL was used as our database server; this is a free product which is easily installed on most Linux distributions. I chose perl as my programming language to communicate with the db-server; just like MySQL, perl has been around for a long time on Linux-based systems, ensuring that all the functionality I need is available and known to be stable. Perl's strength also shows itself when working inside data files, which is very convenient when gathering and/or sorting information to be included in the thesis.
How does it work?
The ppc system transfers data files (converted with ROOT) into the database. It does this by generating a table for each data file and placing each variable inside the corresponding field. The result is that my database contains a multitude of cuts (su1, su2 pt).
Queries
The way to approach your database is with the use of queries. They are usually small statements describing constraints on the data that I want. For example
"SELECT * from 'Su3' where Number_Elec>3"
will give me all the records for which the constraint "number of electrons greater than 3" holds. The nice part is that I can combine several of these queries and use them on all my tables (datasets) inside the database, which in my case gives quick results. These queries I write down inside a simple text file which my ppc program will feed to the database. The ppc program will then write the database response into different information files.
Now that I have all my information nicely stored, it is time to convert these information files into LaTeX code, so I am able to read the results in pdf format and include them in my thesis.
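The core of this workflow can be reproduced with Python's built-in sqlite3 module standing in for the MySQL/perl combination (the table and column names follow the example query above; the inserted rows are invented):

```python
import sqlite3

# in-memory database standing in for the MySQL server used by the ppc
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Su3 (Number_Elec INTEGER, met REAL)")
conn.executemany("INSERT INTO Su3 VALUES (?, ?)",
                 [(4, 210.0), (2, 95.0), (5, 330.0), (1, 60.0)])

# the counting query a cut reduces to: how many events survive the constraint?
(n_pass,) = conn.execute(
    "SELECT COUNT(*) FROM Su3 WHERE Number_Elec > 3").fetchone()
print(n_pass)
```

Counting with `SELECT COUNT(*)` on an indexed table is exactly the cheap operation that makes the database approach faster than re-looping over ntuples for every cut combination.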
A database is already a structure developed to handle large amounts of data. The approach has been tested against results from ROOT when the same cuts have been applied, and the output numbers are always the same. In this thesis significance scans and a combinatorial matrix approach for the optimization of cuts when searching for the signal are shown; for time-saving and convenience purposes, a system was developed that queries a database.
Bibliography
[1] The ATLAS TWiki. Available from: https://twiki.cern.ch/twiki/bin/view/Atlas/CscNote5Datsets.
[2] Geant4 Collaboration. Available from: http://geant4.web.cern.ch/geant4/.
[3] MC production status for SUSY CSC works, Wiki page. Available from: https://
twiki.cern.ch/twiki/bin/view/Atlas/SusyCscMcProduction.
[4] Physics analysis workstation - PAW. Available from: http://paw.web.cern.ch/paw/.
[5] The ROOT system home page. Available from: http://root.cern.ch.
[6] SUSY CSC note 5, internal ATLAS note.
[7] Abdesselam, A., Barr, A., Basiladze, S., Bates, R. L., Bell, P., Bingefors, N., Böhm, J., Brenner, R., Chamizo-Llatas, M., Clark, A., Codispoti, G., Colijn, A. P., D'Auria, S., Dorholt, O., Doherty, F., Ferrari, P., Ferrére, D., Górnicki, E., Koperny, S., Lefèvre, R., Lindquist, L.-E., Malecki, P., Mikulec, B., Mohn, B., Pater, J., Pernegger, H., Phillips, P., Robichaud-Véronneau, A., Robinson, D., Roe, S., Sandaker, H., Sfyrla, A., Stanecka, E., Stastny, J., Viehhauser, G., Vossebeld, J., and Wells, P. The detector control system of the ATLAS semiconductor tracker during macro-assembly and integration. oai:cds.cern.ch:1033100. Tech. Rep. ATL-INDET-PUB-2008-001, ATL-COM-INDET-2007-010, CERN-ATL-COM-INDET-2007-010, CERN, Geneva, May 2007.
[8] Aitchison, I. J. R. Supersymmetry and the MSSM: An elementary introduction, 2005.
Available from: http://arxiv.org/abs/hep-ph/0505105v1.
[9] Asai, S. Physics at LHC, November 1997.
[10] ATLAS Collaboration. Data-driven determinations of W, Z and top background to
SUSY searches.
[11] ATLAS Collaboration. Data-driven estimation of QCD background to SUSY searches. Tech. rep.
[12] ATLAS Collaboration. ATLAS detector and physics performance. technical design
report i and ii. Tech. rep., ATLAS collaboration, 1999. Available from: http://atlas.
web.cern.ch/Atlas/GROUPS/PHYSICS/TDR/TDR.html.
[13] Baer, H., Paige, F. E., Protopopescu, S. D., and Tata, X. Monte Carlo Event
Generator for p p, pbar p, and e+ e- interactions, March 2008. Available from: http:
//www.hep.fsu.edu/~isajet/.
[14] Barlow, R. Introduction to statistical issues in particle physics, 2003. Available from:
http://arxiv.org/abs/physics/0311105v.
[15] Burgess, C., and Moore, G. The Standard Model A primer. Cambridge, 2007.
[16] Cioci, A. ATLAS Twiki page. Available from: https://twiki.cern.ch/twiki/bin/
view/Atlas/LMTConnectionAndTesting.
[17] Coleman, S. R., and Mandula, J. All possible symmetries of the S matrix. Phys.
Rev 159 (1967), 1251–1256.
[18] Dirac, P. A. M. The quantum theory of the emission and absorption of radiation.
Proceedings of the Royal Society of London Series A, 114 (1927), 243–265. It is reprinted
in (Schwinger 1958).
[19] Dobbs, M. A., Frixione, S., Laenen, E., Tollefson, K., Baer, H., Boos, E.,
Cox, B., Engel, R., Giele, W., Huston, J., Ilyin, S., Kersevan, B., Krauss, F.,
Kurihara, Y., Lonnblad, L., Maltoni, F., Mangano, M., Odaka, S., Richardson, P., Ryd, A., Sjostrand, T., Skands, P., Was, Z., Webber, B. R., and Zeppenfeld, D. Les Houches guidebook to Monte Carlo generators for Hadron Collider
Physics, 2004. Available from: http://arxiv.org/abs/hep-ph/0305252.
[20] Evans, L., and Bryant, P. LHC machine. Journal of Instrumentation 3, 08 (2008),
S08001. Available from: http://www.iop.org/EJ/abstract/1748-0221/3/08/S08001.
[21] Frixione, and et al. Matching NLO QCD and parton showers in heavy flavour
production. JHEP 0308 (2003).
[22] Knowles, C., Moretti, M., Richardson, O., and Webber, S. Monte Carlo package for simulating Hadron Emission Reactions With Interfering Gluons, 2001. hep-ph/0011363. Available from: http://hepwww.rl.ac.uk/theory/seymour/herwig/.
[23] Mandl, F., and Shaw, G. Quantum Field Theory (Revised Edition). Wiley, 1993.
[24] Mangano, M. L., Moretti, M., Piccinini, F., Pittau, R., and Polosa, A. D.
ALPGEN, a generator for hard multiparton processes in hadronic collisions. JHEP 0307
(2003), 001. Available from: http://arxiv.org/abs/hep-ph/0206293.
[25] Pedersen, M. Supersymmetry, a study of the supersymmetric opposite sign di-lepton channel. Master's thesis, University of Oslo, Norway, June 2008. Available from: http://folk.uio.no/maikenp/Master/master.html.
[26] Read, A. Simple Analysis Optimization. University of Oslo, 2006. Available from:
http://www.fys.uio.no/epf/seminar/slides/simpleopt.pdf.
[27] Rolke, W., and Lopez, A. How to claim a discovery, 2003. Available from: http://arxiv.org/abs/physics/0312141v1.
[28] Sjostrand, Mrenna, and Skands. A brief introduction to PYTHIA 8.1. Computer Physics Communications, Volume 178, Issue 11 (June 2008), 852–867. Available from: http://adsabs.harvard.edu/abs/2007arXiv0710.3820S.
[29] Stapnes. Instrumentation for high energy physics. Available from: http://www.fys.
uio.no/~stapnes/pylos-I.pdf.
[30] Tata, X., and Baer, H. Weak scale supersymmetry: from superfields to scattering
events. Cambridge University Press, 2006.
[31] The ALICE Collaboration, Aamodt, K., and et al. The ALICE experiment
at the CERN LHC. Journal of Instrumentation 3, 08 (2008), S08002. Available from:
http://stacks.iop.org/1748-0221/3/S08002.
[32] The ATLAS Collaboration, Aad, G., and et al. The ATLAS Experiment at
the CERN Large Hadron Collider. Journal of Instrumentation 3, 08 (2008), S08003.
Available from: http://stacks.iop.org/1748-0221/3/S08003.
[33] The CMS Collaboration, Chatrchyan, S., and et al. The CMS experiment
at the CERN LHC. Journal of Instrumentation 3, 08 (2008), S08004. Available from:
http://stacks.iop.org/1748-0221/3/S08004.
[34] The LHCb Collaboration, Alves Jr, A., and et al. The LHCb detector at
the LHC. Journal of Instrumentation 3, 08 (2008), S08005. Available from: http:
//stacks.iop.org/1748-0221/3/S08005.