Physik-Institut
Experimental Methods in Particle Physics (HS 2015)
Electronics, Data Acquisition and Trigger - Trigger
Lea Caminada, lea.caminada@physik.uzh.ch

Overview
• Interlude: LHC operation
• Data rates at LHC
• Trigger overview
• Coincidence logic
• Pipelines
• Higher complexity systems
• Important concepts
– Occupancy
– Pileup
– Cross talk
– Random coincidences
– Dead time

Interlude: LHC operation

LHC Proton bunches
• 2808 bunches per beam
• 10^11 protons per bunch
• 40 MHz bunch collision rate
• BX: bunch crossing

LHC Fill
• An LHC fill usually lasts a few hours
• After injection and acceleration, the beams are
– first focused
– then brought into collision
• The LHC then “declares stable beams” → experiments start data-taking
• Luminosity decreases over the course of the run
• A run is divided into “luminosity sections” (also called “luminosity blocks”)

Trigger

Large data volume at the LHC
• The proton-proton collision rate at the LHC is 40 MHz
• The pixel detector has 66M readout channels, of which O(10^4) are hit in an event
• The data size to be stored for each pixel is 4 bytes
→ Data volume of 120 000 TB per day!
• Offline data analysis would take forever…
• Therefore:
→ Need to reduce the data volume online: Trigger
→ Hardware and/or software triggers select events in real time based on relevant detector information

Trigger Concepts
• In collider experiments, events of interest occur at a much lower rate than the proton-proton collision rate

Trigger Strategy
• Good knowledge of the detector and the signatures is needed to efficiently select interesting events
– Relevant detector parts and their performance
– Needed/desired measurement precision
– Physical properties of signal and background events (expected signatures, kinematic distributions, mass constraints, etc.)
• In the case of a multipurpose experiment: decide which processes are important…
• The trigger strategy needs detailed planning and has physical, technical and political aspects.
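The data-volume estimate above can be checked with a short back-of-the-envelope calculation; the per-event numbers (hit count, bytes per hit, collision rate) are the ones quoted on the slide:

```python
# Rough pixel-detector data-rate estimate, using the numbers quoted above.
collision_rate_hz = 40e6   # 40 MHz bunch-crossing rate
hits_per_event = 1e4       # O(10^4) pixel hits per event
bytes_per_hit = 4          # 4 bytes stored per hit pixel

bytes_per_second = collision_rate_hz * hits_per_event * bytes_per_hit
tb_per_day = bytes_per_second * 86400 / 1e12   # 1 TB = 10^12 bytes

print(f"{tb_per_day:.0f} TB/day")  # order 10^5 TB/day, consistent with the ~120 000 TB quoted
```

The exact figure depends on how one rounds the hit multiplicity, but the order of magnitude makes it clear why offline-only processing is hopeless.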
Important parameters
• Timing
– How long does the trigger need to form a decision?
– Need fast processes
– Need intermediate storage
• Rate
– Maximum rate defined by the available bandwidth for permanent storage
– Usually given by background levels
• Efficiency
– Efficiency to select signal events
– Optimized according to physics needs

Simple Trigger Setup
• Trigger setup for a sensor producing a signal at a random time (for example cosmic rays, radioactive decay)
[Diagram: DET → DELAY → ADC; DET → DISCR → conversion signal for ADC]
• A discriminator is used to form the conversion signal for the ADC
• The signal is passed over a delay line

Coincidence Trigger
[Diagram: DET 1 → DISCR → DELAY and DET 2 → DISCR → DELAY → COINCIDENCE (“AND”) → SCALER]
• Setup to detect a 2-body decay, for example π0 → γγ

Coincidence Trigger
[Timing diagram: Input 1, Input 2, Output]
• The coincidence triggers if there is some overlap during the time window Δt
• In order to allow for a true coincidence, both signals need to travel the same signal path length → need to introduce (and adjust) delay lines

Random Coincidences
• Need to take into account the probability that the coincidence trigger registers two hits which are not from the same event:
P = Δt × Z1 × Z2
where Δt is the time window of the coincidence and Z1, Z2 are the count rates of detectors 1 and 2
[Diagram: true coincidence vs. random coincidence]

Dead time
• Dead time is the time during which the detector and readout electronics are busy processing the previous event and not ready to accept new events
• Any event that happens during the dead time is lost!
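The random-coincidence rate P = Δt × Z1 × Z2 from above can be evaluated directly; the window and singles rates below are illustrative numbers, not from the slides:

```python
def random_coincidence_rate(dt, z1, z2):
    """Accidental coincidence rate for time window dt [s] and singles rates z1, z2 [Hz]."""
    return dt * z1 * z2

# Illustrative: 20 ns coincidence window, 10 kHz singles rate in each detector
rate = random_coincidence_rate(20e-9, 1e4, 1e4)
print(rate)  # ≈ 2 accidental coincidences per second
```

This shows why the coincidence window Δt should be made as short as the signal timing allows: the accidental rate scales linearly with it.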
• Measures for dead time:
– Total dead time d: usually measured in percent, as a fraction of the total measuring time
– Dead time per event τ: measured in ms (determined by the actual processing time in the electronics circuits)

Dead time: Events with random time distribution
• Efficiency is related to the total dead time by: ε = 1 − d
• For a source with actual event rate Rtrue, the rate of events that can be detected is:
– Racc = ε Rtrue = (1 − d) Rtrue
• The total dead time is related to the dead time per event:
– d = Racc τ = (1 − d) Rtrue τ
• The total dead time then becomes: d = Rtrue τ / (1 + Rtrue τ)
• And the efficiency becomes: ε = 1 / (1 + Rtrue τ)
• Note that the efficiency decreases with increasing rate
• Note that there will always be some dead time

Dead time: Events with fixed occurrence (collider)
• Events occur at a fixed rate with time separation tBX
• Two possible scenarios:
– τ < tBX → no dead time (used to be the case at LEP)
– τ >> tBX → need to keep the event rate low! → complex trigger system

From coincidence triggers to higher complexity
• Two ways:
1) Larger number of channels → more complex combinations
– Usually based on FPGAs (field-programmable gate arrays)
– Several hundred inputs
– Programmable operations → complex logic combinations
→ Need longer delay lines: the calculation time increases with complexity
2) Additional computations after digitization
– For example, π0 → γγ: compute the π0 mass from the energies of the detected photons, then apply a mass-window selection
→ Second trigger level (can be built from FPGAs or fast processors)
→ Need long delay lines and intermediate storage

FPGA
• Designed to be configured by the customer after manufacturing using a hardware description language (HDL)
• Contains a large number of logic gates and RAM blocks
• Logic blocks can be configured to perform complex combinatorial functions

Pipelines
• Simple delay lines are usually not feasible for long delays
– A 100 ns delay needs about 20 m of cable
• Pipelines allow for intermediate storage
– Analog pipelines:
built from switched capacitors
– Digital pipelines: using digital registers
• Pipelines consist of several buffer cells

Pipelines
[Diagram: buffer cells with read and write pointers]
• The read/write pointers are moved at a given clock frequency
– Chosen to match the time resolution of the detector signal
• Latency: the time difference between the read and write pointer (latency < buffer length)
• Circular buffer: the pointers jump back to the first cell when they reach the end of the buffer

Example: Multi-step trigger for π0 → γγ
• Step 1: Digital signals of the photon energies from the ADC
• Step 2: Adding the two values
• Step 3: Comparing the sum to two values (upper and lower bound of the π0 mass window), can be done in parallel
• Step 4: Store the event if mlow < mγγ < mhigh
→ Trigger latency is 4 tBX → 3 events are in the pipeline at the same time
Note: The trigger decision at the LHC takes longer than tBX → the trigger decision itself needs to be pipelined

Occupancy
• Probability to see a signal in a given channel
• The aim is to have a small occupancy (<<1)
• The probability to create fake matches (for example of hits to tracks) increases with increasing occupancy

Pileup
• Electronic pileup: the signal decay time is too long → the comparator is still high when the next event arrives → not possible to record the new event → data loss

Pileup
• Pileup from additional pp collisions at the LHC
• At the LHC, bunches contain 10^11 protons → several collisions can happen at the same time
• The hard process (the process of interest which triggers the readout) is overlaid by particles from other collisions
• Particles from secondary collisions need to be identified offline and subtracted

Pileup in ATLAS
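The circular-buffer pipeline with read/write pointers described above can be sketched in software; the buffer length, latency and sample values here are illustrative:

```python
class CircularPipeline:
    """Ring buffer with a write pointer and a read pointer trailing it by `latency` cells."""

    def __init__(self, length, latency):
        assert latency < length          # latency must be smaller than the buffer length
        self.cells = [None] * length
        self.write_ptr = 0
        self.latency = latency

    def clock(self, sample):
        """One clock tick: store the new sample, return the sample written `latency` ticks ago."""
        self.cells[self.write_ptr] = sample
        read_ptr = (self.write_ptr - self.latency) % len(self.cells)  # read pointer wraps around
        out = self.cells[read_ptr]
        self.write_ptr = (self.write_ptr + 1) % len(self.cells)       # jump back to cell 0 at the end
        return out

pipe = CircularPipeline(length=8, latency=3)
outputs = [pipe.clock(s) for s in range(10)]
print(outputs)  # first 3 outputs are None (buffer filling), then each sample delayed by 3 ticks
```

In hardware the same structure is realized with switched capacitors or registers clocked at the bunch-crossing frequency; the software model only illustrates the pointer arithmetic.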
Pileup in CMS

Pileup in Run 1
[Plots: peak pileup vs. time, peak luminosity vs. time]

Pileup in Run 2

CMS Trigger System

CMS Trigger System: L1 and HLT
• LHC BX rate: 40 MHz
• L1:
– 100 kHz rate
– 3.2 µs latency (128 BX)
• HLT:
– ~300 Hz rate
– 150 ms latency (depends on CPU)

CMS L1 Trigger
• Fast readout of the detector with limited granularity
• Only the muon system and the calorimeters take part in the decision
• Implementation using FPGAs and ASICs
• Synchronous operation

CMS L1 Trigger Objects
• Track segments in the muon system
– muons
• Towers of calorimeter cells in ECAL and HCAL:
– Jets, electrons, photons
– Total energy, missing energy
– Isolation

CMS L1 muon trigger

CMS L1 calo trigger

CMS L1 Trigger

CMS L1 Trigger Menu Example (2012)
Trigger                    Threshold [GeV]
Single muon                16
Double muon                10, 3.5
Isolated double muon       3, 0
Single e/gamma             22
Isolated single e/gamma    20
Double e/gamma             13, 7
Muon + electron            7, 12
Single jet                 128
Quad jet                   4 × 40
Six jet                    6 × 45
MET                        40
HT                         150

L1 rate depends on luminosity
• The trigger menu needs to be adjusted over the course of the experiment

Trigger prescales
• Use prescales to adjust the rate of a given trigger
– Needed in high-luminosity runs for some triggers to keep the system alive
– Prescale n: keep only every nth event (prescale 1 means no prescale)
• Dynamic prescales
– Based on the availability of trigger bandwidth
– Automatically reduce prescales as the luminosity falls over the course of the run
• Note: prescales need to be taken into account in offline physics analysis!

L1 muon triggers

L1 jet triggers

CMS High-Level Trigger (HLT)
• Events that are accepted by L1 are passed to the HLT
• Full readout of the detector at 100 kHz
• Implemented as software algorithms running on a large cluster of commercial processors (event filter farm), ~15k cores (30k processors or threads)

The challenge

CMS High-Level Trigger (HLT)
• Form regions of interest to speed up reconstruction
– i.e.
if there is an L1 muon or calo tower → perform local reconstruction in the surrounding detector region
• Reject events as early as possible to free CPUs

CMS HLT Objects
• Muons
– Tracker and muon system
• Electrons and photons
– Tracker and calorimeter
• Taus
– Tracker and calorimeter
• Jets, MET, HT
– Tracker and calorimeter
• B-tagging (secondary vertices)
– Tracker
• Other more complex variables

HLT muons
• Track segment in the muon system
• Matched to a track in the tracking detector
• Isolation requirement based on the calorimeter

HLT electrons and photons
• Tower in the electromagnetic calorimeter
• Matched track in the tracking detector?
– yes: electron; no: photon
• Isolation requirement based on the calorimeter

HLT taus
• Leptonic tau decays (e, mu) using the e/mu triggers
• Hadronic tau decays: 1-prong or 3-prong decays
• Calo cluster matched to tracks in the tracking detector
• Isolation requirement based on the calorimeter

HLT b-tagging
• Calo cluster with associated tracks in the tracker
• B-tagging based on the long lifetime of the b-quark and its large mass
• Form a secondary vertex from the tracks in the jet

HLT menu

CMS HLT Trigger Menu Example (2012)
Trigger                    Threshold [GeV]
Single muon                40
Single isolated muon       24
Double muon                17, 8
Single electron            80
Isolated single electron   27
Single photon              150
Double photon              36, 22
Muon + electron            8, 17
Single jet                 320
Quad jet                   4 × 80
Six jet                    6 × 45
MET                        120
HT                         750

CMS HLT Config Browser

Prescales for different luminosities
[Table: prescale values per L1 condition]

HLT rate depends on luminosity
• Different HLT paths have different allocated rates

HLT muon efficiency
• Measured efficiency compared to simulated efficiency
• Reasonable agreement between data and simulation
• Note: need to have ways of measuring the trigger efficiency

Measurement of trigger efficiency
• Knowledge of the trigger efficiency is needed for physics analysis
• The strategy for measuring the trigger efficiency needs to be thought of from the very beginning
• Events that are rejected at trigger level are lost, i.e.
not available for the efficiency calculation!
→ need a backup trigger with a looser selection → can be prescaled

Measurement of trigger efficiency
• Example: measurement of the efficiency of QuadJet50
• Efficiency measured w.r.t. reconstructed events
– 8 jets with pT > 30 GeV, leading 4 jets with pT > 50 GeV
• Use EightJet30 as backup trigger → need to take the bias into account
[Plot: efficiency vs. jet pT [GeV] for Data, MC and MC (HLT_8j30)]

Summary
• Triggers are needed to reduce the data volume and allow for permanent storage
• The trigger strategy has physical, technical and political aspects and needs careful planning
• Parameters to consider are timing, rate and signal efficiency
• Discussed trigger concepts ranging from simple coincidence triggers to the multi-level trigger systems used at the LHC
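The backup-trigger method for measuring a trigger efficiency, described earlier, can be sketched with a toy simulation. The turn-on model, thresholds and event sample below are all illustrative assumptions, not the real HLT behaviour:

```python
import random

# Toy efficiency measurement: events are collected with a loose (backup) trigger,
# and the efficiency of the tight trigger is the fraction that also fires it.
random.seed(1)

def tight_trigger_fires(pt, threshold=50.0, smear=5.0):
    # Crude turn-on model: Gaussian resolution smearing around a hypothetical 50 GeV threshold
    return pt + random.gauss(0.0, smear) > threshold

# Hypothetical jet-pT spectrum of events recorded by the backup trigger
backup_events = [random.uniform(30.0, 100.0) for _ in range(10_000)]

passed = sum(tight_trigger_fires(pt) for pt in backup_events)
efficiency = passed / len(backup_events)
print(f"efficiency w.r.t. backup sample: {efficiency:.2f}")
```

In a real analysis the efficiency would be measured in bins of pT to map out the turn-on curve, and the bias from the backup trigger's own selection has to be accounted for, as noted above.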