Distributed Trigger System for the LHC experiments

Krzysztof Korcyl
ATLAS experiment laboratory, H. Niewodniczanski Institute of Nuclear Physics, Cracow
Contents
• LHC
  • physics program
  • detectors (ATLAS, LHCb)
• LHC T/DAQ system challenges
• T/DAQ system overview
  • ATLAS
  • LHCb
• T/DAQ trigger and data collection scheme
  • ATLAS
  • LHCb
CERN and the Large Hadron Collider, LHC
The LHC is being constructed underground in a 27 km tunnel and will provide head-on collisions
of very high energy protons. ALICE, ATLAS, CMS and LHCb are the approved experiments.
The ATLAS LHC Experiment
The LHCb LHC Experiment
The LHCb LHC Experiment: an event signature
Challenges for Trigger/DAQ system
The challenges:
• unprecedented LHC rate of 10^9 interactions per second
• large and complex detectors with O(10^8) channels to be read out
• bunch crossing rate of 40 MHz requires a decision every 25 ns
• event storage rate limited to O(100) MB/s
The big challenge: to select rare physics signatures with high efficiency
while rejecting common (background) events.
• e.g. the H → γγ (m_H ≈ 100 GeV) rate is ~10^-13 of the LHC interaction rate
Approach: a three-level trigger system (a rough rate arithmetic sketch follows below).
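As an illustration of why three trigger levels are needed, here is a minimal back-of-the-envelope sketch using the figures quoted on this slide together with the design accept rates from the next slide; all values are approximate, rounded assumptions.

```python
# Back-of-the-envelope rate arithmetic for the three-level trigger, using the
# approximate figures quoted on these slides (all values assumed/rounded).

interaction_rate_hz = 1e9      # ~10^9 interactions per second at the LHC
bunch_crossing_hz = 40e6       # 40 MHz bunch crossing rate
lvl1_out_hz = 100e3            # LVL1 accept rate (design figure, next slide)
lvl2_out_hz = 1e3              # LVL2 accept rate
ef_out_hz = 100.0              # Event Filter output rate to storage

# Rejection factor achieved at each trigger level
print(f"LVL1 rejection: {bunch_crossing_hz / lvl1_out_hz:.0f}x")
print(f"LVL2 rejection: {lvl1_out_hz / lvl2_out_hz:.0f}x")
print(f"EF rejection:   {lvl2_out_hz / ef_out_hz:.0f}x")

# Expected H -> gamma gamma rate: ~10^-13 of the interaction rate
higgs_rate_hz = interaction_rate_hz * 1e-13
print(f"H -> yy rate: ~{higgs_rate_hz:.0e} Hz "
      f"(roughly one such event every {1 / higgs_rate_hz / 3600:.0f} hours)")
```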
ATLAS Trigger/DAQ - system overview
[Diagram: data flows from the CALO, MUON and TRACKING detectors (interaction rate ~1 GHz, bunch crossing rate 40 MHz) through pipeline memories to LVL1, then via Readout Drivers and Readout Buffers to LVL2 (steered by Regions of Interest), the Event Builder and the Event Filter (EF), and finally to Data Recording.]
• LVL1: output rate 100 kHz, latency < 2.5 μs, readout throughput 200 GB/s
• LVL2: output rate 1 kHz, latency < 10 ms, event-building throughput 4 GB/s
• EF: output rate 100 Hz, latency < 1 s, recording throughput 200 MB/s
• The LVL1 decision is based on coarse-granularity calorimeter data and the muon trigger stations
• LVL2 can get data at full granularity and can combine information from all detectors, with the emphasis on fast rejection. Regions of Interest (RoIs) from LVL1 are used to reduce the data requested (a few % of the whole event) in most cases
• The EF refines the selection according to the LVL2 classification, performing a fuller reconstruction; more detailed alignment and calibration data can be used
(A rough data-volume sketch follows below.)
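To see how the quoted throughputs relate to the accept rates, here is a small order-of-magnitude sketch; the average event size (~2 MB) and the RoI fraction (~2%) are assumptions, not figures from the slide, so the results only roughly reproduce the numbers above.

```python
# Rough data-volume estimate per trigger level. The event size and RoI
# fraction are assumptions; the rates are the design figures quoted above.

event_size_mb = 2.0          # assumed average ATLAS event size (MB)
roi_fraction = 0.02          # LVL2 requests "a few %" of the event via RoIs

lvl1_accept_hz = 100e3
lvl2_accept_hz = 1e3
ef_accept_hz = 100.0

readout_gbs = lvl1_accept_hz * event_size_mb / 1e3                   # full readout after LVL1
lvl2_roi_gbs = lvl1_accept_hz * event_size_mb * roi_fraction / 1e3   # RoI data pulled by LVL2
eb_gbs = lvl2_accept_hz * event_size_mb / 1e3                        # event building after LVL2
record_mbs = ef_accept_hz * event_size_mb                            # data recording after EF

print(f"Readout after LVL1: ~{readout_gbs:.0f} GB/s")
print(f"RoI data into LVL2: ~{lvl2_roi_gbs:.0f} GB/s")
print(f"Event building:     ~{eb_gbs:.0f} GB/s")
print(f"Data recording:     ~{record_mbs:.0f} MB/s")
```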
ATLAS overall data collection scheme
[Diagram: the Readout Subsystem (RODs feeding ROS units) connects through a large switch to the data collection network. LVL1, via the RoI Builder (RoIB) and the LVL2 supervisor (L2SV), steers the LVL2 farm of L2PUs behind their own switch; the Data Flow Manager (DFM) and the SFIs build complete events and pass them through further switches to the EF subfarms of CPUs.]
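A toy sketch of the per-event message sequence implied by this scheme; the function names and stand-in callables are illustrative assumptions, not the real ATLAS data-flow software.

```python
# A toy sketch (not the real ATLAS software) of the sequence implied by the
# diagram: RoI-driven LVL2 decision followed by full event building.

def process_event(event_id, rois, request_roi, run_lvl2, build_event, clear):
    """One event's path through the data collection scheme.

    request_roi(event_id, roi) -> data fragment from a ROS
    run_lvl2(fragments)        -> True (accept) or False (reject)
    build_event(event_id)      -> DFM assigns an SFI, full event goes to an EF subfarm
    clear(event_id)            -> tell the ROS buffers to drop the event
    """
    fragments = [request_roi(event_id, roi) for roi in rois]  # few % of the event
    if run_lvl2(fragments):
        build_event(event_id)
        return "accept"
    clear(event_id)            # rejected: free the readout buffers
    return "reject"

# Toy usage with stand-in callables:
decision = process_event(
    42, rois=["em_cluster", "muon_track"],
    request_roi=lambda ev, roi: {"roi": roi, "ev": ev},
    run_lvl2=lambda frags: len(frags) >= 2,
    build_event=lambda ev: None,
    clear=lambda ev: None,
)
print(decision)   # -> "accept"
```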
Why trigger on GRID?
• First code benchmarking shows that local CPU power may not be sufficient (budget and
manageable cluster size), so the work should be distributed over remote clusters.
• Why not? GRID technology will provide platform-independent tools which perfectly match
the needs to run, monitor and control the remote trigger algorithms.
• Development of dedicated tools (based on GRID technology) ensuring a quasi real-time
response of the order of a few seconds might be necessary; this is a task for CROSSGRID.
Data flow diagram
[Diagram: events flow from the Experiment through the local high-level trigger to tape; events selected for remote processing go to an event buffer, from which the Event Dispatcher forwards them via the CROSSGRID interface to one or more remote high-level trigger farms, each of which returns a decision.]
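A minimal sketch of the Event Dispatcher's role in this data flow, assuming a simple in-memory event buffer and a stand-in remote trigger function rather than a real CROSSGRID service.

```python
# Minimal Event Dispatcher sketch: drain the event buffer, forward each event
# to a remote high-level trigger site and collect the decisions.

import queue
from concurrent.futures import ThreadPoolExecutor

def remote_high_level_trigger(event):
    """Stand-in for a trigger algorithm running at a remote site."""
    return {"event_id": event["id"], "decision": event["sum_et"] > 100.0}

def dispatch(event_buffer, sites=3):
    """Forward buffered events to remote sites and return their decisions."""
    decisions = []
    with ThreadPoolExecutor(max_workers=sites) as pool:   # one worker per remote site
        futures = []
        while not event_buffer.empty():
            futures.append(pool.submit(remote_high_level_trigger, event_buffer.get()))
        for fut in futures:
            decisions.append(fut.result())
    return decisions

# Toy usage: fill the buffer with a few fake events and dispatch them.
buf = queue.Queue()
for i in range(5):
    buf.put({"id": i, "sum_et": 50.0 * i})
print(dispatch(buf))
```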
Operational features
• The Event Dispatcher is a separate module, easy to activate and deactivate
• The implementation is independent of the specific trigger solutions of a given experiment
• Dynamic resource assignment keeps the system running within the assumed performance
limits (event buffer occupancy, link bandwidth, number of remote centres, timeout rate, ...)
• Fault tolerance and timeout management (no decision within the allowed time limit);
see the sketch below
• User interface for monitoring and control by a shifter
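One way the timeout management could look, assuming a fallback to a local trigger when no remote decision arrives within a few seconds; the function names and the time limit are illustrative assumptions.

```python
# Timeout-management sketch: fall back to local processing when a remote
# site does not answer within the allowed time limit.

import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

TIMEOUT_S = 2.0                 # assumed quasi real-time limit of a few seconds

def remote_trigger(event):
    """Stand-in for a remote high-level trigger (network + processing time)."""
    time.sleep(event["remote_delay"])
    return ("remote", event["id"], True)

def local_trigger(event):
    """Fallback used when the remote decision does not arrive in time."""
    return ("local-fallback", event["id"], True)

def decide_with_timeout(event, pool):
    future = pool.submit(remote_trigger, event)
    try:
        return future.result(timeout=TIMEOUT_S)
    except FutureTimeout:
        future.cancel()          # counts towards the monitored timeout rate
        return local_trigger(event)

with ThreadPoolExecutor(max_workers=2) as pool:
    print(decide_with_timeout({"id": 1, "remote_delay": 0.1}, pool))  # remote decision
    print(decide_with_timeout({"id": 2, "remote_delay": 3.0}, pool))  # falls back locally
# Note: the pool waits for the slow stand-in task to finish before exiting.
```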
Testbed for distributed trigger
The system is easy to test by substituting real experiment data with a PC sending Monte Carlo data.
[Diagram: a PC at CERN sends Monte Carlo data into the Event Buffer; the Event Dispatcher, steered by a Monitoring and Control Tool, forwards events to remote sites in Spain, Poland and Germany, each of which returns a decision.]
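A toy stand-in for the "PC sending Monte Carlo data": it generates fake events and fills a local event buffer; the event fields and value ranges are invented for illustration.

```python
# Toy generator for the testbed: instead of a real detector, a PC produces
# Monte Carlo-like events and feeds them into the event buffer.

import json
import queue
import random

def generate_mc_event(event_id):
    """Fake event record with a few physics-like quantities (made-up fields)."""
    return {
        "id": event_id,
        "n_tracks": random.randint(10, 200),
        "sum_et": random.uniform(0.0, 500.0),    # GeV, illustrative scale
    }

def fill_event_buffer(buffer, n_events=100):
    """Stand-in for the 'PC at CERN' pushing MC data into the Event Buffer."""
    for i in range(n_events):
        buffer.put(json.dumps(generate_mc_event(i)))

event_buffer = queue.Queue()
fill_event_buffer(event_buffer)
print(f"{event_buffer.qsize()} Monte Carlo events buffered for remote processing")
```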
Summary
• Trigger systems for the LHC experiments are challenging
• GRID technology may help to compensate for the lack of local CPU power
• The proposed distributed trigger structure, with a separate Event Dispatcher module,
offers a cross-experiment platform independent of specific local trigger solutions
• Implementation on a testbed is feasible even without running experiments
• Dedicated tools are to be developed within the CROSSGRID project to ensure
interactivity, monitoring and control