Introduction to electronics for a possible LHCb upgrade

J.Christiansen / CERN
Upgrade options
Front-end architectures
Required technologies and building blocks
Cost and manpower estimates
The two major upgrade options
• A: Hardwired vertex trigger within L0 latency
  – Vertex detector with synchronous 40 MHz readout to an FPGA-based vertex and impact-parameter trigger in the counting house
    • Front-end of this also compatible with option B
  – Strips or pixels? (very different front-ends, readout, and trigger system)
  – Latency a major bottleneck: 160 or 256 clock cycles (see the worked numbers below)
    • Beetle, Otis (and HPD?) will be restricted to 160 clock cycles
  – Complicated FPGA-based wired processing system
    • Can sufficiently simple and efficient algorithms be found?
  – Physics efficiency? (TT information needed?)
  – Other detectors do not need to be changed unless rate or radiation problems appear
• B: Full readout of all detectors for all bunch crossings (~30 MHz)
  – All front-end electronics to be changed
    • Possible exception for the muon system, as binary muon data at 40 MHz is already available in the muon trigger
    • When changing front-ends completely there will most likely be a strong urge to also change detectors
      – Higher rates, higher radiation, higher performance, ...
  – Synchronous or asynchronous front-end architecture
  – Microelectronics technologies for new front-ends (same across sub-detectors)
  – Large number of radiation-hard optical links (same across sub-detectors)
  – High-performance DAQ interface (same across sub-detectors)
  – Major effort required for development of new front-ends
• (C: Get current detectors to work as required)
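A minimal sketch of the latency budget these pipeline depths imply, assuming only the 25 ns bunch-crossing period of the 40 MHz LHC clock (the pipeline depths are the ones quoted above):

    # Latency budget for the hardwired vertex trigger (option A).
    # Assumes the 25 ns bunch-crossing period of the 40 MHz LHC clock.
    BUNCH_CROSSING_NS = 25

    for depth in (160, 256):  # pipeline depths in clock cycles quoted above
        budget_us = depth * BUNCH_CROSSING_NS / 1000
        print(f"{depth} clock cycles -> {budget_us:.1f} us latency budget")

    # 160 cycles -> 4.0 us, 256 cycles -> 6.4 us: data transmission plus the
    # FPGA vertex/impact-parameter algorithm must fit inside this window.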
Major building blocks currently used
• TTC: timing and trigger (control) distribution
  – TTCrx: 1,300, QPLL: 3,000, optical fan-out: 50, readout supervisor and switch
  – Replacement to be found (backward compatibility?)
• ECS: control and monitoring of front-end electronics
  – SPECS: 400, ELMB: 300, DCU chip: 500
  – Replacement: use of only one system in the cavern would be preferred
• Readout link: data transport from the radiation zone
  – GOL: 8,000, ORC: 400 modules
  – Radiation-hard replacement with higher bandwidth to be found
• DAQ interface
  – TELL1 (+RICHL1): 400, GBE interface: 400
  – Common high-bandwidth replacement to be made
• Power supplies: radiation tolerance for use in the cavern
  – Radiation-tolerant linear regulator: 15,000, Wiener Marathon: 500 channels of 300 W
  – Replacement for lower voltages and higher radiation tolerance to be found, with better power efficiency
• Micro-electronics for front-ends
  – 0.25 um CMOS radiation hard: 10 designs, ~30 prototype runs, 100k produced (plus GOL, QPLL, DCU)
  – AMS 0.7 um CMOS: 3 designs, ~7 prototype runs, 4k produced
  – Dmill: TTCrx, ASDBLR
  – Radiation-hard and affordable technology to be identified; use of common radiation-hard IP blocks
• Common front-ends across sub-detectors:
  – Vertex, Pileup, ST, TT: Beetle
  – Ecal and Hcal
  – Muon MWPC, Muon GEM
  – Common front-ends should be used to the extent possible
Front-end architectures for 40 MHz readout (option B)
• Basic rate control required to match the data rate to the DAQ
  – Real positive trigger or simple throttle?
    • A: Trigger applied in the front-end
      – Must though still work up to the full rate
      – Pipeline required in the FE; latency?
    • B: Trigger applied in the DAQ interface
      – High bandwidth from front-end to DAQ interface needed from day 1
      – No pipeline required in the FE
• Synchronous, without zero suppression in the FE
  – Constant high bandwidth -> many links
  – Simple buffering, no truncation
  – Simple synchronization verification
  – (known empty bunches could be removed, ~25%)
• Asynchronous, with zero suppression in the FE (see the sketch below the figure)
  – Lower but variable bandwidth -> limited number of links (if occupancy is low)
  – Derandomizer buffer sizes to be studied and fixed
  – Well-defined truncation limits needed
  – Synchronization verification complicated
• See talk by David
[Figure: block diagrams of the two dataflows. Both show the path from the front-end over optical links to the DAQ interface and on to the DAQ and trigger; the stages (zero suppression, compression, data formatting, MUX) sit in the front-end or in the DAQ interface depending on the option, with a derandomizer buffer holding data during the trigger decision.]
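The bandwidth trade-off between the two options can be illustrated with a small model; the channel count, occupancy, and word sizes below are purely illustrative assumptions, not LHCb figures:

    # Illustrative comparison: synchronous readout (every channel, every
    # crossing) vs. asynchronous readout with zero suppression in the FE.
    # All parameters are assumptions chosen for illustration only.
    CHANNELS = 100_000     # channels in one front-end partition (assumed)
    BITS_RAW = 8           # raw word per channel, synchronous case (assumed)
    OCCUPANCY = 0.02       # mean fraction of channels hit per crossing (assumed)
    BITS_HIT = 24          # address + data per hit, zero-suppressed (assumed)
    RATE = 30e6            # crossings with collisions per second (~30 MHz)
    LINK_BW = 2.56e9       # bits/s per optical link (GOL-class)

    sync_bw = CHANNELS * BITS_RAW * RATE
    async_bw = CHANNELS * OCCUPANCY * BITS_HIT * RATE
    print(f"synchronous : {sync_bw/1e12:.1f} Tbit/s -> {sync_bw/LINK_BW:.0f} links")
    print(f"asynchronous: {async_bw/1e12:.2f} Tbit/s -> {async_bw/LINK_BW:.0f} links (mean)")

    # The asynchronous number is only a mean: occupancy fluctuations must be
    # absorbed by derandomizer buffers and truncation limits, as noted above.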
Optical links: the interface between detector and counting house
• Data transport from the front-ends to the DAQ interface
  – Transmitter must be radiation hard and have SEU protection
  – Link protocol must be compatible with FPGA serial links, for use on DAQ interface modules
  – All front-ends must use the same link (as now)
  – Links needed if local zero suppression in the front-ends:
    • 2 x 100 kB x 30 MHz / 2.56 Gbit/s = 20k links (with current tracker information; 10-15k with a binary tracker)
    • Assumes effective zero suppression (a factor 2 worse than global zero suppression and compression)
    • Strong dependency on occupancy and tracker data type
  – Links needed if no zero suppression in the front-ends:
    • 40 MHz x (1M binary + 100k byte + 10k double byte) / 2.56 Gbit/s = 30k links (binary tracker)
    • 40 MHz x (1M byte + 100k byte + 10k double byte) / 2.56 Gbit/s = 140k links (digital tracker)
  – Cost of a current GOL link: 200-300 CHF, including serializer, VCSEL, fiber, fiber-ribbon receiver and de-serializer
• We also need optical links for TTC and ECS:
  – It would be very nice to use the same basic link: this would allow a combined link instead of dedicated separate links, which can result in significant system simplifications
  – The GBT link project is currently working on defining such a flexible solution
• See talk by Sandro (the link-count arithmetic is reproduced below)

[Figure: separate versus combined detector links. Separate: the hybrid/detector carries a TTC driver, an ECS interface, local trigger extraction and a DAQ interface, each with its own link to the global TTC, ECS, trigger processor and DAQ. Combined: a single link interface chip handles TTC and ECS distribution, trigger data and readout over fully combined detector links.]
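The link-count estimates on this slide can be reproduced directly; a minimal sketch using only the slide's own figures:

    # Reproduces the optical-link counts quoted above.
    LINK_BW = 2.56e9   # bits/s per link

    # Local zero suppression: 2 x 100 kB per crossing at 30 MHz (the factor 2
    # is FE zero suppression being less effective than global suppression).
    zs = 2 * 100e3 * 8 * 30e6
    print(f"zero-suppressed: {zs / LINK_BW:,.0f} links")          # 18,750 -> "20k"

    # No zero suppression, at 40 MHz:
    binary = 40e6 * (1e6 + 100e3 * 8 + 10e3 * 16)       # 1M binary channels
    digital = 40e6 * (1e6 * 8 + 100e3 * 8 + 10e3 * 16)  # 1M byte channels
    print(f"binary tracker : {binary / LINK_BW:,.0f} links")      # 30,625 -> "30k"
    print(f"digital tracker: {digital / LINK_BW:,.0f} links")     # 140,000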
Power supply
• LV power supplies and their distribution to the front-ends are an often overlooked problem
  – Everybody underestimated the cooling problems
  – Radiation tolerance
  – Grounding, EMC and noise coupling
  – A significant fraction of the material in the detectors and front-ends is related to power distribution and cooling
• This will get even more problematic with deep sub-micron technologies, which work at low voltages but require similar currents to the old generation
• A replacement for the current radiation-tolerant linear regulators from ST will be needed
  – Preferably a switching-mode type, to improve power efficiency and reduce the cooling problems (see the sketch below)
• A replacement for the Wiener (and CAEN) radiation-tolerant switching-mode power supplies will be needed
• HV considered less critical
• Some activity is slowly starting up on this across the experiments to prepare the upgrade programs
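A minimal sketch of why linear regulation hurts at deep sub-micron supply voltages; the voltages, current and switching efficiency below are illustrative assumptions, not LHCb figures:

    # Linear vs. switching regulation for a low-voltage, high-current front-end.
    # All numbers are illustrative assumptions.
    V_IN, V_OUT, I_LOAD = 2.5, 1.2, 10.0    # input V, output V, load current A

    p_load = V_OUT * I_LOAD                 # power actually used by the chips
    eff_linear = V_OUT / V_IN               # linear regulator passes full current
    eff_switch = 0.85                       # assumed switching-converter efficiency

    print(f"linear   : {eff_linear:.0%} efficient, "
          f"{V_IN * I_LOAD - p_load:.1f} W of heat per regulator")
    print(f"switching: {eff_switch:.0%} efficient, "
          f"{p_load / eff_switch - p_load:.1f} W of heat")
    # At a 1.2 V output the linear drop wastes more power than the load uses,
    # which is exactly the cooling and material problem flagged above.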
DAQ interface
• Required DAQ network links for 40 MHz readout:
  ~100 kB x 30 MHz x 8 / 10 Gbit/s x 1.2 = ~3000
• DAQ interface: TELLX
  – 4 x 10G Ethernet ports -> 3000/4 = ~1000 modules
    • Or 40 Gbit/s DAQ links
    • Or 8 x 10G ports per module
  – 48 GBT inputs @ 2-10 Gbit/s (see talk by Sandro)
  – Performs data reception, verification, buffering, compression and final formatting
  – Uses high-end FPGAs with integrated serial-link interfaces
    • Both the GBT link and the Ethernet link interface can be put in future FPGAs
    • May not be a cheap board: 5-10 kCHF per module, 5-10 MCHF in total
  – Must use modern commercial components, so the final design of such a module should only be started 2-3 years before the upgraded experiment is to be ready
    • Requires a combined hardware, firmware and software design TEAM
  – See talk by Guido (the link arithmetic is reproduced below)
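A minimal sketch reproducing the DAQ link estimate above, using only the slide's own figures:

    # Required 10 Gbit/s DAQ network links for full 40 MHz readout.
    EVENT_SIZE_B = 100e3   # ~100 kB per event after zero suppression
    EVENT_RATE = 30e6      # ~30 MHz of crossings with collisions
    LINK_BW = 10e9         # 10 Gbit/s Ethernet
    OVERHEAD = 1.2         # protocol/headroom factor from the slide

    links = EVENT_SIZE_B * 8 * EVENT_RATE / LINK_BW * OVERHEAD
    print(f"DAQ links: ~{links:,.0f}")                       # 2,880 -> "~3000"
    print(f"TELLX modules (4 ports each): {links / 4:,.0f}")
    # 2880/4 = 720; the slide rounds up to ~1000 modules, presumably keeping
    # margin for uneven detector partitioning.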
Radiation-hard microelectronics
• LHCb has profited significantly from the 0.25 um CMOS technology
  – Radiation qualification already done
  – Well-defined rules for the use of enclosed layout for radiation tolerance
  – Radiation-hard library
  – IP blocks: memory, LVDS, DACs, etc.
  – MPW service and access to the foundry
• A new radiation-hard technology to replace 0.25 um CMOS is to be identified
  (production facilities for this technology are now starting to be phased out)
  – The time schedule of the upgrade is important for identifying the most appropriate technology: 2010 or 2015 makes a major difference
    • Enough time to fully radiation-qualify
    • Enough time to make delicate dedicated front-end chips
    • Library and design kits
    • Common IP blocks: bandgap reference, DACs, memory, PLLs, ADC, voltage regulator, ?
    • Technology access and MPW service
  – Upgrade programs across the 4 LHC experiments will determine the most appropriate common technology
• See talk by Sandro
Cost and manpower estimates (electronics for 40 MHz readout)
• R&D for typical LHCb front-ends
  – Front-end chip: design, test and qualification: 8 MY, 1M CHF
    • 2 small-scale test circuits, 1 full chip, 0.13 um CMOS or alike
  – Front-end module: design and test: 2 MY, 100k CHF
  – Mini system test with TTC, ECS, readout: 2 MY, 100k CHF
  – Total: 5 x (12 MY, 1.2M CHF) = 60 MY + 6M CHF
• Transition to electronics production: 20 MY + 2M CHF
• Production: FEs, links, DAQ interface: ~20 MY + 20M CHF
• Electronics total: ~100 MY + ~30M CHF (optimistic? cross-checked below)
  – Under the assumption that common building blocks are available: optical links, power, radiation-hard technology
• PLUS:
  – New detectors
    • It is my feeling that most will in practice be made new if all front-ends are exchanged
  – DAQ
  – Power supplies, cabling, cooling and other infrastructure
  – New software for the software trigger system
  – Installation, debugging and commissioning
  – Etc.
• New experiment?
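The manpower and cost totals above can be cross-checked mechanically; a minimal sketch using only the slide's own figures (MY = person-years, costs in MCHF):

    # Cross-check of the electronics totals quoted above.
    per_fe_my = 8 + 2 + 2            # chip + module + mini system test = 12 MY
    per_fe_mchf = 1.0 + 0.1 + 0.1    # = 1.2 MCHF
    n_fe = 5                         # five typical front-end developments

    rnd_my, rnd_mchf = n_fe * per_fe_my, n_fe * per_fe_mchf   # 60 MY, 6 MCHF
    total_my = rnd_my + 20 + 20      # + transition + production manpower
    total_mchf = rnd_mchf + 2 + 20   # + transition + production cost
    print(f"R&D  : {rnd_my} MY, {rnd_mchf:.0f} MCHF")
    print(f"total: ~{total_my} MY, ~{total_mchf:.0f} MCHF")   # 100 MY, 28 ~ "30"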
The way ahead for electronics for a possible major LHCb upgrade
• Define the new front-end (and DAQ) architecture
  – Generic architecture simulations can start, to determine the pros and cons of different schemes
  – Cannot be done seriously before a clear goal of the upgrade is defined
• Use common building blocks whenever possible (as currently done in LHCb)
  – Optical links (GBT or alike, for TTC, ECS and readout)
  – Radiation-hard microelectronics technology with IP blocks
  – Radiation-hard power supply system and the related cooling problems
  – DAQ interface module (based on modern commercial chips)
  – Common developments across the LHC experiments are vital (LHCb is too small to do this on its own)
• Specific front-ends
  – Local R&D can start for basic front-ends of interesting detector options
  – Final chip designs cannot start before the detectors, front-end architecture, common optical link, and an appropriate radiation-hard technology are identified