1
Electronics / Trigger / DAQ considerations
  • Gregory Dubois-Felsmann, SLAC
    SuperB Workshop, 16-18 March 2006

2
Time structure is the key
  • Entire electronics / trigger / DAQ design depends
    on:
  • Interval between crossings: continuous or in
    trains?
  • Interval between luminosity-driven interactions
  • Probability of overlap
  • In the same crossing
  • Within the detector response time
  • Choices
  • Electronics
  • Response times
  • Known-T0 shaping/filtering vs. peak-finding
  • Number of channels depends on detector technology
    choices
  • Especially the possibility of an all-silicon
    tracking system
  • Trigger
  • Beam-crossing-driven vs. data-driven
  • DAQ
  • Pipeline design depends on minimum possible
    interval between triggers, probability of overlap
    between readout frames

3
0.1-1 MHz design approaches nirvana
  • Electronics
  • Easy to achieve no significant overlap between
    events from different crossings
  • Little pressure to go to very fast detector
    response times
  • Waveform sampling doesn't need to be any faster
    than for BaBar
  • Except possibly in calorimeter endcaps
  • Precise knowledge of T0 simplifies
    shaping/filtering, improves noise rejection
  • Trigger
  • Trigger decision needs to be evaluated only once
    per crossing
  • 0.1-1 MHz rate (cf. BaBar's 4/8 MHz)
  • DAQ
  • Nonoverlapping readout frames: straightforward
    pipeline
  • Maximum instantaneous trigger rate is limited -
    queuing problems reduced

4
Almost nirvana
  • Simultaneous interactions are main remaining
    problem
  • Bhabhas: 50-5% probability of coincidence (see the
    worked example below)
  • Single-particle backgrounds from QED, 2-photon
    processes not yet evaluated
  • Possibly troublesome background for recoil-based
    analyses
  • Needs some very simple MC tests to evaluate
  • Nothing can be done about this at the E/T/D level;
    it's a problem for reconstruction
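
A minimal back-of-the-envelope sketch (Python) of the coincidence number above,
assuming the O(50 kHz) detectable Bhabha rate quoted later in this talk and the
0.1-1 MHz crossing rate from the previous slide; the exact rates and treatment
are illustrative, not a design calculation.

    import math

    bhabha_rate = 50e3                    # Hz, O(50 kHz) detectable Bhabhas
    for crossing_rate in (0.1e6, 1.0e6):  # Hz, the 0.1-1 MHz design range
        mu = bhabha_rate / crossing_rate         # mean Bhabhas per crossing
        p_overlap = 1.0 - math.exp(-mu)          # P(>= 1 Bhabha in the crossing)
        print(f"{crossing_rate/1e6:.1f} MHz crossings: mean {mu:.2f} "
              f"Bhabhas/crossing, P(coincidence) ~ {p_overlap:.0%}")

The mean number per crossing reproduces the 50%-5% range quoted above; the
Poisson-corrected probability of at least one Bhabha is slightly lower at the
0.1 MHz end.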

5
Yesterday's developments
  • Latest Raimondi and Seeman parameter sets
    envision essentially continuous collisions at
    500 MHz
  • Consequences
  • Luminosity per crossing goes way down vs.
    low-rate models; very little chance of a problem
    with simultaneous interactions
  • More of a problem with overlapping (but not
    simultaneous) interactions
  • Continuous collisions are not too much worse; the
    overlap is mostly driven by the short-interval tail
    of the Poisson distribution of event times (see the
    sketch below)
  • Bunch trains are the worst-case scenario; overlap
    could approach 100%
  • More or less impossible, and essentially
    pointless, to make a hardware trigger decision on
    every crossing. The trigger must be data-driven,
    much like present B-Factory triggers
  • Beam currents are back up, so beam backgrounds
    play a larger role again
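
A minimal sketch of the short-interval-tail point, assuming Poisson-distributed
interaction times. The event rate and integration window below are placeholder
values chosen only to make the arithmetic concrete; they are not numbers from
this talk.

    import math

    event_rate = 100e3   # Hz, assumed total rate of triggerable interactions
    window = 1e-6        # s, assumed detector response / integration window

    # For a Poisson process the gap to the next event is exponentially
    # distributed, so the chance that another interaction falls inside the
    # window around a given event is:
    p_overlap = 1.0 - math.exp(-event_rate * window)
    print(f"P(second interaction within {window*1e6:.1f} us) ~ {p_overlap:.1%}")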

6
Quick thoughts on the new model
  • Electronics
  • Beam backgrounds are back up; detector response
    times must be reduced, especially in the calorimeter
  • Lack of an a-priori T0 requires peak-finding
  • Either with conventional electronics or waveform
    sampling
  • The nature and severity of beam backgrounds need to
    be known better in order to make this decision
  • Waveform sampling rates may need to be
    considerably higher
  • Compare BaBar EMC 4 MHz sampling
  • Continuous collisions require continuous sampling
    - potentially very high raw data volumes in the
    pipeline (see the estimate below)
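
A rough, purely illustrative estimate of the raw data volume a continuously
sampled pipeline would have to carry; the channel count, sampling rate, sample
width, and pipeline depth below are assumptions, not design numbers from this
talk.

    channels = 10_000          # assumed calorimeter channel count
    sample_rate = 20e6         # samples/s per channel (vs. BaBar EMC's 4 MHz)
    bits_per_sample = 12       # assumed ADC width
    pipeline_depth_s = 10e-6   # assumed history to hold during L1 latency

    bandwidth = channels * sample_rate * bits_per_sample / 8   # bytes/s
    pipeline_size = bandwidth * pipeline_depth_s               # bytes
    print(f"raw bandwidth into the pipeline ~ {bandwidth/1e9:.0f} GB/s")
    print(f"pipeline storage ~ {pipeline_size/1e6:.0f} MB")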

7
Quick thoughts on the new model - II
  • Trigger
  • Return to the B-Factory model: make trigger
    decisions at a speed set by the scale of the T0
    resolution (latency jitter) achieved in the
    hardware trigger
  • 4/8 MHz for BaBar
  • Does it need to be faster?
  • Main advantage of going faster: allows narrowing
    the readout window (see the sketch below)
  • Rejects noise hits before they start getting
    transported through DAQ and reconstructed
  • Ultimately limited by physics of detector systems
    (e.g., drift time)
  • My guess: we probably won't want to go more than
    2x faster at most
  • Expect very high rates, which will affect the DAQ
    (next slide)
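
A small sketch of the readout-window argument above: the window has to cover
the physics-limited detector memory (e.g. drift time) plus the T0 jitter set by
the trigger clock, and the noise hits read out scale with that window. All of
the rates and channel counts below are assumed, illustrative values.

    drift_time = 1.0e-6    # s, assumed physics-limited detector memory
    noise_rate = 50e3      # Hz of background hits per channel (assumed)
    channels = 7_000       # assumed tracking channel count

    for trig_clock in (4e6, 8e6, 16e6):    # Hz; BaBar ran at 4/8 MHz
        jitter = 1.0 / trig_clock          # worst-case T0 uncertainty
        window = drift_time + jitter       # readout window per trigger
        noise_hits = noise_rate * window * channels
        print(f"{trig_clock/1e6:>4.0f} MHz clock: window {window*1e9:4.0f} ns, "
              f"~{noise_hits:.0f} noise hits per event")

Once the window is dominated by the drift time, a faster trigger clock buys
little, which is the point behind "probably won't want to go more than 2x
faster".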

8
Quick thoughts on the new model - III
  • DAQ
  • Need a much deeper pipeline than used in BaBar (4
    buffers) to deal with high rate
  • Data movement tends to dominate the
    cost/performance of front-end DAQ; need a design
    that can construct overlapping readout frames by
    indirection (see the sketch below)
  • Instead of requiring multiple copies of event
    data in the pipeline
  • Basically true of BTeV, already done in a
    rudimentary way in one part of the BaBar DAQ
  • Crucial to avoid any fixed per-trigger deadtime
  • BaBar has 2.7 µs - intolerable at 100 kHz (27%
    deadtime), a major loss even at 10 kHz (2.7%)
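
A minimal sketch (not the BaBar or BTeV implementation) of what "overlapping
readout frames by indirection" means: samples are stored once in a ring buffer,
and each trigger records only the index range it wants, so frames that overlap
in time share the same stored data instead of duplicating it. Class and
variable names are hypothetical.

    class FrontEndPipeline:
        def __init__(self, depth):
            self.depth = depth
            self.samples = [None] * depth   # ring buffer: one copy of each sample
            self.newest = -1                # absolute index of the newest sample

        def store(self, sample):
            self.newest += 1
            self.samples[self.newest % self.depth] = sample

        def frame(self, start, end):
            # A readout frame is just an index range into the ring buffer;
            # nothing is copied until the data are shipped off the front end.
            assert end <= self.newest and end - start < self.depth
            return [self.samples[i % self.depth] for i in range(start, end + 1)]

    pipe = FrontEndPipeline(depth=16)
    for t in range(12):
        pipe.store(("adc", t))
    frame_a = pipe.frame(2, 9)
    frame_b = pipe.frame(6, 11)   # overlaps frame_a; no duplicate storage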

9
General considerations - triggering
  • Taking as long-established the idea that we must
    preserve the open trigger model of BaBar and
    Belle
  • Too difficult to narrowly identify specific
    B-physics modes at trigger level
  • Recoil analyses are poorly matched to narrow
    triggering

10
General considerations - Level 1 rate
  • In any of these models
  • Substantive detectable Bhabha rate is O(50 kHz)
  • Rates from beam background are not yet known
  • Expected to be much smaller in LC-type designs
  • Fundamental choice
  • Generate a Level 1 (hardware) trigger on
    everything that looks like a multiple-particle
    interaction coming from the beam spot
  • 50-100 kHz rate
  • LHC-like electronics and front-end DAQ: high cost
  • Attempt to veto Bhabhas at Level 1
  • Must not veto interesting events with overlapping
    Bhabhas
  • BaBar experience with vetoing Bhabhas in Level 3
    suggests that fairly simple algorithms can work
    at the 50-70% level, but they do need to be global,
    which probably increases latency and thus pipeline
    length
  • How good does the veto need to be to be worth
    doing? (Some rough arithmetic follows.)
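
Rough arithmetic on what a Level-1 Bhabha veto buys, using the O(50 kHz) Bhabha
rate from this slide; the non-Bhabha Level-1 rate is an assumed placeholder,
and the 50-70% efficiencies echo the BaBar Level-3 experience quoted above.

    bhabha_rate = 50e3      # Hz, "O(50 kHz)" from this slide
    other_l1_rate = 10e3    # Hz, assumed non-Bhabha Level-1 rate (placeholder)

    for veto_eff in (0.0, 0.5, 0.7, 0.9):
        l1_rate = other_l1_rate + (1.0 - veto_eff) * bhabha_rate
        print(f"veto efficiency {veto_eff:.0%}: Level-1 rate ~ {l1_rate/1e3:.0f} kHz")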

11
General considerations - downstream T/DAQ
  • Loosely speaking, this is a solved problem
  • Demonstrated several years ago that
  • Commodity networking hardware can handle event
    building
  • Commodity computing can handle software
    triggering and full event reconstruction
  • just by scaling from BaBar
  • Some changes in how technology evolved
  • Failure of CPU manufacturers to stay on the expected
    Moore's Law curve for single-core clock speeds
  • Still stuck below 4 GHz
  • Parallelism of anticipated farms will have to be
    higher than expected, by up to 5x (see the sketch
    below)
  • Multiple-core CPUs will help keep the number of
    boxes down, but still have to run many streams of
    processing at once
  • Some work on scaling will be needed
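
Illustrative farm-scaling arithmetic behind the "up to 5x" figure: if
single-core clock speed stalled roughly a factor of five below what the
original BaBar-based extrapolation assumed, the shortfall has to be made up in
parallelism. The baseline stream count and cores-per-box values are
assumptions, not numbers from this talk.

    expected_clock_ghz = 15.0   # assumed Moore's-Law extrapolation
    actual_clock_ghz = 3.0      # "still stuck below 4 GHz"
    baseline_streams = 200      # assumed stream count from the original scaling
    cores_per_box = 8           # multi-core CPUs keep the box count down

    shortfall = expected_clock_ghz / actual_clock_ghz        # ~5x
    streams = round(baseline_streams * shortfall)
    print(f"~{streams} concurrent streams, "
          f"~{streams // cores_per_box} boxes at {cores_per_box} cores/box")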

12
General considerations - computing
  • Not much to say about post-reconstruction
    computing
  • Analysis on B-Factory-type data at this scale is
    a hard problem
  • Lack of distinctive trigger signals makes this
    harder per unit data than at Tevatron/LHC
  • Skims often have large selection fractions
  • Moore's Law does not save you: random-access
    performance is not increasing as fast as other
    indicators of computing technology
  • Electronic-memory-based storage (RAM or flash)
    provides a possible answer
  • Being investigated at SLAC (PetaCache): 1 TB
    prototype running and being studied; 10 TB
    prototype (large enough to use for real BaBar
    analyses) being designed, proposal prepared

13
Desirable near-term actions
  • Collect channel counts and readout requirements
  • Requires detector technology choices
  • Timing requirements, A-to-D bit depth, need for
    waveforms
  • Determine electronics required for calorimeter
  • Collect relevant cost estimates from LHC, LHC-B,
    BTeV
  • Determine single-particle cross-sections for
    photons and charged particles
  • May be less important now if low-rate models have
    really been discarded
  • Study practicality of hardware Bhabha veto
  • Review existing deadtimeless overlapping-frame
    DAQ designs

14
Conclusions
  • Technology required fits within the LHC / BTeV
    envelope, so probably no show-stoppers
  • The LHC approach is expensive: need to evaluate the
    estimated cost of electronics required to operate
    at 100 kHz
  • Cost may make intensive R&D on an efficient Bhabha
    veto well-justified