1
ATLAS High Level Trigger
  • Introduction
  • System Scalability
  • Trigger Core Software Development
  • Trigger Selection Algorithms
  • Commissioning & Preparation for Cosmics / First
    Beam

2
Introduction
3
Introduction
  • ATLAS trigger comprises 3 levels
  • LVL1
  • Custom electronics: ASICs, FPGAs
  • Max. time 2.5 μs
  • Use of Calorimeter and Muon detector data
  • Reduce interaction rate to 75 kHz
  • LVL2
  • Software trigger based on a Linux PC farm (500
    dual-CPU PCs)
  • Mean processing time 10 ms
  • Uses selected data from all detectors (Regions of
    Interest indicated by LVL1)
  • Reduces LVL1 rate to 1 kHz
  • Event Filter
  • Software trigger based on a Linux PC farm (1600
    dual-CPU PCs)
  • Mean processing time 1 s
  • Full event & calibration data available
  • Reduces LVL2 rate to 100 Hz
  • Note: a large fraction of the HLT processor cost is
    deferred → initial running with reduced computing
    capacity
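The rates quoted above imply a large rejection factor at each level. A minimal back-of-the-envelope sketch in Python (the numbers are taken from the bullets above):

```python
# Rates quoted on this slide; the implied per-level rejection
# factors follow directly from their ratios.
rates_hz = [
    ("bunch crossings", 40e6),   # LHC interaction rate seen by LVL1
    ("LVL1 accepts",    75e3),   # custom electronics, ~2.5 us latency
    ("LVL2 accepts",    1e3),    # Linux farm, ~10 ms mean processing time
    ("EF accepts",      100.0),  # Linux farm, ~1 s mean processing time
]

for (name_in, r_in), (name_out, r_out) in zip(rates_hz, rates_hz[1:]):
    print(f"{name_in:>15} -> {name_out:<15}: rejection ~{r_in / r_out:,.0f}x")
```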

4
ATLAS Trigger & DAQ Architecture
Diagram: overview of the Trigger and DAQ architecture. LVL1
(specialised hardware: ASICs/FPGAs) uses Calo and Muon
trigger-chamber data to reduce the 40 MHz bunch-crossing rate
to a 75 kHz LVL1-accept rate within 2.5 μs; the detector
read-out, HLT and DataFlow systems handle the accepted events.
5
ATLAS Three Level Trigger Architecture
  • LVL1 decision made with coarse-granularity
    calorimeter data and muon trigger chamber data.
  • Buffering on detector
  • LVL2 uses Region of Interest data (ca. 2% of the
    event) with full granularity, combines information
    from all detectors and performs fast rejection.
  • Buffering in ROBs
  • EventFilter refines the selection, can perform
    event reconstruction at full granularity using
    latest alignment and calibration data.
  • Buffering in EB & EF

Diagram labels: LVL1 latency 2.5 μs; LVL2 ~10 ms; EF ~ seconds.
6
LVL1 - Muons & Calorimetry
Muon Trigger looking for coincidences in muon
trigger chambers
  • Calorimetry Trigger looking for e/γ/τ/jets
  • Various combinations of cluster sums and
    isolation criteria

7
ATLAS LVL1 Trigger
Diagram: block diagram of the ATLAS LVL1 trigger.
  • Muon trigger: O(1M) RPC/TGC channels; the Muon Barrel and
    Muon End-cap Triggers send pT, η, φ information on up to
    2 μ candidates/sector (208 sectors in total) to the
    Muon-CTP Interface (MUCTPI), which passes multiplicities
    of μ for 6 pT thresholds to the CTP.
  • Calorimeter trigger: 7000 calorimeter trigger towers; the
    Pre-Processor (analogue → ET) feeds the Cluster Processor
    (e/γ, τ/h) with ET values (0.1×0.1) EM + HAD and the
    Jet / Energy-sum Processor with ET values (0.2×0.2)
    EM + HAD; multiplicities of e/γ, τ/h and jet for 8 pT
    thresholds each, flags for ΣET, ΣET(jets) and ETmiss over
    thresholds, and the multiplicity of forward jets go to
    the CTP.
  • The Central Trigger Processor (CTP) forms the LVL1
    decision.
  • Timing, Trigger, Control (TTC): LVL1 Accept, clock and
    trigger-type to front-end systems, RODs, etc.; RoI
    pointers are passed on to LVL2.
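As an illustration of how the CTP turns these multiplicities into a LVL1 decision, a minimal sketch in Python; the item names, thresholds and counts are invented for illustration and are not the real LVL1 menu:

```python
# Toy CTP decision: each trigger item requires a minimum multiplicity
# of candidates above a given pT threshold; the LVL1 Accept is the OR
# of all items.  Item names, thresholds and counts are illustrative only.
multiplicities = {"EM25": 1, "MU20": 0, "J100": 2}  # counts from calo/muon triggers

items = {
    "1EM25": ("EM25", 1),   # >=1 e/gamma candidate above 25 GeV
    "1MU20": ("MU20", 1),   # >=1 muon candidate above 20 GeV
    "2J100": ("J100", 2),   # >=2 jets above 100 GeV
}

fired = {name: multiplicities.get(threshold, 0) >= n_min
         for name, (threshold, n_min) in items.items()}
lvl1_accept = any(fired.values())

print(fired)                       # {'1EM25': True, '1MU20': False, '2J100': True}
print("LVL1 Accept:", lvl1_accept)
```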
8
RoI Mechanism
  • LVL1 triggers on high pT objects
  • Calorimeter cells and muon chambers to find
    e/γ/τ/jet/μ candidates above thresholds
  • LVL2 uses Regions of Interest as identified by
    Level-1
  • Local data reconstruction, analysis, and
    sub-detector matching of RoI data
  • The total amount of RoI data is minimal
  • ~2% of the full event data, but it has to be
    accessed at 75 kHz

H → 2e 2μ
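To see why reading out only RoI data matters, here is a rough bandwidth estimate in Python; the ~1.5 MB average event size is an assumption for illustration and is not quoted on the slide:

```python
# RoI-based readout versus full readout at the 75 kHz LVL1 accept rate.
# The ~1.5 MB average event size is an assumed value for illustration.
event_size_bytes = 1.5e6
roi_fraction     = 0.02    # ~2% of the event data, as quoted above
lvl1_rate_hz     = 75e3

roi_bw  = event_size_bytes * roi_fraction * lvl1_rate_hz
full_bw = event_size_bytes * lvl1_rate_hz

print(f"RoI-based readout : {roi_bw / 1e9:6.1f} GB/s")
print(f"Full readout      : {full_bw / 1e9:6.1f} GB/s")
```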
9
Physics Selection Strategy
  • ATLAS has an inclusive trigger strategy
  • LVL1 Trigger on individual signatures
  • EM cluster
  • Muon track
  • Jets
  • Total Energy
  • Missing Energy
  • LVL2 confirms & refines the LVL1 signature
  • requires seeding of LVL2 with the LVL1 result, i.e.
    RoIs
  • Event Filter confirms & refines the LVL2 signature
    with more complete event reconstruction
  • Possibility of seeding of Event Filter with LVL2
    result
  • tags accepted events according to physics
    selection
  • Reject events early
  • Save resources
  • minimize data transfer
  • minimize required CPU power
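As an illustration of the inclusive, signature-based strategy described above, a minimal sketch in Python; the signature names and thresholds are invented for illustration and are not the actual ATLAS trigger menu:

```python
# Toy inclusive menu: the event is kept if ANY individual signature fires,
# and accepted events are tagged with the signatures that fired.
# Signature names and thresholds are illustrative only.
MENU = {"EM25i": 25.0, "MU20": 20.0, "J200": 200.0, "XE100": 100.0}  # GeV

def hlt_decision(candidates):
    """candidates: dict mapping signature name -> measured pT/ET in GeV."""
    fired = [sig for sig, threshold in MENU.items()
             if candidates.get(sig, 0.0) >= threshold]
    return bool(fired), fired          # (accept?, physics-selection tags)

print(hlt_decision({"MU20": 34.0}))    # (True, ['MU20'])
print(hlt_decision({"EM25i": 18.0}))   # (False, []) -> rejected early
```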

10
System Scalability
11
ATLAS TDAQ Physical Layout
Diagram: physical layout of the TDAQ system, showing the
central switches and where events are built.
12
System Scalability
  • Extended testing programme for system scalability
  • Dedicated testbed for dataflow performance &
    networking issues
  • → Data Acquisition group
  • Large clusters worldwide for node scalability
    testing
  • Machine run control
  • Start/end run cycling
  • Software distribution
  • Large scale configuration
  • → Data Acquisition & Trigger groups
  • Trigger focus on Event Filter
  • Recent work
  • Use of LXSHARE cluster at CERN (500 nodes) and
    WESTGRID cluster in Canada (840 nodes)
  • Plans
  • Use of 50-700 nodes on LXSHARE this summer
  • http://atlas-tdaq-large-scale-tests.web.cern.ch

13
Summary of Recent Tests
  • Conclusions
  • Primary goal was system porting and debugging
  • Important bug in CORBA lib was found and fixed
  • Many other benefits obtained
  • Experience in porting large-scale DAQ system
  • Many particular indications for weak points and
    possible improvements
  • General impression of run control transition
    times
  • LST @ CERN
  • June 6 - July 19
  • Many things being tested / investigated /
    measured
  • We are ready following experience from WestGrid

14
System Scalability
  • Many hardware issues need attention
  • How to organize O(2000) PCs
  • racks, space, weight, heat & cooling, cabling
  • data I/O & networking
  • operation: booting, s/w installation,
    operational monitoring
  • dependency on ever-evolving PC CPU
    architectures and compilers, applicability of
    Moore's Law
  • Remote farms
  • Possible Involvement
  • Longer term possibilities of LSTs at SLAC?
  • Software development & testing work in the Event
    Filter, to include requirements from overall ATLAS
    monitoring and calibration
  • Work on the specification, development,
    installation, maintenance & running of the EF

15
Trigger Core Software Development
16
Trigger Core Software Development
  • Provides a coherent software framework for LVL2
    and EF
  • Coherent data access methods
  • Re-use of some offline components where
    appropriate
  • Development platform common across trigger &
    offline
  • Facilitates online/offline comparisons & ease of
    development
  • Detailed collaboration with core offline
    development group as well as detector software
    development
  • Benefit from detailed expertise in each detector
    group
  • E.g. in last year's testbeam, detector
    monitoring software developed for use offline
    was also used online in the EF
  • Considerable exchange of ideas & development
  • Performance & efficiency improvements done for
    the trigger now benefit offline; some new
    offline functionality benefits the trigger
  • More specific dedicated development for LVL2

17
HLT Event Selection Software
  • HLT Selection Software
  • Framework: ATHENA/GAUDI
  • Reuse some offline components
  • Common to Level-2 and EF

Diagram: HLT Data Flow software; offline algorithms reused in
the EF.
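The common framework is built around the ATHENA/GAUDI component model, in which the same algorithm code can be scheduled at LVL2, in the EF, or offline. A minimal Python sketch of that pattern (illustrative only; the real ATHENA base classes are C++ and their exact interfaces are not shown on this slide):

```python
# Sketch of the Gaudi-style algorithm pattern shared across trigger and
# offline: the framework drives initialize/execute/finalize hooks and a
# common event store mediates data access.  Not the real ATHENA/GAUDI API.
class Algorithm:
    def initialize(self):              # one-time setup (geometry, calibrations)
        pass

    def execute(self, event_store):    # called per event (or per RoI at LVL2)
        raise NotImplementedError

    def finalize(self):                # statistics, summaries
        pass


class ToyClusterFinder(Algorithm):
    def execute(self, event_store):
        cells = event_store.get("CaloCells", [])
        # trivial "clustering": keep cells above a 1 GeV noise cut
        event_store["CaloClusters"] = [e for e in cells if e > 1.0]


# Toy steering loop: identical algorithm code usable in LVL2, EF, or offline.
store = {"CaloCells": [0.2, 3.5, 1.7]}
alg = ToyClusterFinder()
alg.initialize()
alg.execute(store)
alg.finalize()
print(store["CaloClusters"])   # [3.5, 1.7]
```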
18
LVL2 Development Environment
  • HLT software development and testing in offline
    environment
  • Final certification procedure in Data Flow
    test-beds

Offline support for Level-2 developers
  • Multithreaded offline application AthenaMT
  • Emulates the complete L2PU environment
  • No need to set up complex Data Flow systems
  • As simple to run as a normal offline application
    athenaMT <number of threads> <job-configuration>
  • Coding guidelines for Lvl2 developers
Development and Data Flow setup for Level-2
19
Trigger Core Software Development
  • Possible Involvement
  • Work & responsibility in specific s/w packages in
    the core s/w
  • Trigger configuration and algorithm control
    system
  • Trigger monitoring framework and strategy
  • Offline/online Software integration

20
Trigger Selection Algorithms
21
Trigger Selection Algorithms
  • On-line event selection in the HLT based on
    algorithmic software tools running in LVL2 and EF
    farms, sequenced by HLT steering
  • LVL2: specialized algorithms; EF: algorithms
    adapted from offline
  • Important deployment in HLT test-beds to assess
    compliance with realistic on-line environment
  • Building on expertise and development inside
    detector communities
  • Calorimeters, Inner Detector, Muon Spectrometer
  • Studies of efficiency, rates, rejection factors,
    physics coverage organized around five main lines
    (vertical slices) coherently mapped to the
    Physics Combined Performance groups (see physics
    session)
  • Electrons and photons
  • Fundamental signatures for both precision
    measurements and discovery signals
  • Muons
  • Low- and high-pT objects, also strategic for the
    B-physics programme
  • Jets / Taus / ETmiss
  • Model testing, new physics
  • b-tagging
  • Optimize physics coverage, add flexibility and
    redundancy to HLT selection starting from LVL2
  • B-physics
  • Rich program of work with new strategies
    dependent on luminosity
  • Most recent talks on performance studies
  • http://agenda.cern.ch/fullAgenda.php?ida=a052747

22
Trigger Menus and Strategy
  • Extracting tiny signals out of huge backgrounds
    requires the HLT selection strategy to be robust,
    redundant and flexible
  • Selections are mostly inclusive, with
    as-low-as-possible pT thresholds for
    fundamental objects
  • The usage of software tools at both HLT levels
    allows detailed studies of the boundary between
    LVL2 and EF
  • Different paths leading to approximately the same
    efficiency (electrons in the figure)
  • Example of flexibility and different selection
    sequences
  • Choice will depend on background conditions,
    detector knowledge, luminosity,
  • The building of complete Trigger Menus evolves
    and complements the work done in the slices
  • Moving from single objects to complex topological
    signatures
  • Include issues of pre-scaled triggers, monitor
    triggers, etc
  • Optimize to environmental conditions
  • Commissioning the HLT selection will be an
    important step towards physics data taking
  • Needs to be ready for cosmic period
  • Implies modification to algorithms, new sequences

23
Trigger Selection
  • Possible Involvement
  • Work in trigger algorithm development and
    selection performance evaluation
  • Jet / tau / Etmiss area is in particular need of
    increased effort
  • Other areas would also benefit from new manpower
    and groups willing to take on new responsibility
  • Preparation/adaptation of sets of algorithms &
    selection procedures for use in cosmic running
    and in initial beam periods (single beams, first
    collisions, etc.)

24
Commissioning & Preparation for Cosmics / First Beam
25
Commissioning
  • Detailed planning for stepwise commissioning of
    the trigger system (LVL1 & HLT) is being prepared
  • Planning taking account of detector plans and
    triggering requirements for their commissioning
  • Planning in various phases with increasing levels
    of integration
  • Commissioning planning is broken into 4 broad
    phases
  • Subsystem standalone commissioning
  • Integrate subsystems into full detector
  • Cosmic rays, recording data, analyze/understand,
    distribute to remote sites
  • Single beam, first collisions, increasing rates
  • Phases will overlap
  • → TDAQ pre-series system

26
TDAQ Pre-series system
  • Fully functional, small scale, version of the
    complete HLT/DAQ system
  • Equivalent to a detector's module 0
  • Purpose and scope of the pre-series system
  • Pre commissioning phase
  • To validate the complete, integrated, HLT/DAQ
    functionality
  • To validate the infrastructure needed by HLT/DAQ
    at Point 1
  • Installed at Point 1 (USA15 and SDX1)
  • Commissioning phase
  • To validate a component (e.g. a ROS) or a
    deliverable (e.g. a Level-2 rack) prior to its
    installation and commissioning
  • TDAQ post-commissioning development system.
  • Validate new components (e.g. their functionality
    when integrated into a fully functional system).
  • Validate new software elements or software
    releases before moving them to the experiment.

27
Pre-Series
Diagram: pre-series racks installed at Point 1 (SDX1 and USA15)
  • Partial ONLINE rack (TDAQ rack): 4 HLT PCs
    (monitoring), 2 LE PCs (control), 2 Central File
    Servers
  • Partial EF rack (TDAQ rack): 12 HLT PCs
  • Partial Supervisor rack (TDAQ rack): 3 HE PCs
  • One ROS rack (TC rack, horizontal cooling): 12 ROS,
    48 ROBINs
  • One full L2 rack (TDAQ rack): 30 HLT PCs
  • RoIB rack (TC rack, horizontal cooling): 50% of the
    RoIB
  • One switch rack (TDAQ rack): 128-port GEth for
    L2 & EB
  • Partial EFIO rack (TDAQ rack): 10 HE PCs (6 SFI,
    2 SFO, 2 DFM)
  • ROS, L2, EFIO and EF racks: one Local File Server,
    one or more Local Switches
28
Commissioning
  • Phase 1 commissioning will be completely defined
    after the experience with the pre-series
  • Parallelize commissioning work as much as
    possible
  • Use data taken during detector commissioning to
    test data unpacking tools
  • Develop special algorithms to test component
    units
  • Extend offline s/w testing procedures
  • Provide infrastructure to collect systematic
    information from trigger selection studies
  • List of selection variables
  • Graphs of rate and efficiency variation
  • There is a strong coupling with the offline
    commissioning activities
  • Trigger commissioning extends well into
    data-taking
  • Need good coordination with physics groups
  • Treat the trigger as a single object to be
    commissioned (inc. LVL1)
  • Will need a clear strategy for the daily run
    meetings (data request)
  • It is clear that the Extra Triggers (monitoring,
    calibration, etc.) will be much larger than the
    foreseen 10% during the first months of
    data-taking

29
Commissioning
  • Possible involvement
  • → We would like to benefit from your experience
    in commissioning and running the BaBar experiment
    and elsewhere
  • Work in installing, developing and exploiting the
    pre-series system
  • Development of algorithms and procedures to
    rapidly check the trigger performance with real
    data and monitor the overall progress of HLT
    commissioning
  • Responsibility in the more general trigger
    commissioning activities and in preparing the
    ATLAS trigger for cosmic tests and first beams in
    LHC
  • There is a considerable lack of effort in this
    area, and there is room for major involvement and
    responsibility

30
Summary
  • Outlined several areas within the ATLAS HLT
    system where members of the SLAC team could
    contribute and take responsibility
  • Spread of areas ranging from more technical
    software design and implementation to much more
    physics oriented work
  • Many interesting challenges ahead to lead ATLAS
    into data-taking and first physics
  • TDAQ Workshop in Mainz, Germany 10-14 October
    2005
  • WELCOME !!!

31
Backup
32
ATLAS LVL1 Trigger
Diagram: ATLAS LVL1 trigger; LVL1 Accept rate 75 (100) kHz.
33
μ-RoI reconstruction at LVL2 using muFast
Muon Road
34
muFast Timing Measurements
Optimized code run on a Pentium III @ 2.3 GHz. Physics: single
muon, pT = 100 GeV. Cavern background: 2× high luminosity.
  • muFast latency is the CPU time taken by the
    algorithm without considering the data
    access/conversion time
  • the presence of cavern background does not
    increase the muFast processing time.
  • The total latency shows timings made on the same
    event sample before and after optimizing the MDT
    data access.
  • Optimized version
  • total data access time 800 μs
  • data access takes about the same CPU time as
    muFast

35
Stepwise HLT Selection
  • Selection takes place in steps
  • Rejection can happen at every step
  • Trigger Decision and Data Navigation are based on
    Trigger Elements
  • Algorithms use the result from previous steps
    (Seeding) using the Data Navigation and the
    Trigger Elements
  • The initial seeds for the LVL2 steps are the LVL1
    RoIs

Diagram: example step chain for an e50i + e50i selection. Each
LVL1 RoI provides an EM50 Trigger Element; the elecId step
refines it to e50, the isolation step refines it to e50i, and
the decision requires e50i + e50i, after which the event is
accepted.
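A minimal sketch of the stepwise, seeded selection logic shown above; the algorithm names, cuts and the toy event record are illustrative, and this is not the actual HLT steering code:

```python
# Stepwise selection: each step runs on the trigger elements produced by the
# previous step (seeding), and the chain rejects the event as soon as no
# trigger element survives a step.  Values below are illustrative only.
def run_chain(lvl1_rois, steps):
    trigger_elements = list(lvl1_rois)        # initial seeds are the LVL1 RoIs
    for step_name, passes in steps:
        trigger_elements = [te for te in trigger_elements if passes(te)]
        if not trigger_elements:              # early rejection saves CPU and I/O
            return False, f"rejected at {step_name}"
    return True, "event accepted"

# Toy e50i chain: confirm the EM cluster, identify an electron, apply isolation.
e50i_steps = [
    ("EM50",      lambda te: te["et_gev"] > 50.0),
    ("elecId",    lambda te: te["has_matching_track"]),
    ("isolation", lambda te: te["isolation_et_gev"] < 5.0),
]

lvl1_rois = [{"et_gev": 62.0, "has_matching_track": True, "isolation_et_gev": 2.0}]
print(run_chain(lvl1_rois, e50i_steps))       # (True, 'event accepted')
```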
36
The Different Commissioning Phases (1)
  • HLT standalone commissioning
  • Units of racks (a rack is considered the unit to
    be commissioned)
  • A rack delivered from installation has
  • power, cooling and network checked within and
    outside the rack
  • operating system installed
  • Commissioning starts with the installation of the
    DAQ and offline software
  • Check internal Dataflow (preloaded data)
  • Monitoring tools
  • Offline software
  • Offline software distribution procedures
  • Automatic testing procedures
  • Testing algorithms

37
The Different Commissioning Phases (2)
  • Integrate subsystems into the full detector.
  • These operations have a very strong coupling
    with the offline commissioning activities
  • First start with data unpacking algorithms
  • Monitoring infrastructure to check this step
  • Use any commissioning data taken by the detectors
    to debug this part of the system
  • Even if the data is corrupted, it might be very
    useful to test the robustness of the code
  • Current activities (or areas where we need to
    concentrate effort)
  • Extend the pool of data prep algorithms
  • Algorithms must be scrutinized and broken up into
    simpler testing units
  • Testing procedures for both offline selection
    software and the interface to DAQ software are
    being strengthened and run automatically in the
    nightlies
  • The goal is to arrive at a set of tests that
    almost guarantee that further test-bed (or
    pre-series, etc.) integration will succeed
  • Specify constraints and tests in the offline
    software before distribution
  • Software distribution

38
The Different Commissioning Phases (3)
  • The remaining phases correspond to commissioning
    while data is being taken and assume
  • Complete HLT Dataflow is working
  • The algorithms start selecting/rejecting events
  • The trigger work will focus more on demonstrating
    that an algorithm gives an Xx.Yy selection
    efficiency with some rejection rate
  • These activities are very important
  • Help to develop and tune the algorithms
  • Give us the building blocks to test the complete
    HLT chain
  • However, for commissioning, we also need to focus
    on some other aspects
  • Have a centralized place where the complete set
    of parameters that algorithms use (will be inside
    the configuration in the future) is listed
  • Size of the data request around the RoI
  • Set of selection cuts
  • For every selection variable we need the graph
    of variation in selection efficiency and
    rejection rate around the chosen optimal point
    (we are sure we will have to tune it with data)
  • Need to prepare a set of algorithms and methods
    that allow us to check the trigger performance
    with data
  • Particles with known mass (selected by triggering
    on only one of their decay products)
  • How many hours of data-taking do we need to know
    the selection efficiency within a 5% precision?
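A back-of-the-envelope answer to that last question follows from binomial statistics; in the sketch below only the 5% precision target comes from the slide, while the assumed efficiency and candidate rate are illustrative:

```python
# Events needed to measure a selection efficiency eff to a relative precision
# p follow from binomial statistics: sigma_eff = sqrt(eff*(1-eff)/N), so
# N >= (1 - eff) / (eff * p**2).  Efficiency and candidate rate are assumptions.
eff  = 0.90          # assumed selection efficiency of the trigger under study
p    = 0.05          # 5% relative precision target (from the slide)
rate = 0.01          # assumed rate (Hz) of usable tag-and-probe candidates

n_needed = (1.0 - eff) / (eff * p**2)
hours    = n_needed / rate / 3600.0
# A differential measurement in many (eta, pT) bins scales this up accordingly.

print(f"events needed: {n_needed:.0f}")
print(f"data-taking time at {rate} Hz: {hours:.1f} h")
```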