1
Detector and Trigger Plans - Overview
  • Accelerator
  • Plans
  • October shutdown
  • D0 Control Room shifts
  • Experiment technical support
  • Sub-detectors plans
  • Trigger
  • D0 operating efficiency
  • Commissioning and data collection
  • Data quality monitoring
  • Summary
  • Thanks for the input from many individuals!

2
Accelerator Operation
  • Tevatron peak luminosity has been around
    2x10^31 cm^-2 s^-1 over the past two months
  • Improvements in peak luminosity are expected soon
    due to the installation of cooling tanks in the
    antiproton accumulator during the June shutdown
  • only about 30% of antiprotons reach the final
    collisions for now
  • transverse beam sizes are larger than expected
  • A detailed plan for increasing Tevatron luminosity
    has been developed
  • study periods each week
  • Beams Division plans to split the weekly 2 days of
    studies into smaller intervals
  • periodic shutdowns

3
Accelerator Performance
  • There is slow improvement in the luminosity
    delivered per week
  • higher initial luminosity values
  • better operating efficiency
  • 70% of stores end gracefully
  • The Run IIa goal is 15 pb^-1 per week
  • The January 2002 plan was to deliver 300 pb^-1 to
    each of the experiments by the end of 2002
  • as of today the delivered luminosity is about half
    of what was planned (see the rough arithmetic
    after this list)
  • a revision of the plan is expected soon
  • How can D0 help?
  • utilize luminosity more efficiently
  • reduce the number of interruptions in accelerator
    operations
  • there is no such thing as a 10-minute access!
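
As a rough sanity check on these numbers, the short Python sketch below converts the ~2x10^31 cm^-2 s^-1 peak luminosity from the previous slide into an upper bound on the weekly integrated luminosity; the inputs are the peak value quoted above plus unit conversions, so this is illustrative arithmetic rather than an official projection.

```python
# Back-of-the-envelope check (illustrative, not official accelerator numbers):
# how much integrated luminosity a 2e31 cm^-2 s^-1 peak could deliver in a week.

PB_INV_IN_CM2 = 1.0e36            # 1 pb^-1 corresponds to 1e36 cm^-2

peak_lumi = 2.0e31                # cm^-2 s^-1, Tevatron peak (previous slide)
seconds_per_week = 7 * 24 * 3600  # 604800 s

# Upper bound: running at peak luminosity for the entire week.
max_per_week = peak_lumi * seconds_per_week / PB_INV_IN_CM2
print(f"upper bound: {max_per_week:.1f} pb^-1 per week")   # ~12.1 pb^-1

# Real stores decay and there is turnaround time between stores, so the
# delivered value is substantially lower, which is why the 15 pb^-1/week
# goal is out of reach at the current peak luminosity.
```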

4
October Shutdown
  • "It is a bad plan which admits of no
    modifications"
  • The original plan was a 6-week shutdown starting
    September 30th, 2002
  • The single major reason for the October shutdown:
  • Improvement in vacuum and other elements of the
    Recycler
  • Considerations
  • Last(!) prolonged shutdown for the Recycler
  • Are 6 weeks enough?
  • Far from perfect accelerator performance even
    without the Recycler
  • A prolonged interval is needed to get the
    accelerator back to pre-shutdown conditions
  • The experiments desire to collect decent data
    samples for the Winter 2003 conferences
  • Current plan
  • Not earlier than October 2002
  • Not less than 6 weeks

5
Accelerator Backgrounds
  • D0 benefits greatly from extensive shielding
    installed during Run IIa upgrade
  • Much lower backgrounds in the muon detectors
  • less than 0.1% occupancy
  • Reduced beam halo
  • D0 beam halo is stable and affects 1-2% of
    crossings
  • Beam-related backgrounds primarily affect the
    Forward Proton Detector
  • A task force (with CDF) has been formed
  • In late March CDF is believed to have experienced
    a high (1 Mrad/sec) dose-rate irradiation which
    caused 6 silicon ladders to fail
  • Abort of a partly de-bunched beam
  • Not confirmed by Booster studies
  • CDF is located close to the Tevatron abort
  • D0
  • Located favorably
  • D0 silicon High Voltage did not even trip
  • Typical integrated dose during store shot setup
    for D0 is 5-10 rads
  • Low in comparison with the silicon radiation
    hardness

6
D0 Control Room Shifts
  • Over the last 6 months we have achieved a
    reasonably stable shift sign-up
  • 5x10^3 shifts per 6 months are needed (see the
    arithmetic sketch after this list)
  • This year requires the peak number of shifters
  • Many systems are still in the commissioning stage
  • They require extensive training and active work
    during data taking
  • Introducing extra shifters as systems become
    available
  • L2
  • Global monitor (planned)
  • Starting to combine shifts for different
    sub-detectors
  • Forward Proton Detector (FPD) and Calorimeter
  • Based on the relative success of the current shift
    duty rules, only minor modifications are adopted
    for the July-December 2002 period
  • 8 shifts for each non-resident on the Masthead per
    6 months
  • 10 shifts for each resident (> 6 months at
    Fermilab) on the Masthead per 6 months
  • Thanks to all shifters who keep the experiment
    running: it is fun, experience, understanding of
    your detector performance, and a must to get our
    data sample
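
To put the shift quota in perspective, here is a minimal arithmetic sketch; the quotas and the ~5x10^3 total are from this slide, while the 3 eight-hour shifts per day over ~182 days are illustrative assumptions.

```python
# Shift-accounting sketch; quotas and the total are from this slide, the
# 3-shifts-per-day and 182-day figures are illustrative assumptions.
shifts_needed = 5000            # ~5x10^3 shifts per 6 months
quota_resident = 10             # shifts per resident per 6 months
quota_non_resident = 8          # shifts per non-resident per 6 months

# If everyone took only the minimum quota, roughly this many people
# would have to sign up:
print(shifts_needed // quota_resident)       # 500 residents, or
print(shifts_needed // quota_non_resident)   # 625 non-residents

# Equivalently, 5000 shifts over ~182 days at 3 shifts/day corresponds to
# about this many simultaneously staffed control-room positions:
print(round(shifts_needed / (182 * 3)))      # ~9
```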

7
D0 Experiment Technical Support Groups
  • Two teams of electrical and mechanical
    professionals are reliably supporting operations
    of the D0 experiment
  • Mechanical: Russ Rucinski's group
  • Electrical: Rick Hance's group
  • They perform an extensive amount of 24/7 duties
  • All cryo controls and monitoring
  • Power supplies and magnets
  • Heating and cooling
  • Emergency responses
  • Detector opening and closing
  • Can you move 5000 tons with 0.5mm precision?
  • We plan to continue the 24/7 presence of an Ops
    shifter for the duration of Run II
  • Lets the D0 control room crew concentrate on high
    quality data taking
  • Will provide support for
    commissioning/installation/operation jobs
  • Requests should be made in advance

8
To do for the D0 Detectors
  • All major detectors are in operation
  • Second order commissioning and improvements
  • Luminosity detector
  • Operates stably over many months
  • Still with Run I electronics
  • Improvements in calibration constants
  • On-line numbers for integrated luminosity
  • Is the CDF luminosity really 6% higher?
  • D0/CDF task force
  • Silicon detector
  • Reliability of the cathedral power supplies and
    cooling
  • Understanding of the grassy noise issue
  • High voltage leakage currents
  • Monitoring of the radiation dose

(Figure: silicon detector layout - 6 barrels, 12 F-disks, 4 H-disks)
9
To do for the Detectors
  • Fiber tracker and preshowers
  • Great progress since AFE boards became available
  • Resolve AFE board failures/problems
  • Understand pedestal variations to reduce hit
    occupancy
  • Monitor light output, VLPC, and AFE responses
  • Provide calibrated thresholds/signals for the CTT
    trigger
  • Integrate the forward preshower (and FPD)
  • Calorimeter
  • Fix noise on the Level 1 trigger signals
  • Stable pulser calibration
  • Operation with 1.5σ vs 2.5σ zero-suppression
    threshold
  • Reliable COMICS-based parameter download and
    verification
  • 25 BLS low voltage power supplies have to be
    upgraded

10
To do for the Detectors
  • Inter Cryostat Detector (ICD)
  • Replace bad PMTs (a few electronics channels)
  • Update ICD addressing and test using the LED
    pulser
  • MIP calibration with data for every individual
    tile
  • Solenoid
  • Long term stability monitoring
  • Corrections to field map
  • Geometry and track-fit based corrections
  • Muon system
  • Improve stability of readout (synchronization and
    other errors)
  • Level 2 trigger signals issues
  • Individual thresholds for central A layer
    counters
  • T0s and time-to-distance for PDTs (momentum
    resolution)

11
To do for the D0 Detectors
  • Forward Proton Detector (FPD)
  • Connect dipole spectrometer to AFE boards
  • Phase I (10 detectors/5 spectrometers) to be
    included in readout by October
  • FPD Trigger Manager commissioning will proceed in
    parallel with DFE commissioning
  • Commission luminosity monitor TDC boards to
    include trigger scintillator information

12
D0 Trigger
  • Trigger framework
  • The existing prescaler logic can report
    mis-measured luminosity for prescaled triggers
  • A plan for resolving this problem has been
    developed (see the bookkeeping sketch after this
    list)
  • Automated SCL inits
  • To reduce downtime (see below)
  • L1 trigger
  • Fiber tracker (CTT) trigger
  • Implementation is delayed due to a
    higher-than-specified trigger decision time
  • Different options for resolving this issue are
    being discussed by experts
  • Estimated delay is 6 weeks
  • Calorimeter trigger
  • Debug recently achieved eta coverage up to 2.4
  • Dead and noisy towers, calibration
  • Missing Et
  • Extend eta coverage
  • Muon trigger
  • Optimize new trigger terms based on scintillator
    and wire detectors
  • Re-establish data vs trigger simulator comparison
  • Attempt to shave 132ns from FPGA logic
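
A minimal sketch of prescaled-trigger exposure bookkeeping, relating to the trigger-framework item above (hypothetical numbers and layout, not the actual framework code): the exposure seen by a prescaled trigger has to be accumulated per luminosity block with the prescale actually in effect; applying a single prescale to the total delivered luminosity is one way the reported luminosity can come out wrong.

```python
# Hypothetical illustration of prescaled-trigger luminosity bookkeeping;
# the values and structure are made up for the example.

# (luminosity block, delivered luminosity in nb^-1, prescale for this trigger)
lumi_blocks = [
    (101, 50.0, 1),
    (102, 48.0, 5),   # prescale raised during the store
    (103, 47.0, 5),
]

# Correct: accumulate the exposure block by block with the prescale in effect.
exposure = sum(lumi / prescale for _, lumi, prescale in lumi_blocks)

# Wrong: apply a single (e.g. the final) prescale to the total delivered lumi.
total = sum(lumi for _, lumi, _ in lumi_blocks)
naive = total / lumi_blocks[-1][2]

print(f"correct exposure: {exposure:.1f} nb^-1")   # 69.0
print(f"naive exposure:   {naive:.1f} nb^-1")      # 29.0
```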

13
D0 Trigger
  • Level 2 trigger
  • Increase processing bandwidth from the current
    0.5 kHz
  • Code speedup/max_opt: 1-2 kHz
  • Multiple Alphas/Betas in a crate: >2 kHz?
  • Silicon track trigger
  • Installation is in progress
  • Expect beam commissioning later this year (fiber
    tracker trigger needed)
  • Fiber tracker
  • Waiting for inputs expected in September
  • Calorimeter trigger
  • Running on-line (|eta| < 0.8 for now)
  • Requires better understanding of efficiency,
    cuts, rejection
  • Muon trigger
  • Single muon and di-muon triggers are well
    developed
  • Resolving problems with synchronization of signals
    coming from hundreds of front-ends
  • Detailed monitoring of the inputs is needed
  • Algorithms optimization (geometry for Pt cut,
    etc.) is in progress

14
D0 Trigger
  • Level 3 trigger
  • Running on-line for quite a while
  • Development of fast and efficient algorithms with
    good rejection is driving the schedule
  • Tracking Level 3 is coming on-line next week
  • Improve electron and muon rejection
  • Vertex for missing Et resolution improvement
  • Calorimeter is one of the most advanced systems
  • Efficiently filtering jets and em objects
  • Coming in p12: missing Et, non-linearity
    corrections, hot cell killer
  • Muon
  • Filtering on limited sub-set of muon triggers
  • Technical issues with muon geometry, efficiency
    to be better understood
  • Level 3 trigger basic philosophy
  • Develop inclusive single e, mu, tau, and jet
    filters as much as possible
  • Combine tools into di-muon, muon+jet, etc.
    filters
  • For all trigger levels
  • After trigger is running on-line and commissioned
  • ID and physics groups have to take the lead to
    optimize cuts, study efficiencies, and select
    optimal trigger terms for their analyses
  • Trigger board will coordinate these efforts

15
DAQ
  • Currently we can utilize the new DAQ system's rate
    capabilities only in a limited way, due to the
    complex system of distributed readout and
    front-end crates
  • Multi-buffering operation in the tracking crates
    has just arrived
  • Correct tick and turn operation to be verified
  • With an estimated DAQ rate capability in the range
    of 1 kHz, we are currently limited to 140 Hz for
    reliable on-line operation
  • The currently most serious problem is loss of
    synchronization in the muon scintillator crates
  • Every 10 minutes at 140 Hz
  • There are other concerns on the horizon
  • The tracking crates' front-end busy fraction at
    140 Hz is 4%
  • A DAQ rate task force has been organized (chaired
    by Ron Lipton)
  • A very well defined goal:
  • Bring experts from different sub-systems together
    to resolve problems preventing us from running at
    full DAQ capabilities
  • Support of the DAQ system operation will continue
    to be the responsibility of the DAQ team and the
    appropriate detector groups
  • The DAQ team has developed a list of improvements
    related to the core of the DAQ system
  • Software bug fixes (small annoyances)
  • Complete the Linux port
  • The Supervisor and Monitor Server are still on
    Windows
  • Improve shifter monitoring and documentation

16
On-line System and Controls
  • Key system for data handling/transfer and on-line
    data quality monitoring
  • Run II design rate of writing data to tapes at
    50Hz has been achieved
  • Operations
  • get serious with Significant Event System
    (alarms)
  • important issue for high quality data collection
  • active participation from detector/trigger groups
    is required
  • migrate detector groups to use COMICS downloads
  • migrate detector groups to use Hardware Database
  • improve support of Examines
  • Applications
  • deploy run configuration/quality database
  • introduce global DAQ monitor application
  • Software issues
  • install, test Python v2.2 (update to t02.23 soon)
  • support of software releases
  • System issues
  • solve NFS and static route problems
  • add more Examine nodes
  • control room monitor upgrades

17
D0 Operating Efficiency
  • Definition: % of collisions used in
  • Global runs (recorded)
  • Special runs
  • Commissioning
  • Goals
  • From Run I experience
  • 190 pb^-1 delivered, 120 pb^-1 recorded (about
    60%; Main Ring vetoes, etc.)
  • Should do better in Run II: > 90% (see the worked
    example after this list)
  • Moving target
  • The major reasons for luminosity loss are changing
    as more systems come on-line
  • What was a concern a few months/weeks ago is
    replaced by new enemies
  • Steady increase in operating efficiency during the
    February/March steady-state operation
  • Recent numbers
  • 60% for the last 2 months (lower before)
  • 75% last week (after implementation of
    multi-buffer readout)
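
A minimal worked example of this efficiency definition, using the Run I numbers quoted above:

```python
# Operating efficiency as recorded / delivered luminosity (Run I numbers
# from this slide).
delivered_run1 = 190.0   # pb^-1 delivered by the accelerator
recorded_run1 = 120.0    # pb^-1 recorded by D0

eff_run1 = recorded_run1 / delivered_run1
print(f"Run I operating efficiency: {eff_run1:.0%}")   # ~63%, the ~60% above

# Run II goal: > 90%.  Recent D0 values: ~60% over the last two months,
# ~75% last week after multi-buffer readout was implemented.
```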

18
D0 Operating Efficiency
  • Major components of the efficiency (see the
    combination sketch after this list)
  • Dead time
  • A sub-system is asserting a busy signal due to low
    bandwidth at L1, L2, L3, or DAQ
  • Adjustable: can be set to almost any value by
    adjusting trigger rates
  • Due to the lack of multi-buffering in the tracking
    crates we had to accept a relatively large L1
    front-end-busy fraction (20%) to collect a decent
    data sample before the June shutdown
  • Currently running at about a 4% front-end-busy
    fraction (140 Hz readout rate)
  • Downtime
  • Data collection is stopped (or cannot be started)
    due to failure of a specific component (or
    components)
  • Current issues are
  • Muon scintillator front-end crate errors
  • High Voltage trips
  • Re-downloading of problematic crates
  • Begin/end runs
  • Operator errors
  • Commissioning activities (prescale selection,
    trigger studies, etc.)
  • SCL inits
  • Used to synchronize the trigger/DAQ system
  • Over the last 3 months we average 3x10^3 SCL inits
    per month during global runs
  • About 15% of luminosity blocks (but not events)
    are wiped out
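
To first order these components combine multiplicatively; the sketch below shows the arithmetic, where the 4% front-end-busy fraction is from this slide and the 20% downtime fraction is an assumed, illustrative value rather than a measured one.

```python
# Sketch of how the efficiency components combine (illustrative only).
front_end_busy = 0.04     # ~4% L1 front-end busy at 140 Hz (this slide)
downtime_fraction = 0.20  # ASSUMED fraction of store time lost to stops,
                          # re-downloads, begin/end runs, SCL inits, etc.

overall = (1.0 - front_end_busy) * (1.0 - downtime_fraction)
print(f"overall operating efficiency ~ {overall:.0%}")   # ~77%

# SCL inits add a further, smaller loss: each init truncates the current
# luminosity block, and ~15% of luminosity blocks (but not events) are
# wiped out.
```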

19
Commissioning and Data Collection
  • Many important Run IIa systems are coming late
    (some are still in the pipeline)
  • This seriously complicates data collection
  • Commissioning requires a substantial amount of
    dedicated beam time
  • It requires frequent changes to operating
    conditions (trigger list, etc.) and documentation
  • We operate balancing between physics data
    collection and commissioning activities
  • Data are needed for Reco development and physics
    analysis
  • We have a decent luminosity sample which is
    providing interesting results
  • Our detector is much better than in Run I
  • Even with the luminosity available on tape, data
    samples in some channels (J/ψ → μμ, etc.) are
    orders of magnitude above Run I
  • In the long term we need the full D0 detector
    capabilities we planned for
  • Since the beginning of Run II we have performed
    the above-mentioned activities in parallel
  • All control room activities are coordinated by
    the Run Coordinator
  • Priorities are based on Collaboration goals
    defined together with D0 Spokes
  • The goal is to minimize global data-taking
    disruptions (doing whatever can be done without
    beam) while providing the necessary time for
    sub-system commissioning

20
Data Quality Monitoring
  • This issue was discussed at different meetings
    during this Workshop
  • Basic outcome is that we need multilayer data
    quality checks
  • Significant Event Server (alarms), different GUIs
    (HV, readout, etc.), checklists, etc.
  • Examines and Event Display
  • Detector, trigger, physics (Global Monitor)
  • Bad run lists
  • Off-line physics based monitoring
  • Checks everything from detectors, triggers,
    luminosity to reco, cuts, and databases
  • A run quality database which will combine (most
    of) the above results is under development
  • The earlier in the above list a problem is
    discovered, the better
  • Fix the problem and mark the data set with the
    appropriate flag
  • Sub-system experts are working hard to optimize
    monitoring without overloading shifters and
    physics analysis groups
  • The size of the minimal unit for monitoring was
    discussed: event? luminosity block number? run?
    (see the sketch after this list)
  • Complexity increases drastically as the monitoring
    unit gets smaller
  • Some physics studies may not require every system
    working perfectly
  • Stability of data collection is a key issue
  • In many cases it is easier to fix the problem than
    to develop monitoring tools
  • Success on all levels (including analysis)
    strongly depends upon stability
  • The last time the forward muon parameters were
    changed was in January 2002
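
As a rough illustration of the minimal-unit discussion above, here is a sketch of per-luminosity-block quality flags; the structure, names, and run/block numbers are hypothetical and are not the actual run quality database schema.

```python
# Hypothetical per-luminosity-block quality flags; finer segmentation than
# per-run catches short problems but multiplies the bookkeeping.
bad_lbns = {
    ("muon", 161204): {17, 18},   # (system, run) -> bad luminosity blocks
    ("cal", 161204): set(),
}

def usable(run, lbn, systems):
    """A luminosity block is usable for an analysis if none of the systems
    that analysis actually needs flagged the block as bad."""
    return all(lbn not in bad_lbns.get((system, run), set())
               for system in systems)

# A calorimeter-only study may not care about a muon problem:
print(usable(161204, 17, ["cal"]))            # True
print(usable(161204, 17, ["muon", "cal"]))    # False
```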

21
Specs, Reliability, and Schedule
  • In such a large experiment as D0, with many
    independent groups involved, following written
    specifications is absolutely critical
  • The CTT trigger decision time turned out to be
    above specs (the most recent example)
  • This delayed the CTT trigger implementation
  • It could cause a considerable amount of work for
    all sub-systems to re-time the experiment
  • A considerable amount of the experiment's downtime
    is due to the low reliability of different systems
  • We plan to run for many years with a limited
    number of accesses
  • Reliability should be taken into account at the
    design and construction/testing stage
  • Recent examples (serious in terms of downtime and
    potential damage)
  • Muon VME power supplies
  • Silicon low voltage power supplies
  • Interface boards water heat exchangers
  • Schedule is not what you might do, it is what
    you will do
  • The Lab's and funding agencies' approach is
    changing (Run IIb)
  • Mixture of different cultures in the experiment
  • Wrong time estimates cause considerable manpower
    losses and reduce prioritization efficiency

(Photo: in the cathedral area)
22
Summary
  • Incredible progress in the D0 detector
    capabilities over the last year
  • All detectors are working and are down to
    second-order improvements
  • No show stoppers!
  • All trigger levels are in operation with rapidly
    improving capabilities
  • New DAQ is performing well
  • Data collection monitoring is advancing
  • Data collection efficiency is reasonable
  • We have to finish installation of a few critical
    systems and commission recently installed
    equipment
  • We will provide the necessary support for these
    activities
  • Steady high-efficiency operation with in-depth
    data quality monitoring is becoming the highest
    priority for most of the sub-groups
  • Accelerator performance is critical for success
    of the D0 physics program
  • We should provide feedback and minimize
    disruptions to accelerator commissioning and
    operations
  • The current plan is to deliver 300 pb^-1 by the
    end of the year 2002
  • Strong support of the control room shifts and
    detector/trigger/on-line activities by the
    Collaboration is critical in getting interesting
    physics results soon