1
Commissioning Plans
  • K. Einsweiler, C. Gemme, C. Young
  • IDWeek 30 June 08

2
Commissioning activities without cooling (I)
  • Detector and safety
  • TX issues
  • Full reference campaign on a reduced sample of
    optoboards, TX, RX (OTDR, loopback, Ipin, power
    measurements)
  • Monitoring of TX health, now in the OFF state
  • Monitoring of TX conditions (rack states,
    interlocks, settings)
  • Fiber delays in the configuration?
  • Services monitoring
  • NTC and LV test
  • DSS final implementation
  • Beam pipe bake-out
  • Measurement of the thermal inertia of the system
    during bake-out (at reduced heater temperature)
  • DSS interlock verification before any operation
  • Optimization of the optoheater feedback (SW
    and/or HW) to keep the optoboards at a stable,
    set temperature.
  • Follow-up of a few problematic modules (HV open,
    high dispersion, etc.)
  • Studies on problematic cooling loops in SR1;
    changes to the plant software to be more flexible
    during commissioning (Kevin's talk).

3
Commissioning activities without cooling (II)
  • DCS
  • Temperature overview panel of the pixel package
    (the main use case is bake-out)
  • Cooling loop Detector view
  • Minor optimizations and visualization of software
    interlocks.
  • Optimization of DCS configuration (rights to
    load, tags, interface with DAQ, all parameters
    included)
  • DAQ
  • Calibration tools and procedure (next slides)
  • DSP code validation
  • DAQ/DCS
  • DCS/DAQ communication to define readiness before
    starting data taking

4
Calibration tools and Procedures
  • Ongoing for a long time; now our main software
    activity.
  • Focus is now on preparing procedures, i.e. a
    sequence of scans and analyses, writing to the
    calibration DB and/or a new configuration tag
    for the BOC/modules
  • Calibration console to run scans and the
    subsequent online analysis. Results are stored
    in the Calibration DB, into which we also need
    to import the data coming from production.
  • Offline analysis (Analysis Framework) to
    monitor the evolution of calibration results,
    compute differences with reference scans, and
    combine results from different parts of the
    detector.
  • To speed up finalization of the procedures:
  • a review process, in which each working group
    prepares a report on the validation of its
    procedure to be discussed with external
    reviewers
  • daily shifts in SR1 devoted to a specific
    procedure (a sketch of such a procedure follows)
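
A procedure of this kind might be sequenced roughly as follows. This is a minimal Python sketch, assuming hypothetical run_scan/analyse callables and a CalibrationDB placeholder; it is not the actual Calibration console API.

```python
# Minimal sketch of a calibration procedure: run a scan, analyse it,
# store the result, and optionally publish a new configuration tag.
# run_scan(), analyse() and CalibrationDB are hypothetical stand-ins,
# not the real console or database interfaces.

class CalibrationDB:
    """Placeholder for the calibration database interface."""
    def store(self, scan_type, result):
        print(f"storing {scan_type} result: {result}")

def run_procedure(steps, db, make_new_tag=None):
    """Run scan/analysis steps in order; abort on a failing analysis."""
    results = {}
    for scan_type, run_scan, analyse in steps:
        raw = run_scan()                 # launch the scan
        result = analyse(raw)            # online analysis of the histograms
        if result is None:               # failed analysis: stop the sequence
            raise RuntimeError(f"{scan_type} analysis failed")
        db.store(scan_type, result)      # write to the calibration DB
        results[scan_type] = result
    if make_new_tag:                     # e.g. a new BOC/module config tag
        make_new_tag(results)
    return results
```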

5
Online scans and analysis
[Diagram: the Calibration console drives an
iterative procedure over raw data collected per
pixel and per module, with a detector view of the
calibration results. Masks are generated so that
only the modules that failed (no histogram, or a
failing analysis) remain enabled in the next scan;
a sketch of this loop follows.]
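
The iterative loop in the diagram could look like the sketch below, where run_scan and analyse_module are hypothetical placeholders for the console scan and the online analysis:

```python
# Sketch of the iterative scan loop from the diagram: modules whose
# analysis fails (no histogram, or a failing fit) stay enabled for the
# next iteration, while good modules are masked off.
# run_scan() and analyse_module() are hypothetical placeholders.

def iterate_scans(modules, run_scan, analyse_module, max_iter=5):
    enabled = set(modules)           # modules to (re)scan
    good = {}
    for _ in range(max_iter):
        if not enabled:
            break
        histos = run_scan(enabled)   # scan only the enabled modules
        failed = set()
        for mod in enabled:
            result = analyse_module(histos.get(mod))
            if result is None:       # no histo or failing analysis
                failed.add(mod)      # keep it enabled next time
            else:
                good[mod] = result
        enabled = failed             # mask the good ones, retry the failed
    return good, enabled             # 'enabled' now holds persistent failures
```
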
6
Offline analysis Analysis Framework
  • Runs after the online analysis, with access to:
  • Calibration DB
  • Raw calibration ROOT data, only when necessary
  • PVSS DB, to correlate the results with DCS
    conditions
  • Metadata DB
  • Configuration DB, in some cases
  • Main use cases:
  • Overview of calibration results, because a
    complete calibration will likely require multiple
    scans
  • Trend of results vs conditions, such as
    temperature, time, etc.
  • Comparison with other scans or reference plots
    (a sketch follows below)
  • Complex analyses that need multiple input scans
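
Comparison with a reference scan can be as simple as a per-pixel difference with an outlier cut. A sketch, assuming NumPy arrays of per-pixel results and an illustrative 5-sigma cut:

```python
import numpy as np

# Sketch of a reference comparison in the Analysis Framework style:
# flag pixels whose calibration result drifted from the reference scan.
# The array shapes and the 5-sigma cut are illustrative assumptions.

def compare_to_reference(current, reference, n_sigma=5.0):
    """Return a boolean mask of pixels that moved significantly."""
    diff = current - reference
    spread = diff.std()
    return np.abs(diff - diff.mean()) > n_sigma * spread

current = np.random.normal(4000, 100, size=(160, 18))    # toy thresholds (e)
reference = np.random.normal(4000, 100, size=(160, 18))
moved = compare_to_reference(current, reference)
print(f"{moved.sum()} pixels drifted by more than 5 sigma")
```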

7
Commissioning activities with cooling
  • 1) Beam pipe bake-out
  • As soon as cooling is back in the detector,
    priority will be given to operating at least the
    B-layer loops (11) and the end-cap loops (24) in
    stable conditions.
  • Note that some of the needed loops are on the
    blacklist of loops that are problematic to run
    safely; solving the problems observed during the
    sign-off period will be necessary.
  • Stable operation of the L1 (19) and optoboard
    (8) loops would be desirable in case extra
    cooling needs to be produced.
  • Understanding the thermal inertia of the system
    is critical for detector safety: in case of a
    plant or individual loop failure, how long does
    the beam pipe take to go back to room
    temperature? (A back-of-envelope model is
    sketched after this list.)
  • Keeping the optosystem ON (opto loops, opto
    heaters, optoboards) could be necessary in case
    we need to switch on modules to stabilize
    unstable cooling loops. No optolink tuning is
    needed.
  • Time estimate (dominated by problems): two
    weeks (?) for cooling operation and bake-out
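
A zeroth-order answer to the warm-up question comes from assuming exponential relaxation toward room temperature (Newton's law of cooling); the time constant and temperatures below are illustrative assumptions, not measurements. The real time constant is exactly what the bake-out inertia measurement should provide.

```python
import math

# Back-of-envelope warm-up after a cooling failure, assuming simple
# exponential relaxation toward room temperature:
#   T(t) = T_room + (T0 - T_room) * exp(-t / tau)
# tau, T0 and the 1-degree margin below are illustrative assumptions.

def time_to_reach(T_target, T0, T_room, tau_hours):
    """Hours until T(t) crosses T_target while warming toward T_room."""
    return tau_hours * math.log((T0 - T_room) / (T_target - T_room))

tau = 2.0                 # assumed thermal time constant (hours)
T0, T_room = -10.0, 20.0  # assumed start and room temperatures (C)
print(f"{time_to_reach(19.0, T0, T_room, tau):.1f} h to come within 1 C of RT")
```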

8
Commissioning activities with cooling
  • 2a) Optoboard operation: heaters
  • The optoboard temperature has to be as stable as
    possible so as not to affect the optolink tuning,
    which is sensitive to temperature (how much?).
  • Optoheater operation is therefore critical in
    this period, when the environmental temperature
    will not be stable (the set of loops and modules
    ON will vary often) and individual optoboards
    will be switched ON/OFF.
  • NEEDED: an automatic feedback algorithm to keep
    the average temperature of the optoboard array
    stable vs conditions (one optoheater serves six
    optoboards); a sketch follows.
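
One plausible form for such a feedback is a PI loop on the average of the six optoboard temperatures. A minimal sketch, with the setpoint, gains and power limit as placeholder values; a real implementation would live in DCS (PVSS) or hardware.

```python
# Minimal sketch of the requested feedback: a PI controller that trims
# one optoheater's power to hold the average temperature of its six
# optoboards at a setpoint. Gains, limits and setpoint are placeholder
# assumptions, not tuned values.

class OptoheaterPI:
    def __init__(self, setpoint_C, kp=0.5, ki=0.05, p_max=10.0):
        self.setpoint = setpoint_C
        self.kp, self.ki = kp, ki
        self.p_max = p_max          # heater power limit (W), assumed
        self.integral = 0.0

    def update(self, board_temps_C, dt_s):
        """One control step from the six optoboard temperatures."""
        avg = sum(board_temps_C) / len(board_temps_C)
        error = self.setpoint - avg
        self.integral += error * dt_s
        power = self.kp * error + self.ki * self.integral
        return min(max(power, 0.0), self.p_max)   # clamp to heater range

ctrl = OptoheaterPI(setpoint_C=20.0)
print(ctrl.update([18.8, 19.1, 19.0, 18.9, 19.2, 19.0], dt_s=10.0))
```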

9
Commissioning activities with cooling
  • 2b) Optoboard operation: tuning
  • Optolink tuning exists (many scans and analyses).
    The robustness of the old DSP code was
    qualitatively tested during the pit CT and the
    sign-off.
  • The new DSP code is fast and powerful
    (pseudo-random sequence, bit-by-bit comparison
    vs time, able to detect slow turn-on). Being
    fast, it makes it possible to push down the
    measurable error rate by sending a longer
    sequence (see the sketch after this list).
  • Validation of the optolink tuning during the pit
    CT and sign-off was done indirectly using a
    threshold scan. Should we systematically use a
    digital scan?
  • Optoboard operation and tuning are the first
    topic on which we can test the performance of
    the calibration tools, the management of the DCS
    and DAQ configurations, etc.
  • A small-scale exercise of the configuration and
    condition management that will be critical in
    module tuning
  • Exercise doable even now in the pit
  • Time estimate: < one week
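
The longer-sequence point follows from counting statistics: if n bits are compared and zero errors are seen, the ~95% CL upper limit on the bit error rate is roughly 3/n, so a tenfold longer sequence gives a tenfold tighter limit. A toy illustration, not the DSP code:

```python
import random

# Toy illustration of the bit-by-bit PRBS comparison and why a longer
# sequence tightens the error-rate limit: with n bits and zero observed
# errors, the ~95% CL upper limit on the BER is about 3/n.

def compare_bits(sent, received):
    """Count mismatches between transmitted and received sequences."""
    return sum(s != r for s, r in zip(sent, received))

for n in (10_000, 100_000, 1_000_000):
    sent = [random.getrandbits(1) for _ in range(n)]
    received = list(sent)              # assume an error-free link here
    errors = compare_bits(sent, received)
    if errors == 0:
        print(f"n={n:>9}: 0 errors -> BER < {3.0 / n:.1e} at ~95% CL")
```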

10
Commissioning activities with cooling
  • 3) Module Calibrations
  • During the sign-off, we used the LOAD
    configuration (tuning at room temperature, pixel
    masks inherited from the module and stave warm
    tests)
  • Only threshold scans were run (data for 1100
    (CHECK!!) modules)
  • Temperature 0-5 C (CHECK!!)
  • Noise: 170 e
  • Threshold average: two peaks, at 3800 e and
    4100 e
  • Threshold dispersion: two peaks, at 70 e and
    130 e
  • The configuration looks good enough to be used
    as a starting point.

11
  • Correlation between higher threshold and lower
    RMS
  • It should come from the production sites:
  • modules with a higher threshold are usually on
    the same mechanical support
  • No difference between disks and barrel.

12
Commissioning activities with cooling
  • 3) Module Calibrations
  • The aim is to have, for each module, the
    conditions needed by the offline analysis
    (threshold, ToT, pixel masks, etc.), so as to
    have a baseline for the detector conditions and
    performance (number of detached bumps, etc.) as
    soon as possible
  • Minimal program (optional, lower-priority items
    are listed at the end):
  • Starting from the LOAD config (GDAC, TDACs, IF,
    FDACs from production), enable all pixels in
    data taking (*)
  • Threshold scan
  • Retuning of the few modules with threshold
    dispersion > 250 e (should be O(10))
  • Mask for pixels with bad fits and with
    Noise/Threshold > x, x being a cut tuned from
    data taking (see the sketch after this list)
  • Ability to move outliers or apply a global
    shift in case of need.
  • ToT scan at 20 ke to evaluate whether ToT
    tuning is necessary (assumed not, but no data
    available)
  • ToT tuning
  • ToT calibration
  • Threshold scan with HV off to study the bump
    connections
  • Monleak scan to have a reference in case of
    beam losses
  • Other scans, less critical, are not included in
    the minimal set:
  • Crosstalk, timing calibration (t0, in-time
    threshold, timewalk)
  • Time estimate: one (?) week
  • (*) Pixels were masked as too hot in a source
    scan in self-triggering mode. It may be that
    with an external trigger not all of them are
    too noisy for the DAQ.
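
The masking step could be expressed as below; a sketch assuming per-pixel NumPy arrays (fit status, noise, threshold) from the threshold-scan analysis, with x = 0.1 as an illustrative value only:

```python
import numpy as np

# Sketch of the pixel-mask step: mask pixels whose S-curve fit failed
# or whose noise/threshold ratio exceeds a cut x tuned from data taking.
# fit_ok, noise and threshold are assumed per-pixel arrays from the
# threshold-scan analysis; the toy arrays and x=0.1 are illustrative.

def build_pixel_mask(fit_ok, noise, threshold, x=0.1):
    """True = pixel masked out."""
    bad_fit = ~fit_ok
    with np.errstate(divide="ignore", invalid="ignore"):
        noisy = (noise / threshold) > x
    return bad_fit | noisy

fit_ok = np.random.rand(160, 18) > 0.01            # toy: ~1% failed fits
noise = np.random.normal(170, 20, size=(160, 18))  # toy noise (e)
threshold = np.random.normal(4000, 120, size=(160, 18))
mask = build_pixel_mask(fit_ok, noise, threshold)
print(f"masking {mask.sum()} of {mask.size} pixels")
```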

13
Commissioning activities with cooling
  • 4) Data taking
  • NOTE that the data flow from the pixel RODs to
    the ROSes, SFI, etc. should already be qualified
    by combined tests with simulated hits, as should
    the software compatibility with ATLAS TDAQ.
  • When a part of the detector is sufficiently
    calibrated (threshold and ToT tuned, pixel masks
    from threshold scans):
  • Run standalone with random triggers to mask
    noisy pixels -> the pixel detector is silent,
    with no pixel at an occupancy higher than xx
    (CHECK 10E-5??); see the sketch after this list
  • Technical runs with an external (ATLAS CTP,
    SCT?) trigger for timing
  • Standalone data taking will be run on each part
    of the detector in order to mask the noisy
    pixels before joining a combined run.
  • Time estimate: one (?) week
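
Noisy-pixel masking from a random-trigger run reduces to an occupancy cut. A sketch, with toy hit counts and the (to-be-checked) 1e-5 cut quoted in the slide:

```python
import numpy as np

# Sketch of noisy-pixel masking from a random-trigger standalone run:
# occupancy = hits per pixel per trigger; pixels above the cut (here
# the 1e-5 value quoted with a CHECK in the slide) join the mask.

def noisy_pixel_mask(hits, n_triggers, occupancy_cut=1e-5):
    """True = pixel to be masked before joining a combined run."""
    occupancy = hits / float(n_triggers)
    return occupancy > occupancy_cut

n_triggers = 1_000_000
hits = np.random.poisson(0.5, size=(160, 18))   # toy hit counts
hits[0, 0] = 500                                # one genuinely noisy pixel
mask = noisy_pixel_mask(hits, n_triggers)
print(f"{mask.sum()} noisy pixel(s) masked")
```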

14
Minimal commissioning time
  • The phases described above will be sequential on
    any given part of the detector, but how long
    they will really take, and how much we will be
    able to run in parallel (cooling commissioning,
    calibration and data taking), will mainly depend
    on the type and severity of the problems we will
    have to solve.
  • This minimal program, including optoboard
    operation, module calibrations, standalone data
    taking and combined technical runs, is expected
    to take O(3 to 4 weeks), assuming that no big
    surprises come from the detector.
  • Note that all calibrations involving modules
    need HV ON, which is interlocked by the STABLE
    BEAM signal. This could be critical given the
    present LHC schedule: the calibration program
    could be stretched between cooling availability
    and the injection of the first beams.

15
Other slides (Charlie?)
  • Training, shifts (when to call for non-experts)
    and OTP
  • Operations pages
  • manuals
  • status
  • whiteboard

16
AoB
  • Preparation for shift
    documentation/training/operation
  • Training: lectures, hands-on sessions, assisted
    shifts
  • Use of the operations page
  • Use of the white board for communication, e.g.
    fiber repair, PP2 temp not working, etc.
  • Manuals (expert, shifter)
  • Follow-up of the recovery procedures in the
    expert manuals
  • Status page: it should reflect the status of
    DAQ and DCS, also during recovery.

The last recovery took 6 days (!) due to a
misbehaving network switch (pixel, netop and
sysadmin communication was not very efficient). We
finally have all the LV Wiener supplies! (But
yesterday we had a failure in a Wiener PS for the
ROD crates.)
17
Shift Training
  • Typical steps.
  • Presentation by guru.
  • DCS by Susanne.
  • DAQ by Paolo.
  • Hands-on session with experts.
  • Several DCS and DAQ sessions so far.
  • More as needed.
  • Take training/practice shift.
  • Final decision by experts in charge.
  • Participating in the three steps is a necessary
    but insufficient condition to be qualified as a
    shifter.

18
[Table: shifter-qualification checklist, with
columns for the DAQ and DCS requirements and the
date when each was fulfilled.]
19
Pit Activities
  • ATLAS plans to have combined runs every weekend.
  • Other activities planned for weekdays.
  • See for example Thorsten's talk at the Weekly
    Run Meeting
    (http://indico.cern.ch/conferenceDisplay.py?confId=35739).
  • Don't take the details at face value.
  • We will have standalone data-taking runs to test
    and validate as we find necessary/useful.
  • Join in combined weekend runs when appropriate.
  • Use simulator.
  • Demonstrate stability over extended periods or
    discover problems to be fixed.
  • Encouragement to take care of all the so-called
    trivial things.
  • An opportunity for everyone to become familiar
    with the activities.