1
Status of MEG Software
  • Fabrizio Cei
  • INFN and University of Pisa
  • PSI Review Meeting
  • PSI, 14 February 2007

2
Outline
  • MEG Software organization
  • Status of Monte Carlo simulation
  • Short reminder of the analysis framework
  • Status of analysis codes
  • Status of MEG computing power @PSI
  • Problem of overall data size
  • Conclusions

3
MEG Software Organization
[Flow diagram] The MC simulation feeds the Bartender (ROME), which adds the waveform and pile-up simulation; its output, like the real data from the DAQ, is processed by the Analyzer (ROME).
4
Status of Monte Carlo
  • MEGMC program:
  • - written in Geant3.21
  • - data output in ZEBRA banks, automatically converted to C structures (readable from the analysis codes); a sketch of such a structure follows this slide
  • - simulates pair (e.g. μ→eγ) or single-particle (e.g. Michel positron) events
  • - full simulation of the detector calibration devices
  • - specific modules (tbeam, tbtc) for the LXe/TC beam tests
  • - next release soon.
  • Updates since the last meeting:
  • - common magnetic field with MEGAnalyzer
  • - different geometrical configurations (final or run 2006).
  • To be done:
  • - a few event types still missing, e.g. LED
  • - revision for radiative decay and AIF.
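As a rough illustration of what a bank converted to a C structure can look like, a minimal sketch (the field names here are hypothetical, not the actual MEGMC output format):

```cpp
// Hypothetical sketch of a ZEBRA bank mapped onto a C structure; the real
// MEGMC structures and field names differ. The point is that the analysis
// codes can read plain typed fields instead of raw ZEBRA banks.
struct MCTrackBank {
    int   nTracks;          /* number of simulated tracks in the event */
    int   particleId[100];  /* Geant3 particle code per track          */
    float momentum[100][3]; /* px, py, pz in MeV/c                     */
    float vertex[100][3];   /* production vertex in cm                 */
};
```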

5
Examples of MC events
[Event displays] A μ→eγ event and a Michel positron, in the RUN 2006 configuration: no LXe, no TICZ, 8 DCH, TICP displaced by 12 cm.
6
MC mass production
  • Expected to start very soon
  • Event types and numbers to be decided.

Typical CPU and disk requirements for gem:

  Event type           CPU time / 10k events   Disk size / 10k events
  Signal               24 hrs                  2.5 GB
  Alpha                1.3 hrs                 400 MB
  Michel (> 40 MeV)    10 min                  700 MB
  Michel (> 5 MeV)     5 min                   550 MB
7
Software framework ROME
ROME (Root based Object Oriented Midas Environment) is a framework generator. It uses only six different C++ objects. ROME does the dirty job: it creates the structure, defines the C++ classes, writes the many include files, and builds the dependences and the hierarchy. The users and detector experts do the smart job: they write the analysis methods (tasks) and the related folders (data stored in memory) and trees (data stored on disk). The most important feature is the modularity: tasks can be exchanged at runtime. Main developer: M. Schneebeli (PSI).
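As a rough illustration of the task/folder division of labour, a minimal sketch (not ROME's generated code; the class and method names are hypothetical):

```cpp
// Hypothetical sketch of ROME's task/folder idea, not generated ROME code.
// A "folder" is a plain data holder kept in memory; a "task" is an analysis
// method that fills it. Tasks are independent modules that the framework
// can enable, disable, or reorder at runtime.
#include <vector>

// Folder: event data stored in memory (ROME would generate this class).
struct LXeHitFolder {
    std::vector<double> charge;  // integrated charge per PMT
    std::vector<double> time;    // reconstructed hit time per PMT
};

// Task: the "smart job" written by the detector expert.
class LXeChargeTask {
public:
    explicit LXeChargeTask(LXeHitFolder& folder) : fFolder(folder) {}

    // Called once per event by the framework.
    void Event(const std::vector<std::vector<double>>& waveforms) {
        fFolder.charge.clear();
        for (const auto& wf : waveforms) {
            double q = 0.0;
            for (double sample : wf) q += sample;  // naive integration
            fFolder.charge.push_back(q);
        }
    }

private:
    LXeHitFolder& fFolder;  // the task writes its results into the folder
};
```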
8
ROME Interconnections
[Data-flow diagram] Input is read from disk (ZEBRA, MIDAS or ROOT format) and fills the folders; the tasks fill and flag the folders, whose contents are written to disk in ROOT format.
9
ROME Event Display (ARGUS)
Display includes tracks and energy deposits
10
Waveform/track display
[Screenshots] Liquid Xenon and Drift Chamber displays; both are used online and in the analysis.
11
Trigger/TICP display
[Screenshots] Rates of the individual channels; TC bars.
12
Status of MEGBartender
  • MEGBartender runs stably.
  • Event mixing with multiple formats planned (presently ZEBRA only).
  • Refinement of the simulation parameters on the basis of the run 2006 data.
  • Waveform simulation completed for LXe, TICP/TICZ and the DCH wires; work needed for the pad simulation (a sketch of the idea follows).
  • Several trigger simulations included.
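As an illustration of what the waveform simulation step involves, a minimal sketch (hypothetical pulse shape and names, not Bartender code): place a template pulse at the hit time, scale it by the charge, and add electronics noise. Pile-up mixing then amounts to summing such waveforms sample by sample.

```cpp
// Hypothetical sketch of waveform synthesis, not actual MEGBartender code.
#include <cmath>
#include <random>
#include <vector>

// Build a digitized waveform: a template pulse placed at the hit time,
// scaled by the hit charge, with Gaussian electronics noise added.
std::vector<double> SimulateWaveform(double hitTime, double charge,
                                     int nSamples, double dt,
                                     double noiseSigma, std::mt19937& rng) {
    std::normal_distribution<double> noise(0.0, noiseSigma);
    std::vector<double> wf(nSamples);
    const double tau = 10.0;  // assumed pulse decay time (ns), illustrative
    for (int i = 0; i < nSamples; ++i) {
        double t = i * dt - hitTime;
        double pulse = (t > 0.0) ? charge * (t / tau) * std::exp(1.0 - t / tau)
                                 : 0.0;
        wf[i] = pulse + noise(rng);
    }
    return wf;
}
```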

13
Status of MEGAnalyzer (1)
  • MEGAnalyzer modified for use both online and offline.
  • LXe (R. Sawada (Tokyo), G. Signorelli (Pisa), Y. Uchiyama (Tokyo), S. Yamada (UCI), F. Cei (Pisa)):
  • - waveform decoding implemented
  • - charge-based reconstruction algorithms implemented, most of them tested (a sketch follows this slide)
  • - timing reconstruction and calibration algorithms under implementation and testing
  • - peak finding and pattern recognition tasks existing.
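A minimal sketch of a charge-based reconstruction step (hypothetical, not the MEGAnalyzer code): estimate the pedestal from the pre-trigger samples and integrate the pedestal-subtracted waveform.

```cpp
// Hypothetical sketch of charge reconstruction from a decoded waveform.
#include <cstddef>
#include <vector>

// Estimate the pedestal from the first nPedestalSamples, then integrate the
// pedestal-subtracted waveform to get the charge seen by one PMT.
double IntegrateCharge(const std::vector<double>& wf,
                       std::size_t nPedestalSamples) {
    double pedestal = 0.0;
    for (std::size_t i = 0; i < nPedestalSamples; ++i) pedestal += wf[i];
    pedestal /= nPedestalSamples;

    double charge = 0.0;
    for (std::size_t i = nPedestalSamples; i < wf.size(); ++i)
        charge += wf[i] - pedestal;  // sum over the signal region
    return charge;                   // in ADC counts x samples
}
```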

14
Status of MEGAnalyzer (2)
  • TICP/Z (P. Cattaneo (Pavia), Y. Uchiyama, D. Zanello (Rome), F. Xiao (UCI), A. Barchiesi (Rome), S. Dussoni (Genova)):
  • - waveform analysis implemented
  • - preliminary hit reconstruction implemented: (Q, tL, tR) → (z, ⟨t⟩); see the sketch after this slide
  • - still missing: correlation between adjacent bars and with the DCH.
  • DCH (H. Nishiguchi (Tokyo), M. Schneebeli, Y. Hisamatsu, V. Tumakov (UCI)):
  • - 3D map of the magnetic field
  • - tracking by a (preliminary) Kalman filter implemented
  • - waveform decoding existing
  • - extraction of the z coordinate from the cathode pad information in progress
  • - still missing: full hit reconstruction (3D information); pattern recognition at the beginning.
  • The software is changing very fast on the basis of the 2006 data.
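Behind a (Q, tL, tR) → (z, ⟨t⟩) reconstruction for a bar read out at both ends stand the usual two-ended-counter relations; the sketch below assumes an effective light speed vEff in the scintillator and is not necessarily the exact MEG implementation.

```cpp
// Usual relations for a scintillator bar read out at both ends; illustrative
// only, with an assumed effective light speed, not the exact MEG algorithm.
struct BarHit {
    double z;      // hit position along the bar (cm)
    double tMean;  // <t>, the impact time up to a fixed offset (ns)
};

// tL, tR: signal times at the two ends (ns); vEff: effective light speed in
// the scintillator (cm/ns). The sign of z depends on the geometry convention.
BarHit ReconstructBarHit(double tL, double tR, double vEff) {
    return BarHit{ 0.5 * vEff * (tR - tL),  // z from the time difference
                   0.5 * (tL + tR) };       // <t> from the time sum
}
```

The measured charges Q at the two ends can similarly be combined, e.g. into an attenuation-corrected amplitude estimate.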

15
Status of MEGAnalyzer (3)
  • Trigger (G. Signorelli, D. Nicolò (Pisa)):
  • - trigger info/banks coded in MIDAS: run #, event #, trigger code, WFMs, scalers (useful for determining the run/live time)
  • - charge and timing reconstruction algorithms implemented and under testing.
  • Database (R. Sawada):
  • - two databases, MySQL and sqlite3, with easy conversion between them
  • - MySQL needs the network; sqlite3, kept in a separate svn module (megdb), serves the stand-alone environment
  • - contents include geometry, trigger/hardware configuration, run table, physical constants, reconstruction coefficients (a stand-alone access sketch follows).
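As an illustration of stand-alone database access, a sketch using the public sqlite3 C API; the table and column names are invented, and the real megdb schema differs.

```cpp
// Hypothetical sketch of reading a reconstruction coefficient from a local
// sqlite3 file. The table and column names are invented for illustration.
#include <sqlite3.h>

double ReadCoefficient(const char* dbFile, int channel) {
    sqlite3* db = nullptr;
    double value = 0.0;
    if (sqlite3_open(dbFile, &db) != SQLITE_OK) return value;

    sqlite3_stmt* stmt = nullptr;
    const char* sql = "SELECT value FROM reco_coefficients WHERE channel = ?;";
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) == SQLITE_OK) {
        sqlite3_bind_int(stmt, 1, channel);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            value = sqlite3_column_double(stmt, 0);
    }
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return value;
}
```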


16
DRS in MEGAnalyzer
  • Data structure established (a sketch follows this slide):
  • - it is possible to write limited time regions of the waveforms
  • - the data size (including the header) can be zero if the channel is not interesting
  • - each chip has a clock and a trigger signal for calibration.
  • Calibration procedures under development and to be studied in detail.
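A rough illustration of a structure with the two properties above (invented field names, not the actual MEG/DRS data format):

```cpp
// Hypothetical sketch of a bank that stores only a limited time region of a
// waveform; the field names are invented, not the actual MEG data format.
#include <cstdint>
#include <vector>

struct WaveformRegion {
    std::uint16_t channelId;    // DRS channel
    std::uint16_t firstSample;  // start of the stored region in the full window
    std::uint16_t nSamples;     // 0 => channel not interesting, no payload
    std::vector<std::int16_t> samples;  // only the selected region is kept
};
```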

17
MEG computing @PSI
Offline cluster for MEG (LCMEG)
[Diagram] Five Sun Fire X4100 nodes (quad core, 4 GB each) on GBit Ethernet, connected through a Fibre Channel switch to 15 x 500 GB SATA disks.
  • Presently available: 20 CPU cores, 30 TB disk, Sun Grid Engine.
  • Final situation: 64 CPU cores, 100 TB disk in total.
  • Easily extensible; planned within 2007.

18
Overall Data Rate
  • In the 2006 run we had 2.8 MB events (50% of the DC + TC, no LXe: about 1/3 of the final configuration) and could run at 10 Hz.
  • Full-detector estimate without any reduction: 9 MB/event at 5 Hz over 10^7 s, i.e. 450 TB/year.
  • Possible strategies for reducing the data size (a zero-suppression sketch follows):
  • - zero suppression (50% on LXe, 80% on DC)
  • - ADC/TDC values only for non-signal-like events
  • - partial waveform readout (reduced window size)
  • - keep the timing information in an SQL database
  • - 3rd-level trigger in the online cluster (e.g. linear fit, fast tracking).
  • With these reductions: about 30 TB/year.
  • Effects on the data to be evaluated.
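A minimal sketch of the zero-suppression idea (hypothetical, not the MEG implementation): keep a waveform only if some sample exceeds the pedestal by a threshold, otherwise drop it entirely, which is where the large size reduction quoted above would come from.

```cpp
// Hypothetical zero-suppression sketch, not the actual MEG implementation.
#include <cmath>
#include <vector>

// Keep the waveform only if some sample deviates from the pedestal by more
// than the threshold; otherwise the channel is suppressed (size zero).
bool KeepWaveform(const std::vector<double>& wf, double pedestal,
                  double threshold) {
    for (double sample : wf)
        if (std::fabs(sample - pedestal) > threshold) return true;
    return false;  // no significant signal: suppress this channel
}
```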
19
Conclusions
  • The MEG software is in an advanced state of preparation:
  • - MEGMC and MEGBartender almost finished
  • - MEGAnalyzer: LXe close to completion; TICP/Z and DCH with some parts missing, but fast evolution and significant effort; Trigger/Database ok.
  • First part (about 1/3) of the MEG offline cluster in operation.
  • Data size to be reduced: discussion under way.

20
Backup slides
21
MC Man Power
  • All persons at 10–50% of their time.
  • Coordination: S. Yamada (UCI), F. Cei (Pisa)
  • SVN repository: S. Yamada
  • Event generation: F. Cei, S. Yamada, Y. Hisamatsu (Tokyo)
  • LXe: S. Yamada, F. Cei, G. Signorelli (Pisa)
  • TICP/TICZ: P. Cattaneo, Y. Uchiyama (Tokyo)
  • DCH: H. Nishiguchi (Tokyo), M. Hillebrandt (PSI)
  • Beam / Magnet: W. Ootani (Tokyo)
  • Target: V. Tumakov (UCI)
  • NaI: Y. Nishimura (Tokyo)
  • Calibrations: F. Cei