Report of the Computing and Readout Working Group - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Report of the Computing and Readout Working Group


1
Report of the Computing and Readout Working Group
Operations / Robotics / Readout
  • presented by
  • Adrian Biland / ETH Zurich
  • CTA Meeting
  • Paris, 2 March 2007

2
Active WG Members
  • Angelo Antonelli (REM)
  • Adrian Biland (MAGIC)
  • Thomas Bretz (MAGIC)
  • Stefano Covino (REM)
  • Andrii Neronov (ISDC)
  • Nicolas Produit (ISDC)
  • Ullrich Schwanke (HESS)
  • Christian Stegmann (HESS)
  • Roland Walter (ISDC)
  • (several more on mailing list)
  • >50% not from HESS / MAGIC !!
  • Lots of experience from:
  • - Cherenkov telescopes
  • - astronomy data formats and interfaces
  • - huge data rates (accelerator experiments)

An internal draft of a TDR-like paper exists.
3
Two fundamentally different approaches to operation:
  • collaboration mode (as in particle physics)
  • observatory mode (usual in astronomy)
  • main differences (simplified; mixed modes exist):

collaboration vs. observatory:
  • funding:
    collaboration: institutes ask a funding agency and build part of the experiment
    observatory: agencies pay the consortium
  • manpower (salaries):
    collaboration: PhD students (ab)used as workhorses for construction and operation
    observatory: construction salaries included in the cost; operation by hired staff personnel
4
  • proposals:
    collaboration: from collaborators
    observatory: from anybody -> PI
  • data owner / author list:
    collaboration: full collaboration (huge author lists)
    observatory: PI (and Co-Is) (short author lists)
  • data availability:
    collaboration: kept internally (often lost at the end)
    observatory: public domain (delayed) -> data mining
  • personal dedication:
    collaboration: extreme cases start as a PhD student and stay until retirement (LHC)
    observatory: "my experiment": have an idea, check the data archive, write a paper, get the Nobel Prize (without any contribution to the experiment)
  • But usually the experiment becomes renowned, not the PI (e.g. "HUBBLE found ...")
5
doing physics
  • Particle physics: (usually) a clean, well-defined environment
  • clean, self-consistent theories to check -> predictions
  • dedicated, 'simple' experiments to test the predictions
  • do EXPERIMENTS -> collaboration mode OK
  • Astronomy: no control over the environment or the 'setup of the experiment'
  • no clean theory: the fundamental physics is obscured by complicated standard-physics processes
  • usually need many sources and/or many observatories (MWL) to get conclusive answers
  • OBSERVATIONS -> observatory mode, data mining (for MWL)

6
doing physics
  • What about CTA? We are in a paradigm shift:
  • WHIPPLE, HEGRA, CAT, ....: 'source hunting'
  • - proof of concept
  • - invent new fundamental techniques
  • - new source -> new conference
  • Collaboration mode is OK
  • H.E.S.S., MAGIC, ...: 'prove that VHE is an important part of astronomy'
  • - mature technology
  • - surprising richness of the VHE sky
  • - impressive results, but few conclusive answers
  • - MWL becoming more and more important
  • -> must incorporate external experts
  • Collaboration mode getting difficult (author lists !!!)

7
doing physics
  • CTA: 'understand the physics' (hopefully)
  • - expect 1000 sources (cannot use a PhD student per source)
  • - need automatic data (pre)processing
  • - can do statistical analyses
  • compare astrophysics: a lot was learned from the statistics of the Hertzsprung-Russell diagram ...
  • - MWL becoming extremely important
  • - for steady/periodic sources: data mining
  • - for transients: additionally dedicated MWL campaigns
  • Final goal: UNDERSTANDING THE PHYSICS
  • -> need to incorporate the brightest minds
  • (not all of them will be willing to dedicate themselves to an experiment for several years ...)

8
doing physics
  • -> CTA is better operated (at least part time) as an Observatory (allow guest observers)
  • and should allow data mining (public data access)
  • details to be discussed/agreed:
  • - how much time for guest observers
  • - how long a delay until data become public domain
  • This can make the difference between CTA being seen as an obscure corner or as a major pillar of astronomy ....

9
CTA as (open) Observatory
  • Need a well-defined procedure for submitting and processing proposals
  • Need existence of, and well-defined access to, data and analysis programs
  • ...
  • What is more efficient: centralized or de-centralized structures ???

10
Array Control Center (onsite)
  • tasks:
  • - monitor operation of CTA (goal: automatic/robotic operation, but with >>10 (100?) telescopes there will be hardware problems)
  • - ensure safety (nobody within the array during operation; what about power failures at sunrise ....)
  • - buffer raw data until shipped to the data center (even with a fast internet connection we must foresee a buffer in case of problems; see the sketch below)
  • - monitor the quick-look analysis (onsite analysis)
  • - ....
  • Most of this can be done by local technicians
  • (but if we want to send out alarms, we need verification by experts)
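
The raw-data buffering task above is essentially a store-and-forward problem. The following minimal Python sketch only illustrates that idea; the directory layout and the ship_to_data_centre stub are hypothetical placeholders, not part of any actual CTA design:

```python
# Sketch of an on-site store-and-forward buffer: raw-data files stay on local
# disk until the transfer to the off-site data centre is confirmed.
# All names (buffer_dir, ship_to_data_centre) are illustrative placeholders.
import shutil
from pathlib import Path

def ship_to_data_centre(path: Path) -> bool:
    """Placeholder for the real transfer over the site's network link
    (hypothetical; should return True only once reception is confirmed)."""
    return False   # stub: nothing is shipped until a real transfer is plugged in

def flush_buffer(buffer_dir: Path, shipped_dir: Path) -> None:
    """Try to ship every buffered run; keep files that could not be sent."""
    shipped_dir.mkdir(exist_ok=True)
    for run_file in sorted(buffer_dir.glob("run_*.raw")):
        try:
            if ship_to_data_centre(run_file):
                # Move (not delete) so a local copy survives until archiving is verified.
                shutil.move(str(run_file), shipped_dir / run_file.name)
        except OSError:
            # Link down or data centre unreachable: leave the file in the
            # buffer and retry on the next pass.
            continue
```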

11
Operations (center?)
  • - submission and handling of proposals
  • - plan the operation of CTA: scheduling
  • - handle incoming ToO alarms (GRB alerts go directly to ArrayCtrl); see the sketch below
  • - control the operation of CTA (automatic/robotic operation -> can also work if there is some downtime in communication)
  • - control the hardware status of CTA (slow-control level)
  • - ...
  • Needs CTA hardware/physics experts (available / on call)
  • (could be 12 time zones away -> no night shifts ?!)
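
To illustrate how ToO alarms could interact with the planned schedule, here is a toy Python sketch of a priority queue in which an incoming alert pre-empts the regular target list; the class names, priorities, and targets are made up for illustration and do not describe any proposed CTA scheduler:

```python
# Toy scheduling illustration: a planned target list that an incoming
# Target-of-Opportunity (e.g. a GRB alert) can pre-empt.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Observation:
    priority: int                        # lower number = observed first
    target: str = field(compare=False)
    hours: float = field(compare=False)

class NightSchedule:
    def __init__(self, planned):
        self._queue = list(planned)
        heapq.heapify(self._queue)

    def add_too_alert(self, target: str, hours: float) -> None:
        # ToO alerts jump to the head of the queue (priority 0);
        # GRB alerts would additionally be forwarded straight to ArrayCtrl.
        heapq.heappush(self._queue, Observation(0, target, hours))

    def next_observation(self) -> Observation:
        return heapq.heappop(self._queue)

schedule = NightSchedule([Observation(2, "Crab Nebula", 1.5),
                          Observation(1, "PKS 2155-304", 2.0)])
schedule.add_too_alert("GRB alert", 0.5)
print(schedule.next_observation().target)   # -> "GRB alert"
```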

12
Data (center?)
  • Different possible implementations exist; the extreme cases:
  • - data repository
  • - science support center
  • For an (open) Observatory:
  • it is less important where the disks are located than to have a dedicated contact point for inexperienced users
  • (to be defined at what level of luxury: users get raw data vs. users get final plots)

13
Data (center?)
  • - receive and store data from CTA
  • - calibration (-> check hardware status)
  • - automatic (pre)processing of data
  • - archive data at different stages: raw, compressed, preprocessed, photon list, ...(?)
  • - ensure availability of the data to predefined groups: PI, CTA consortium, public domain (see the sketch below)
  • - make available (and improve) standard analysis tools
  • - offer data-analysis training sessions/schools
  • ...
  • Needs CTA analysis experts
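
As an illustration of the "data stages x access groups" bookkeeping listed above, the following Python sketch encodes one hypothetical access rule (consortium members see every stage, the public sees only photon lists after an assumed one-year delay); the actual policy is exactly what remains to be agreed:

```python
# Sketch of data stages and access groups; the stage names and the
# proprietary-period rule are illustrative assumptions, not agreed CTA policy.
from datetime import date, timedelta
from enum import Enum

class DataLevel(Enum):
    RAW = "raw"
    COMPRESSED = "compressed"
    PREPROCESSED = "preprocessed"
    PHOTON_LIST = "photon-list"

PROPRIETARY_PERIOD = timedelta(days=365)   # assumed one-year delay before public release

def may_access(group: str, level: DataLevel, obs_date: date, today: date) -> bool:
    """Illustrative rule: PI and consortium see every level, the public
    sees only the photon-list level after the proprietary period."""
    if group in ("PI", "CTA consortium"):
        return True
    if group == "public":
        return (level is DataLevel.PHOTON_LIST
                and today - obs_date >= PROPRIETARY_PERIOD)
    return False

print(may_access("public", DataLevel.PHOTON_LIST, date(2007, 3, 2), date(2009, 1, 1)))  # True
```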

14
  • General Remark
  • CTA staff physicists shall be allowed / encouraged to spend a reasonable part of their time doing their own physics within CTA (as PI or Co-I ...)

15
Design Studies
  • On the following slides, topics for design studies are marked in blue

16
Towards CTA Standard Analysis
  • 0) adapt H.E.S.S./MAGIC software to analyze CTA MC for the hardware design studies (partially done)
  • 1) define needs:
  • - underlying data format (FITS, ROOT, ...); see the sketch below
  • - interfaces to external data (for MWL analysis)
  • - what amount of RAW data must be archived for re-analysis (FADC slices? only pixels above threshold?) (might have to archive several PBytes/year)
  • - ....
  • 2) tools survey (what exists and can be re-used)
  • 3) start programming (rather early, to be ready for the first telescope!)
  • the package must be usable by non-experts -> KISS
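
To make the FITS option above concrete, here is a minimal sketch of a photon list written and read back as a FITS binary table, using astropy as one example of a FITS toolchain; the column names and units are illustrative assumptions, not a proposed CTA event format:

```python
# Minimal FITS photon-list sketch (illustrative columns only).
import numpy as np
from astropy.io import fits

# Write a toy photon list: arrival time, reconstructed energy and direction.
columns = [
    fits.Column(name="TIME",   format="D", unit="s",   array=np.array([0.1, 0.5, 1.2])),
    fits.Column(name="ENERGY", format="E", unit="TeV", array=np.array([0.3, 1.1, 4.7])),
    fits.Column(name="RA",     format="E", unit="deg", array=np.array([83.6, 83.7, 83.5])),
    fits.Column(name="DEC",    format="E", unit="deg", array=np.array([22.0, 22.1, 21.9])),
]
hdu = fits.BinTableHDU.from_columns(columns, name="EVENTS")
hdu.writeto("photon_list.fits", overwrite=True)

# Any FITS-aware astronomy tool can read the same file back, which is the
# main argument for FITS as an interface to external (MWL) data.
with fits.open("photon_list.fits") as hdul:
    events = hdul["EVENTS"].data
    print(events["ENERGY"])
```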

17
Towards MC Mass Production
  • For the hardware design studies (and during operation) we need a huge amount of MC -> urgent:
  • - tools survey
  • - optimize / speed up programs
  • GRID:
  • - exists (most probably mature soon)
  • - the EU has spent a lot on it -> success by definition
  • - at the moment a huge amount of unused CPU is available
  • MC is the easiest use of the GRID and can profit most from it (see the sketch below)
  • -> concentrate GRID activities in the MC package
  • Analysis software shall be GRID-aware, but not rely on it
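
The reason MC maps so well onto the GRID is that the production splits into fully independent jobs. The sketch below only illustrates this point; the executable name and command-line options are hypothetical placeholders, not any real simulation tool:

```python
# Sketch of splitting an MC production into independent GRID jobs: each job
# differs only in its run number and random seed, so no job needs to talk to
# any other one. The command line is a made-up placeholder.
def make_mc_jobs(n_showers_total: int, showers_per_job: int, energy_range=(0.02, 100.0)):
    """Yield one command line per independent MC job."""
    n_jobs = -(-n_showers_total // showers_per_job)   # ceiling division
    e_min, e_max = energy_range
    for run in range(n_jobs):
        yield (f"simulate_showers --run {run} --seed {1000 + run} "
               f"--nshowers {showers_per_job} --emin {e_min} --emax {e_max}")

# Each line could be wrapped into a GRID job description and submitted separately.
for cmd in make_mc_jobs(n_showers_total=10_000_000, showers_per_job=50_000):
    print(cmd)
```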

18
Towards Array Ctrl robotic
  • too complicated a system to make fully robotic:
  • hardware must be very(!) reliable
  • software must be rather simple (no bells and whistles...)
  • limited experience; need a test environment

19
Towards Array Ctrl Slow Ctrl
  • - centralized approach: a powerful central machine controls the individual (dumb) telescopes and always knows everything about anything ....
  • - distributed approach: each intelligent telescope runs independently; the Central Control just distributes tasks (e.g. schedule changes) and receives status info (but can always request any info); see the sketch below
  • - mixed mode
  • Design study: find the optimal solution ...
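
A minimal sketch of the distributed variant, with intelligent telescope controllers and a central control that only distributes tasks and collects status reports; all class and field names are illustrative, not a proposed interface:

```python
# Sketch of distributed slow control: each telescope keeps its own state and
# continues to work even if the link to central control drops for a while.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StatusReport:
    telescope_id: int
    state: str                      # e.g. "parked", "tracking", "error"
    current_target: Optional[str]

class TelescopeController:
    """Runs locally at each telescope."""
    def __init__(self, telescope_id: int):
        self.telescope_id = telescope_id
        self.state = "parked"
        self.current_target = None

    def assign_task(self, target: str) -> None:
        self.current_target = target
        self.state = "tracking"

    def report_status(self) -> StatusReport:
        return StatusReport(self.telescope_id, self.state, self.current_target)

class CentralControl:
    """Distributes schedule changes and collects (or requests) status info."""
    def __init__(self, controllers):
        self.controllers = controllers

    def broadcast_target(self, target: str) -> None:
        for ctrl in self.controllers:
            ctrl.assign_task(target)

    def collect_status(self):
        return [ctrl.report_status() for ctrl in self.controllers]

array = CentralControl([TelescopeController(i) for i in range(4)])
array.broadcast_target("Crab Nebula")
print(array.collect_status()[0])
```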

20
Towards Array Ctrl Trigger
  • - simplest approach:
  • - each telescope has its own local trigger
  • - the multi-telescope trigger just combines the local-trigger information of next neighbors (see the sketch below)
  • - very ambitious approach:
  • combine adjacent telescopes at the pixel level (technically feasible ? any advantage ???)
  • Design study: find the optimal solution ...
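
The simple next-neighbor scheme can be illustrated in a few lines of Python: accept an array event when two neighboring telescopes report local triggers within a short coincidence window. The geometry, the window, and the trigger times below are made up for illustration only:

```python
# Sketch of a next-neighbour multi-telescope trigger built from local triggers.
from itertools import combinations

COINCIDENCE_WINDOW_NS = 100.0           # assumed coincidence window

# Hypothetical next-neighbour relations of a small sub-array.
NEIGHBOURS = {(1, 2), (2, 3), (3, 4), (1, 3), (2, 4)}

def array_trigger(local_triggers) -> bool:
    """local_triggers maps telescope id -> local trigger time in ns.
    Return True if any pair of neighbours fired within the window."""
    for a, b in combinations(sorted(local_triggers), 2):
        if ((a, b) in NEIGHBOURS or (b, a) in NEIGHBOURS) and \
                abs(local_triggers[a] - local_triggers[b]) <= COINCIDENCE_WINDOW_NS:
            return True
    return False

print(array_trigger({1: 10.0, 2: 55.0, 4: 900.0}))   # True: telescopes 1 and 2 coincide
```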

21
Towards Array Ctrl DAQ
  • - centralized approach: the CC receives all raw data and writes combined CTA events containing several telescopes
  • - distributed approach: each telescope writes its own data stream (including trigger info) to a local or central file server; combining the data of adjacent telescopes is done at the analysis stage (see the sketch below)
  • Design study: find the optimal solution ...
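
To illustrate the distributed variant, the sketch below builds array events offline by merging per-telescope, time-ordered trigger streams and grouping entries that fall within a coincidence window; the numbers and the 100 ns window are illustrative assumptions, and real streams would of course carry full trigger and FADC information:

```python
# Sketch of offline event building for a distributed DAQ: per-telescope
# streams are merged by time, and array events are formed at analysis time.
import heapq

def build_array_events(streams, window_ns=100.0):
    """streams: {telescope_id: [trigger_time_ns, ...]}, each list time-ordered.
    Returns a list of array events, each a list of (time, telescope_id)."""
    merged = heapq.merge(*[((t, tel) for t in times) for tel, times in streams.items()])
    events, current = [], []
    for time, tel in merged:
        if current and time - current[0][0] > window_ns:
            events.append(current)
            current = []
        current.append((time, tel))
    if current:
        events.append(current)
    return events

streams = {1: [100.0, 5000.0], 2: [130.0, 5060.0], 3: [9000.0]}
for event in build_array_events(streams):
    print(event)
# -> [(100.0, 1), (130.0, 2)], then [(5000.0, 1), (5060.0, 2)], then [(9000.0, 3)]
```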

22
Towards Operation (center)
  • probably only a few CTA-specific requirements
  • -> need only a tools survey (and a decision)

23
Towards Data (center)
  • - define needs <- input from Cherenkov and astro experts
  • - tools survey
  • - use HEGRA data as a playground to prove the feasibility of the approach
  • possible extensions:
  • midterm: archive H.E.S.S./MAGIC data in a universal format for extended tests
  • longterm: allow data mining on old H.E.S.S./MAGIC data
  • very political decisions... important signal to the astro community...
  • the analysis package should also be tested on old data

24
Towards Data (center)
  • The Data center is the interface to the astro community (data mining !)
  • its basic design might not be crucial for us, but it should please the other users ...

25
Summary
  • - A lot of experience exists in all needed fields
  • - Important to combine and synchronize this knowledge
  • - Several decisions have to come out of the design studies: find optimal solutions (probably no show-stopper: a wrong solution will also work, but less efficiently and more expensively)
  • - Some tasks will need a rather long time between design study and final product (e.g. the full analysis package); to be ready in time, these design studies must be finished rather soon so that implementation can start ... (e-center call in 2008 ???)