Title: Panel D: Theory, computing facilities and networks, Virtual Observatory
1. Panel D: Theory, computing facilities and networks, Virtual Observatory
- Chair: Françoise Combes (Observatoire de Paris)
- Co-Chair: Paolo Padovani (ESO)
- Other Panel Members: Mark Allen (CDS, Strasbourg), James Binney (Oxford University), Marco de Vos (Dwingeloo), Åke Nordlund (Copenhagen), Matthias Steinmetz (Potsdam)
2. Overview
- Virtual Observatory: each facility should plan for an archive which is fully integrated in the VO and provides science-ready data
- Astrophysical Software Laboratory (ASL)
- Funding ensured for software development and support, user training, post-doc positions
- Software as an infrastructure, like major instruments
- Data grids could revolutionise data modelling
- Astronomy, through the VO, should lead
3. Panel D and the ASTRONET Roadmap
- Previous panels have discussed facilities, missions, telescopes, satellites, instruments, etc.
- A huge amount of data will be taken with those facilities (on top of the data which are already available)
- These data will need to be reduced and then archived, observations at various wavelengths will need to be compared, calculations will need to be done, theoretical models will need to be compared to data
- Panel D deals with the framework for all the activities that start when all the work discussed by the previous panels is completed and the fun starts!
4. (figure, courtesy of P. Quinn)
5. Virtual Observatory
- The Virtual Observatory (VO)
- VO as the glue between the various components of modern Astronomy
- Relevant both for observational and theoretical astronomy:
- better and easier access to all data (see the sketch below)
- standard access to numerical simulations and models
- integration between data and theory
- e-Infrastructure concept common to other disciplines (e.g., biology, geo-science, meteorology) with which Astronomy shares requirements
- VO still not fully operational but nevertheless deemed important for archives
- 84% of surveyed facilities have plans for (or already have) an archive; 53% of these plan to adopt VO standards (i.e., be VO-compliant)
- International effort (International Virtual Observatory Alliance) comprising 16 projects world-wide
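To make the "better and easier access to all data" point concrete, the sketch below shows what a query against an IVOA Simple Cone Search (SCS) service looks like: three standard GET parameters (RA, DEC, SR) and a VOTable response, the same for every compliant archive. It is a minimal sketch; the service URL is a hypothetical placeholder, not a real endpoint.

```python
# Minimal sketch of an IVOA Simple Cone Search (SCS) query using only the
# Python standard library.  SCS defines three GET parameters (RA, DEC, SR,
# all in decimal degrees) and returns a VOTable (XML) document.
# SERVICE_URL is a hypothetical placeholder, not a real archive endpoint.
import urllib.parse
import urllib.request

SERVICE_URL = "https://example-archive.org/scs"   # placeholder

def cone_search(ra_deg, dec_deg, radius_deg):
    """Query a VO-compliant archive around (ra, dec) and return the raw VOTable."""
    query = urllib.parse.urlencode({"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg})
    with urllib.request.urlopen(f"{SERVICE_URL}?{query}") as response:
        return response.read()   # same VOTable format for every compliant archive

if __name__ == "__main__":
    votable_xml = cone_search(ra_deg=202.47, dec_deg=47.23, radius_deg=0.1)
    print(votable_xml[:200])
```

The point of the protocol is that the client code does not change from one data centre to the next; only the service URL does.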
6. The 16 IVOA member projects world-wide (map): Armenia, Australia, Canada, China, Europe (EURO-VO), France, Germany, Hungary, India, Italy, Japan, Korea, Russia, Spain, UK, USA
7. Virtual Observatory
- VO in Europe
- Present:
- EURO-VO: VO-TECH and DCA projects to be completed in 2008
- European VO initiatives (EURO-VO: ESO, ESA; AstroGrid: UK; VO-France; GAVO and AstroGrid-D: Germany; Vobs.it: Italy; The Netherlands; SVO: Spain) detailed in Appendix VI.A. Overall, roughly 100 Full Time Equivalents have been involved in VO projects in Europe over the past 4 years
- On-going/near future: Astronomical Infrastructure for Data Access (AIDA) to be completed by mid-2010
- Science usage (science workshops, Science Advisory Committee)
- Transition to operations:
- Assistance in large-scale deployment of VO protocols and standards
- Data centres are ultimately responsible for building and maintaining archives and services (and this should be properly financed)
- Longer term (10 year) development: the VO to become part of the landscape (like the Web now) and to open up new capabilities (e.g., multi-wavelength data combination) and new discovery windows (e.g., the time domain)
8. Virtual Observatory
- VO-compliance: the VO only requires data centres to have a "VO layer" to translate any locally defined parameters to the standard (IVOA-compliant) ones (a minimal sketch of such a layer follows below)
- Advantageous:
- Users: interoperability!
- Providers: broadens the user base, exposes highly-processed data through VO protocols, new technology makes life easier
- Costs are small if planned from the beginning
- Production of science-ready data products should become the norm for data providers because:
- data processing is getting more and more complex
- it is important for public outreach and educational bodies (Panel E): the VO works best with them
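As an illustration of such a "VO layer", the sketch below shows how a data centre might map its locally defined column names onto IVOA-standard metadata (UCDs and units) without changing the underlying archive. The local names and the mapping are hypothetical; a real layer would follow the relevant IVOA standards in full.

```python
# Hypothetical sketch of a thin "VO layer": the data centre keeps its local
# column names and only maintains a mapping onto IVOA-standard metadata
# (UCD strings and units).  The local names and mapping are illustrative,
# not taken from any real archive.
LOCAL_TO_IVOA = {
    # local name   (UCD,                    unit)
    "src_ra":      ("pos.eq.ra;meta.main",  "deg"),
    "src_dec":     ("pos.eq.dec;meta.main", "deg"),
    "flux_soft":   ("phot.flux;em.X-ray",   "erg/(s.cm2)"),
}

def to_vo_record(local_row):
    """Translate one locally formatted row into VO-style (ucd, unit, value) fields."""
    return [
        {"name": name, "ucd": LOCAL_TO_IVOA[name][0],
         "unit": LOCAL_TO_IVOA[name][1], "value": value}
        for name, value in local_row.items()
    ]

print(to_vo_record({"src_ra": 187.7059, "src_dec": 12.3911, "flux_soft": 3.2e-12}))
```

Because only the mapping table is archive-specific, the cost of VO-compliance stays small if it is planned from the beginning, as stated above.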
9. Virtual Observatory
- VO software, tools, GRIDs, and the theory VO
- Computing: an essential part of the VO, but given the diversity of usage there is no single favoured computational architecture; Grid and Web Services IVOA Working Group
- GRID infrastructure needed: VO as a service and data grid
- Theory-VO:
- framework to publish results of simulations and models (Theory Working Group in IVOA)
- could also lead to the building of codes made up of modules in standard ways (see the sketch below)
- VO tools: no re-invention of the wheel, but legacy applications should be interfaced to the VO
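A possible reading of "codes made up of modules in standard ways" is sketched below: every physics module implements the same minimal interface, so modules can be chained or swapped, and the resulting pipeline could later be published through a Theory-VO service. All class and method names are illustrative, not part of any IVOA standard.

```python
# Illustrative sketch (not an IVOA standard): each physics module implements
# the same minimal interface, so a simulation pipeline can be assembled from
# interchangeable parts and later exposed through a Theory-VO service.
from abc import ABC, abstractmethod

class PhysicsModule(ABC):
    @abstractmethod
    def advance(self, state: dict, dt: float) -> dict:
        """Advance the simulation state by one time step and return the new state."""

class Gravity(PhysicsModule):
    def advance(self, state, dt):
        state["note"] = state.get("note", "") + " gravity"   # placeholder physics
        return state

class Cooling(PhysicsModule):
    def advance(self, state, dt):
        state["note"] = state.get("note", "") + " cooling"   # placeholder physics
        return state

def run_pipeline(modules, state, dt, n_steps):
    """Apply every module in turn at each step: the 'standard way' of chaining them."""
    for _ in range(n_steps):
        for module in modules:
            state = module.advance(state, dt)
    return state

final = run_pipeline([Gravity(), Cooling()], state={}, dt=0.01, n_steps=2)
print(final)
```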
10. Main VO Recommendations
- Public, VO-compliant (i.e., modern and interoperable) archives should be the norm. Data centres should aim to provide science-ready data
- Providers (software, theory, modelling) should make their tools compatible with the VO
- VO development should be in line with generic e-Infrastructure
- Modelling codes should be made modular so that they can easily be accessed through the VO
11. Computing facilities and networks
- Computing grids: huge progress in recent years
- Europe in the last 3-4 years was a little behind the USA
- But there are now big projects to set up a few Petaflop centres by 2010
- European countries have realized that they must unite forces to be competitive
- Special place of astronomy, with huge data flows foreseen in the near future: GOODS, VISTA, VST, VVDS, LOFAR, RAVE, GAIA, ALMA, SKA
- In particular, new technologies in radio-astronomy, where the acquisition of data itself requires huge computing power
- Networks (EVN), Virtual Observatory (leading position)
12. Exponential increase
- First machine: 478 TFlop/s, in Livermore (USA)
- Germany and Sweden in the top 10
- The USA has 60% of the first 500 machines
- Europe's share is now rising from 25% to 30%
- Within Europe, the UK, France and Germany are in the first places
13. Grand Challenge Codes
- the formation of stars and planetary systems
- solar and heliospheric physics
- the evolution and explosions of stars
- Black Hole physics on stellar and galactic scales
- formation and evolution of galaxies
- cosmology and the formation of large-scale structure
A factor of 10 in CPU power is expected in the next few years: several hundred Teraflop/s of sustained performance will become available past 2010 (today 30 Tflop/s is exceptional).
14. 3D simulations of an HII region (figure, G. Mellema)
15. Conclusions of other Committees
- ESFRI: a European High-Performance Supercomputing Centre is among the accepted proposals
- Pyramid structure, with local centres at the base, national and regional centres in the middle layer, and the high-end HPC centres at the top
- Astronomy is among the disciplines where High-Performance Computing is required
- PRACE, the Partnership for Advanced Computing in Europe, prepares the creation of a persistent pan-European HPC service, consisting of several tier-0 centres. PRACE is a project funded in part by the EU's 7th Framework Programme.
16. European Petaflop Centre
- Current initiatives in Europe towards the Peta-Centre:
- GENCI in France: a coordination initiative of CNRS, the Universities and CEA, building a new infrastructure
- Gauss Centre (GCS) in Germany: merging of the 3 largest centres (Jülich, Garching, Stuttgart), each of the order of 10 Tflops, with links upgraded to 40 Gbit/s (and 100 Gbit/s in the future)
- UK: Strategic Framework for HEC (High End Computing)
- NL: NWO/NCF; Huygens will replace Aster, etc.
17. Networks
- GEANT2 (2004-2008): the pan-European research and education network; 34 countries and 30 NRENs at 10 Gbps
- Also links to North America and Asia
- Architecture with one Tier-0 centre, tens of Tier-1 centres, and hundreds of Tier-2 computing centres (example of the LHC and CERN, with 15 Petabytes of data per year; see the worked example below)
- Dedicated or private lines at 10 Gbps (dark fibres)
- European VLBI for example (1 Gbps per station)
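For scale, a short back-of-the-envelope calculation (idealised: dedicated links, no protocol overhead, no downtime) relating the quoted figures to the 10 Gbps links mentioned above:

```python
# Back-of-the-envelope check: what sustained bandwidth does 15 PB/yr require,
# and how much data does a 1 Gbps VLBI station produce per day?
SECONDS_PER_YEAR = 365.25 * 24 * 3600

lhc_bytes_per_year = 15e15                       # 15 Petabytes
lhc_gbps = lhc_bytes_per_year * 8 / SECONDS_PER_YEAR / 1e9
print(f"LHC average rate: {lhc_gbps:.1f} Gbps")  # roughly 3.8 Gbps sustained

vlbi_station_gbps = 1.0                          # ~1 Gbps per e-VLBI station
vlbi_bytes_per_day = vlbi_station_gbps * 1e9 / 8 * 24 * 3600
print(f"One VLBI station, 24h: {vlbi_bytes_per_day / 1e12:.1f} TB")  # ~10.8 TB/day
```

Even these averaged numbers already use a sizeable fraction of a 10 Gbps research link, which is why dedicated lines and dark fibres are listed above.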
18. GRID COMPUTING
- Grid computing will not be at the cutting edge of computations, but is complementary to super-computers
- It will revolutionize data modelling
- Astronomers could be leaders there, using spare CPU in millions of processors
- Particle physics and the LHC data-processing challenge
- EGEE (Enabling Grids for E-sciencE), funded by the EU Commission
- 25% Particle Physics, but all sciences including astronomy
20. Examples of use
- GSTAT (Asia, USA, Canada, Russia...)
- 40,000 CPUs running 15,000 jobs, with 136,000 queuing (in October 2007!)
- (244 nodes, with 1000 CPUs per node; large budget)
- The code must be fixed and encrypted (Java based)
- The data to be transferred must not be too big
- The computations must be independent from each other (see the sketch below)
- Middleware to deal with priorities, constraints...
- Gravitational lens fitting
- Black hole hunting
- Modelling the GAIA catalogue
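The constraints listed above (fixed code, small data transfers, mutually independent computations) describe an embarrassingly parallel workload. The sketch below mimics that pattern on a single machine using Python's standard library; fit_lens_model is a hypothetical stand-in for a real lens-fitting code, and on a grid each call would become one job handled by the middleware.

```python
# Sketch of an embarrassingly parallel workload of the kind grids handle well:
# many independent model fits, each with small inputs and outputs.
# fit_lens_model is a hypothetical placeholder, not a real lens-fitting code;
# on a grid, each call would be one job submitted through the middleware.
from concurrent.futures import ProcessPoolExecutor

def fit_lens_model(params: dict) -> dict:
    """Placeholder 'fit': pretend chi^2 depends quadratically on the trial mass."""
    chi2 = (params["mass"] - 1.3) ** 2
    return {"mass": params["mass"], "chi2": chi2}

if __name__ == "__main__":
    # One small, self-contained parameter set per job: no job depends on another.
    jobs = [{"mass": 0.5 + 0.1 * i} for i in range(20)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(fit_lens_model, jobs))
    best = min(results, key=lambda r: r["chi2"])
    print("best-fit trial mass:", best["mass"])
```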
21. Software and Codes
- Census of a dozen main "work-horses" (ASH, CESAM, NBODY, GADGET, RAMSES, FLASH, ZEUS, PLUTO, PENCIL, CLOUDY, LORENE)
- Networks and Consortia (Astro-sim, many-body.org, NEMO, STARS, MODEST..., LENAC, VIRGO Consortium...)
- Implementing an open "Astrophysical Centre":
- an Integrated Large Infrastructure for Astrophysics
- Networking activities
- Joint Research Projects focussed on the various domains
- Codes must be considered as a fundamental infrastructure, requiring high computing power
22. Astrophysical Software Laboratory
- Motivate authors of codes to make them available
- Help users to understand them and their limitations
- Funds to develop software, and for training and forums (both to develop existing codes and to encourage the writing of new ones)
- Man-power estimated at 50 FTE (post-docs, steering committee)
- Encourage consortia and collaborations (Virgo, Horizon)
- The ASL could make proposals for the European supercomputers (via DEISA, DECI), to ensure a larger share for Astrophysics
23. Main Recommendations
- Hardware: Astronomy benefits greatly from continuing to share large cutting-edge supercomputer centres; 10% of a physics-oriented centre, always updated, at the forefront of the technology
- Software: Astrophysical Software Laboratory (ASL); knowledge exchange, training, man-power (PhDs, post-docs...); select proposals for Petascale computers
- Future instrumentation (e.g. LOFAR, LSST, GAIA, SKA) will require supercomputers and networks
- Data processing and VO: Astronomy could be a leader in Grid computing (as is the LHC in astro-particle physics)