1
The ALMA Software System
  • Joseph Schwarz (ESO), Allen Farris (NRAO), Heiko
    Sommer (ESO)

2
ALMA is
  • An array of 64 antennas, each 12 meters in
    diameter, which will work as an aperture
    synthesis telescope to make detailed images of
    astronomical objects.
  • The antennas can be positioned as needed, with
    baselines from 0.5 to 14 kilometers, giving the
    array a zoom-lens capability with angular
    resolution reaching 10 milliarcseconds.
  • A leap of over two orders of magnitude in both
    spatial resolution and sensitivity.
  • ALMA's great sensitivity and resolution make it
    ideal for medium-scale deep investigations of the
    structure of the submillimeter sky.
  • A joint project of the North American and
    European astronomical communities; Japan is
    likely to join the project in 2004.

Location: the Llano de Chajnantor, Chile, at an
altitude of about 5000 m.
Courtesy of Al Wootten, ALMA/US Project Scientist
3
Complete Frequency Access
4
South America
Where can such transparent skies be found?
(Map: the ALMA site marked on a Living Earth image
of South America)
5
ALMA Schedule
  • 2006: First production antenna on site
  • 2007 Q2: First production receiver (4-band) on
    site
  • 2007 Q3: First early ALMA science operations
  • 2008 Q4: Interim science operations
  • 2012 Q1: Construction complete; full science
    operations (64 antennas)

6
Software Scope
  • From the cradle:
  • Proposal Preparation
  • Proposal Review
  • Program Preparation
  • Dynamic Scheduling of Programs
  • Observation
  • Calibration & Imaging
  • Data Delivery & Archiving
  • ...to the afterlife:
  • Archival Research & VO Compliance

7
And it has to look easy
  • From the Scientific Software Requirements:
  • "The ALMA software shall offer an easy-to-use
    interface to any user and should not assume
    detailed knowledge of millimeter astronomy and of
    the ALMA hardware."
  • "The general user shall be offered fully
    supported, standard observing modes to achieve
    the project goals, expressed in terms of science
    parameters rather than technical quantities..."
  • The expert must still be able to exercise full
    control.
  • What is simple for the user will therefore be
    complex for the software developer.
  • The architecture should relieve developers of
    unnecessary complexity:
  • Separation of functional from technical concerns

8
The numbers
  • Baseline correlator h/w produces 1 Gbyte/s
  • Must reduce to average/peak data rates of 6/60
    Mbyte/s (baseline)
  • Raw (uv) data: 2/3, image data: 1/3 of the total
  • Implies 180 Tbyte/y to archive (see the
    arithmetic check below)
  • Archive access rates could be 5× higher (cf. HST)
  • Proposed 25/95 Mbyte/s to support correlator
    enhancements
  • Feedback from calibration to operations:
  • 0.5 s from observation to result (pointing,
    focus, phase noise)
  • Science data processing must keep pace (on
    average) with data acquisition
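
The quoted 180 Tbyte/y follows directly from the
6 Mbyte/s average rate; a quick sanity check in
Python, assuming decimal units and near-continuous
operation:

    # Sanity check: 6 Mbyte/s sustained for a year, in decimal units.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600    # ~3.16e7 s
    avg_rate_mbyte_s = 6                     # baseline average rate

    tbyte_per_year = avg_rate_mbyte_s * SECONDS_PER_YEAR / 1e6
    print(f"~{tbyte_per_year:.0f} Tbyte/y")  # ~189 Tbyte/y; a realistic duty
                                             # cycle brings this to ~180 Tbyte/y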

9
Separation of concerns
  • Functional: physics, algorithms, hardware
  • PIs can concentrate on their research
    specialties
  • Software encapsulates aperture-synthesis
    expertise
  • Technical: communications, databases, etc.
  • Subsystem teams should concentrate on function
  • Technical architecture should provide simple and
    standard ways to:
  • Access remote resources
  • Store and retrieve data
  • Manage security needs
  • Communicate asynchronously between subsystems
    and components

10
ALMA is distributed
(World map of development sites: MPI Bonn, ATC
Edinburgh, Jodrell Bank, Univ. Calgary, DRAO
Penticton, ESO, the ALMA ATF, NAOJ, NRAO, ALMA
Santiago, Arcetri, Brera, IRAM Grenoble, and Paris)
11
Run-time Challenges & Responses
  • Challenges:
  • Changing observing conditions
  • High data rates
  • Diverse user community (novice to expert)
  • Distributed hardware & personnel:
  • AOS antennas scattered 0.5-14 km from the
    correlator
  • AOS-OSF: operators are 50 km from the array
  • OSF-SOC-RSCs: PIs and staff separated from the
    OSF by thousands of km, often by many hours in
    time zone
  • Responses:
  • Dynamic Scheduler
  • Integrated, scalable Archive
  • Flexible observing tool, GUIs
  • High-speed networks
  • Distributed architecture
  • CORBA & CORBA services
  • Container/Component model
  • XML serialization

12
Development-time Challenges & Responses
  • Challenges:
  • Evolving requirements:
  • Changing data rates
  • New observing modes
  • New hardware (ACA)
  • IT advances
  • Distributed development
  • Different s/w cultures
  • Responses:
  • Iterative development
  • Modular, flexible design
  • Unified architecture (HLA):
  • Functional subdivisions aligned to existing
    project organization
  • Implemented via ACS
  • Don't do it twice
  • If you must do the same thing, do it the same way
    everywhere
  • E-Collaboration tools

13
Why dynamic scheduling?
14
Scheduling Block
  • Indivisible unit of observing activity
  • Can be aborted but not restarted in the middle
  • Independently calibratable
  • Can be queried:
  • What h/w do you need?
  • What conditions do you need?
  • Nominal execution time: 30 minutes
  • Scheduler repeats the selection process after
    each execution (see the sketch below)
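
A minimal sketch of the selection loop this implies;
the class, fields, and function names
(SchedulingBlock, next_block) are illustrative, not
the actual ALMA scheduler API:

    # Hypothetical sketch: pick the best scheduling block for current conditions.
    from dataclasses import dataclass

    @dataclass
    class SchedulingBlock:
        project: str
        min_antennas: int         # "What h/w do you need?"
        max_phase_noise: float    # "What conditions do you need?"
        score: float              # scientific priority

    def next_block(queue, antennas_available, phase_noise):
        """Highest-priority block that current hardware and weather allow."""
        candidates = [sb for sb in queue
                      if antennas_available >= sb.min_antennas
                      and phase_noise <= sb.max_phase_noise]
        return max(candidates, key=lambda sb: sb.score, default=None)

    # The scheduler re-runs this selection after every ~30-minute execution,
    # so a change in conditions simply changes which block wins next.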

15
System data flow
(Diagram: 0.5 s feedback time; 4-40 Mbyte/s data
rates)
16
The Archive at the Core
  • Just about everything goes here
  • Much more than what we usually think of as a
    science archive
  • High streaming in/out rates, lower random-access
    rates
  • Three types of data:
  • Bulk data: high volume, moderate number of
    records
  • Stored as binary attachments to VOTable headers
  • Monitor (engineering) data: moderate volume,
    large number of records
  • Value objects: low volume, complex searchable
    structures
  • Observing Projects & Scheduling Blocks
  • Configuration information
  • Meta-data providing the link to bulk data (e.g.,
    via VOTables)
  • Underlying DB technology hidden from subsystems
    (see the sketch below)
  • Can be replaced when necessary/convenient/desired
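
A minimal sketch of that last point, assuming a
simple put/get facade; the class and method names
are hypothetical, not the real ALMA Archive
interface:

    # Subsystems code against the facade; the backend can be swapped freely.
    from abc import ABC, abstractmethod

    class ArchiveBackend(ABC):
        """Underlying DB technology, hidden from subsystems."""
        @abstractmethod
        def put(self, uid: str, document: str) -> None: ...
        @abstractmethod
        def get(self, uid: str) -> str: ...

    class InMemoryBackend(ArchiveBackend):
        def __init__(self):
            self._store = {}
        def put(self, uid, document):
            self._store[uid] = document
        def get(self, uid):
            return self._store[uid]

    class Archive:
        """Subsystem-facing facade: value objects go in/out as XML documents;
        bulk data would be referenced from VOTable metadata, not inlined."""
        def __init__(self, backend: ArchiveBackend):
            self._backend = backend   # replaceable when necessary/desired
        def store(self, uid, xml_doc):
            self._backend.put(uid, xml_doc)
        def retrieve(self, uid):
            return self._backend.get(uid)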

17
(No Transcript)
18
ALMA Common Software (ACS)
  • Main vehicle for handling technical concerns
  • Framework for a distributed object architecture
  • Used all the way down to the device level in the
    control system
  • Built on CORBA, but hides its complexity:
  • Wraps selected CORBA services
  • Multi-language support: Java, C++, Python
  • Vendor-independent:
  • High-quality open-source ORBs available (e.g.,
    TAO)
  • System evolving to meet developers' needs:
  • Initial developer resistance
  • Developers now asking for more
  • Dedicated team of systems-oriented developers

19
Components & Containers
  • Component:
  • Deployable (inside a container) unit of ALMA
    software
  • Arbitrary number of components per subsystem
  • Functional interface defined in CORBA IDL
  • Well-defined lifecycle (initialization,
    finalization)
  • Focus on functionality, with little overhead for
    remote communication and deployment
  • Similar ideas in EJB, .NET, CCM
  • Container:
  • Centrally handles technical concerns and hides
    them from application developers:
  • Run-time deployment, start-up
  • Selected CORBA/ACS services (error handling,
    logging, configuration, ...)
  • Convenient access to other components and
    resources
  • New functionality can be integrated in the
    future w/o modifying application software

20
Container/Component Interfaces
(Diagram: a component "Comp" inside its container)
  • Functional interface: observe()
  • Lifecycle interface: init(), run(), restart()
  • Container service interface: other ACS services
  • Component's view: "My container starts and stops
    me and offers its services, some of which I don't
    notice"
  • Container's view: "I only care about the
    lifecycle interface of my components"
  • Manager: deployment configurations
  • Underneath: CORBA ORBs & services
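
A minimal Python sketch of this contract, using the
interface names from the diagram; the real system
defines these interfaces in CORBA IDL, so the
classes below are illustrative only:

    # The container drives the lifecycle; clients call the functional interface.
    class Component:
        # lifecycle interface: the only part the container cares about
        def init(self): ...
        def run(self): ...
        def restart(self): ...

    class AntennaComponent(Component):
        # functional interface (defined in CORBA IDL in the real system)
        def observe(self):
            print("observing...")

    class Container:
        """Starts/stops components and supplies services (logging, error
        handling, configuration) that components may not even notice."""
        def __init__(self):
            self.components = []
        def activate(self, comp):
            comp.init()               # container drives the lifecycle
            self.components.append(comp)
            comp.run()

    container = Container()
    container.activate(AntennaComponent())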
21
Data reduction pipelines
  • Baseline: AIPS++ as data reduction engine
  • Audited for compliance with ALMA requirements
  • Suitability for mm wavelengths verified with PdB
    data
  • Systematic benchmarking has led to major
    performance improvements
  • Re-architecting of the AIPS++ framework as ACS
    components, with Python replacing glish as the
    scripting language:
  • Phase A proof of concept completed
  • Python container implementation provided by ACS
    team

22
Role(s) of XML
  • Define data structure and content through XML
    schemas
  • Binding classes generated from the schemas:
  • Type-safe native-language access to data
  • Automatic validation possible
  • Exchange of value objects between subsystems:
  • In a language-independent way
  • Direct input to the archive
  • Encourages subsystem-specific data modelling

23
Binding XML Schemas
Castor is an open-source framework for binding
XML schemas to Java classes.
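
Castor generates the Java binding classes from the
schema automatically; as a rough hand-written
analogue in Python, binding means typed field
access plus XML round-tripping (SchedBlock and its
fields are invented here for illustration):

    # What a binding class buys you: typed fields instead of raw DOM strings.
    import xml.etree.ElementTree as ET
    from dataclasses import dataclass

    @dataclass
    class SchedBlock:
        name: str
        exec_time_min: int

        def to_xml(self) -> str:
            root = ET.Element("SchedBlock")
            ET.SubElement(root, "name").text = self.name
            ET.SubElement(root, "execTimeMin").text = str(self.exec_time_min)
            return ET.tostring(root, encoding="unicode")

        @classmethod
        def from_xml(cls, text: str) -> "SchedBlock":
            root = ET.fromstring(text)
            return cls(name=root.findtext("name"),
                       exec_time_min=int(root.findtext("execTimeMin")))

    sb = SchedBlock("M51 mosaic", 30)
    assert SchedBlock.from_xml(sb.to_xml()) == sb   # round trip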
24
Avoiding nasty surprises
  • Iterative development:
  • Monthly integrations
  • Releases every 6 months
  • Periodic CDRs:
  • Focus on plan for next release
  • To date:
  • Three major releases of ACS
  • System IDR, PDR, CDR1
  • May '03, R0: Test of build system, some code
  • Oct '03, R1: First integrated system test (albeit
    with minimal functionality)

25
Involving the user early
  • A scientist is assigned to every development
    team to:
  • Provide advice and help solve problems during
    development
  • Make sure requirements are met and are up to date
  • Evaluate status and redefine requirement
    priorities
  • Ensure subsystems interface properly
  • Perform periodic testing, evaluating
    software/operations from a science user
    perspective
  • To avoid: "It's beautiful software, but not what
    we wanted."
  • Participation doesn't end with the requirements
    doc

26
Testing strategy
  • Unit or automated tests:
  • Verify functionality of pieces of code
  • Responsibility of the developer (test-first
    encouraged)
  • JUnit, pyUnit, cppUnit, TAT (homebrew); see the
    pyUnit sketch below
  • Performance tests:
  • Automatic tests to ensure timing constraints are
    being met and data throughput is adequate
  • Stand-alone user tests:
  • Performed before subsystem releases, with
    adequate time to allow subsystem developers to
    respond
  • Integrated user tests:
  • Executed as soon as possible after integrated
    subsystem releases
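
A minimal pyUnit (unittest) example in the spirit of
the developer-owned automated tests; the function
under test is a stand-in invented for illustration:

    # Developer-owned unit test, runnable with "python test_rates.py".
    import unittest

    def clip_rate(rate_mbyte_s, peak=60):
        """Stand-in for code under test: clamp a data rate to the peak."""
        return min(rate_mbyte_s, peak)

    class ClipRateTest(unittest.TestCase):
        def test_below_peak_passes_through(self):
            self.assertEqual(clip_rate(6), 6)

        def test_above_peak_is_clamped(self):
            self.assertEqual(clip_rate(100), 60)

    if __name__ == "__main__":
        unittest.main()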

27
High-level Analysis & Design: One ring to bind
them all
  • Develop & maintain the system architecture
  • Foster its implementation
  • Cooperate closely with the ACS team
  • Oversee subsystem-subsystem interfaces
  • Guide planning for incremental releases
  • Collaborate with IT and SE to improve our
    development process

28
ALMA Posters & ACS Demo
  • P1.23, Transparent XML Binding using the ALMA
    Common Software (ACS) Container/Component
    Framework, H. Sommer et al.
  • P1.24, ALMA Proposal Preparation: Supporting the
    novice and the expert, A. Bridger et al.
  • P1.25, The ALMA Prototype Pipeline, L. Davis et
    al.
  • P1.26, The ALMA Archive: A centralized system for
    information services, A. Wicenec et al.
  • P1.27, Flexible Storage of Astronomical Data in
    the ALMA Archive, H. Meuss et al.
  • P1.28, Dynamic Scheduling in ALMA, A. Farris &
    S. Roberts
  • P1.29, ALMA On-Line Calibration Software, R.
    Lucas et al.
  • D8, Generic abstraction of hardware control based
    on the ALMA Common Software, B. Jeram et al.