Transcript and Presenter's Notes

Title: ARDA Report to the LCG SC2


1
ARDA Report to the LCG SC2
  • Philippe Charpentier
  • For the RTAG-11/ARDA group

2
ARDA Mandate

3
ARDA Schedule and Makeup
  • Alice: Fons Rademakers and Predrag Buncic
  • Atlas: Roger Jones and Rob Gardner
  • CMS: Lothar Bauerdick and Lucia Silvestris
  • LHCb: Philippe Charpentier and Andrei
    Tsaregorodtsev
  • LCG GTA: David Foster (stand-in: Massimo Lamanna)
  • LCG AA: Torre Wenaus
  • GAG: Federico Carminati

4
ARDA Distributed Analysis Services
  • Distributed Analysis in a Grid Services based
    architecture
  • ARDA Services should be OGSI compliant -- built
    upon OGSI middleware
  • Frameworks and applications use ARDA API with
    bindings to C, Java, Python, PERL
  • Interface through UI/API factory --
    authentication, persistent session
  • Fabric interface to resources through CE and SE
    services
  • Job description language based on Condor ClassAds
    and matchmaking (see the sketch below)
  • Database(s) behind a DBase Proxy provide
    statefulness and persistence
  • We arrived at a decomposition into the following
    key services
  • API and User Interface
  • Authentication, Authorization, Accounting and
    Auditing services
  • Workload Management and Data Management services
  • File and (event) Metadata Catalogues
  • Information service
  • Grid and Job Monitoring services
  • Storage Element and Computing Element services
  • Package Manager and Job Provenance services
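A rough illustration of the points above: the sketch
below builds a ClassAds-style job description in
Python and shows, commented out, how it might be
handed to the ARDA API through its Python binding.
The attribute names and the arda.SessionFactory call
are illustrative assumptions, not part of any agreed
ARDA interface.

  # Build a minimal ClassAds-style job description (attribute names are
  # illustrative; only the string construction below is plain Python).
  job_ad = {
      "Executable":    '"analysis.exe"',
      "Arguments":     '"--selection ptMin=20"',
      "InputData":     '{"LF:higgs/aod/run1234"}',
      "OutputSandbox": '{"histos.root", "job.log"}',
      "Requirements":  'other.MaxCPUTime > 720',
      "Rank":          '-other.EstimatedResponseTime',
  }
  jdl = "[\n" + ";\n".join("  %s = %s" % kv for kv in job_ad.items()) + ";\n]"
  print(jdl)

  # Hypothetical submission through a Python binding of the ARDA API:
  # session = arda.SessionFactory().authenticate(proxy="x509_user_proxy")
  # job_id = session.workload.submit(jdl)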

5
ARDA Key Services for Distributed Analysis
6
ARDA roadmap to Distributed Analysis
  • HEPCAL-II Use Case Group Level Analysis (GLA)
  • User specifies job information, including
    • Selection criteria
    • Metadata Dataset (input)
    • Information about s/w (library) and
      configuration versions
    • Output AOD and/or TAG Dataset (typical)
    • Program to be run
  • User submits job
  • Program is run (see the sketch below)
    • Selection criteria are used for a query on the
      Metadata Dataset
    • Event IDs satisfying the selection criteria and
      the Logical Dataset Names of the corresponding
      Datasets are retrieved
    • Input Datasets are accessed
    • Events are read
    • Algorithm (program) is applied to the events
    • Output Datasets are uploaded
    • Experiment Metadata is updated
  • Report summarizing the output of the jobs is
    prepared for the group (e.g. how many events to
    which stream, ...), extracting the information
    from the application and the Grid middleware
  • Services invoked across these steps:
    Authentication, Authorization, Metadata catalog,
    Workload management, Package manager, Compute
    element, File catalog, Data management, Storage
    Element, Job provenance, Auditing
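As a concrete reading of the GLA steps above, here is
a small, self-contained Python outline in which plain
dictionaries stand in for the Metadata and File
catalogs; every name in it is illustrative and not an
ARDA interface.

  # Toy stand-ins for the Metadata catalog and the File catalog
  # (illustrative only; the real catalogs are Grid services).
  metadata_catalog = [
      {"event_id": 1, "pt": 25.0, "lfn": "LF:aod/run1234/file01"},
      {"event_id": 2, "pt": 12.0, "lfn": "LF:aod/run1234/file01"},
      {"event_id": 3, "pt": 31.0, "lfn": "LF:aod/run1234/file02"},
  ]
  file_catalog = {
      "LF:aod/run1234/file01": "/storage/aod/file01.root",
      "LF:aod/run1234/file02": "/storage/aod/file02.root",
  }

  def run_group_level_analysis(selection):
      # Selection criteria are used for a query on the Metadata Dataset.
      selected = [row for row in metadata_catalog if selection(row)]
      # Event IDs and Logical Dataset Names are retrieved; the File
      # catalog resolves the latter to physical replicas.
      lfns = sorted({row["lfn"] for row in selected})
      replicas = [file_catalog[lfn] for lfn in lfns]
      # Events would be read and the algorithm applied; output Datasets
      # uploaded and the experiment Metadata updated.  Here we only
      # prepare the summary report for the group.
      return {"selected_events": len(selected), "input_files": replicas}

  report = run_group_level_analysis(lambda row: row["pt"] > 20.0)
  print(report)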

7
API to Grid services
  • Importance of UI/API: interface services to
    higher-level software
    • Experiment frameworks
    • Analysis shells, e.g. ROOT
    • Grid portals and other forms of user interaction
      with the environment
    • Advanced services, e.g. virtual data, analysis
      logbooks, etc.
  • Provide experiment-specific services
    • Data and Metadata management systems
  • Provide an API that others can program against
  • Benefits of a common API to the frameworks
    • Goes beyond traditional UIs à la GANGA, Grid
      portals, etc.
    • Benefits in interfacing to analysis applications
      like ROOT et al.
  • Process to get a common API between experiments ->
    prototype (a sketch of such an API follows below)
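To make the notion of a common API more tangible, the
following is a minimal, hypothetical Python sketch of
an interface that experiment frameworks, analysis
shells and portals could program against; none of the
class or method names come from the ARDA report
itself.

  from abc import ABC, abstractmethod

  class AnalysisSession(ABC):
      """Hypothetical common API surface for distributed analysis.
      All names are illustrative, not agreed ARDA definitions."""

      @abstractmethod
      def authenticate(self, credentials: str) -> None:
          """Obtain a persistent, authenticated session (UI/API factory)."""

      @abstractmethod
      def query_metadata(self, selection: str) -> list:
          """Return logical dataset names matching the selection criteria."""

      @abstractmethod
      def submit(self, job_description: str) -> str:
          """Hand a ClassAds-style job description to workload
          management and return a job identifier."""

      @abstractmethod
      def status(self, job_id: str) -> str:
          """Ask the job monitoring service for the job's state."""

An experiment framework or a Ganga-like UI would then
target one concrete implementation per middleware
back-end, keeping the analysis code independent of
the Grid flavour underneath.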

8
API and User Interface

9
ARDA and Grid services architecture
  • OGSI gives a framework in which to run ARDA
    services
  • Addresses the architecture for communication,
    lifetime support, etc.
  • Provides framework for advanced interactions with
    the Grid
  • This is outside of the analysis services API,
    but must be implemented in standard ways
  • Need to address issues of OGSI performance and
    scalability up-front
  • Importance of modeling, plan for scaling up,
    engineering of underlying services infrastructure

10
Roadmap to a GS Architecture for the LHC
  • Transition to grid services explicitly addressed
    in several existing projects
  • Clarens and Caltech GAE, MonaLisa
  • Based on web services for communication,
    Jini-based agent architecture
  • Dirac
  • Based on intelligent agents working within
    batch environments
  • AliEn
  • Based on web services and communication to
    distributed database backend
  • Initial work on OGSA within LCG-GTA
  • GT3 prototyping
  • Leverage the experience gained in Grid M/W R&D
    projects

11
On the road again
  • No evolutionary path from GT2-based grids
  • David Foster at the June 24th POB
    • We have a complex software infrastructure that
      needs simplifying
    • Cannot simply incrementally improve the software
      we have
    • Based on the Globus GT2 design (which is being
      replaced by OGSA GT3)
  • Augment LCG-1 and other grid services
  • ARDA Services deployed and run together with
    existing ones on LCG1 resources
  • Keep possibility to bridge to existing services
    if feasible
  • Grid connectivity rather than interoperability
  • Use invaluable experience of LCG1 deployment for
    deploying ARDA
  • ARDA provides decomposition into those services
    that address the LHC distributed analysis use
    cases
  • Recommendation: build an early prototype based on
    refactoring existing implementations

12
ARDA Roadmap for Prototype
  • Prototype provides the initial blueprint
  • Do not aim for a full specification of all the
    interfaces
  • Four-pronged approach
    • Re-factoring of AliEn, Dirac and possibly other
      services into ARDA
      • Initial release with OGSILite/GT3 proxy,
        consolidation of the API, release
      • Implementation of agreed interfaces, testing,
        release
      • GT3 modeling and testing, eventually quality
        assurance
    • Interfacing to LCG-AA software like POOL and to
      analysis shells like ROOT
      • Also an opportunity for early interfacing to
        complementary projects
    • Interfacing to experiment frameworks
      • Metadata handlers, experiment-specific services
    • Provide interaction points with the community
      • Early releases and workshops every few months
      • Early, strong feedback on API and services
      • Decouple from deployment issues

13
Experiments and LCG Involved in Prototyping
  • The ARDA prototype would define the initial set of
    services and their interfaces; timescale: spring
    2004
  • Important to involve experiments and LCG at the
    right level
  • Initial modeling of GT3-based services
  • Interface to major cross-experiment packages:
    POOL, ROOT, PROOF, others
  • Program experiment frameworks against ARDA API,
    integrate with experiment environments
  • Expose services and UI/API to other LHC projects
    to allow synergies
  • Spend appropriate effort to document, package,
    release, deploy
  • After the prototype is delivered, improve on it
    • Scale up and re-engineer as needed: OGSI,
      databases, information services
    • Deployment and interfaces to site and grid
      operations, VO management, etc.
    • Build higher-level services and
      experiment-specific functionality
    • Work on interactive analysis interfaces and new
      functionalities

14
Possible Strawman
  • Strawman workplan for ARDA prototype

15
Setting up the project
  • Propose that ARDA now become an LCG project
  • Project should start with a definition of the
    work areas identifying where the effort will come
    from
  • Core development team: 2-3 good (experienced)
    people plus 1 person from each experiment
  • Rough estimate: a total effort of some 10-15
    people for the 6-month timescale to be practical
  • Relevant experience and manpower coming from AliEn
    and Dirac developers, other LHC experiments, GTA,
    AA, ...
  • Alice and LHCb need to evaluate the impact on
    AliEn/Dirac planning and make a strong commitment
    to provide the relevant expertise

16
Process within the LHC experiments
  • Alice
  • Was discussed in the Core Offline and the Offline
    board
  • Support was enthusiastic
  • Evaluating the impact on the previously foreseen
    work-plan in AliEn
  • Full commitment to the project and prototype
    development
  • LHCb
  • Early commitment from the Dirac development team
  • Dirac is already based on the concept of services
  • Was part of the Dirac plan to move to OGSI
    services
  • This move is ongoing in order to prepare for ARDA
    integration
  • Leverages many issues (e.g. file catalog, VO
    organization)
  • Evaluation going on of LHCb file catalog and
    bookkeeping using AliEn
  • Commitment to early definition of the API and to
    implementation of the Python binding (needed for
    Dirac and Ganga)

17
Process within the LHC experiments
  • ATLAS
  • Discussed extensively within ATLAS (meetings,
    mailing lists)
  • Agreed that AliEn tries to cover most services in
    one product
  • Various products used by ATLAS together span a
    similar set of services
  • Not an endorsement of AliEn as an implementation
    of any particular service
  • Willing to contribute to services and interfaces
    definition
  • Need for early publishing, allowing parallel
    implementations and comments
  • Prototype in the short term, followed by
    alternative implementations

18
Process within the LHC experiments
  • CMS
  • Good interactions with CMS: dedicated meeting on
    ARDA during the September CMS Week; involvement of
    the Computing Core Software group (CCS), the CMS
    Architect, CMS-related Grid projects, the CMS
    production team, and the Physics Reconstruction
    and Selection groups (PRS)
  • Discussions on interfacing to framework, metadata
    system and higher level services, like virtual
    data etc
  • Willing to contribute to services and interfaces
    definition
  • Not an endorsement of AliEn as final solution
  • Agreement on and endorsement of the ARDA prototype
    approach as valuable; importance of early feedback
    to see what works and what does not
  • The impact on the ability of LCG and the
    experiments to keep the timetable for the
    Computing TDRs has to be evaluated

19
Synergy with other Projects
  • How ARDA maps to Ganga

20
Synergy with other Projects
  • Mapping ATLAS grid components to ARDA

21
Synergy with other Projects
  • CMS Grid components and ARDA
  • [Diagram: CMS Grid components mapped onto the ARDA
    services -- COBRA, Octopus/RefDB, Octopus,
    LCG-VOMS, Octopus/McRunJob, ImpalaLite/CMSProd,
    EDG-UI, COBRA/API, MonaLisa, Grid-ICE, POOL/COBRA,
    EDG RB, RSL, LCG Replica Manager, DAR/SCRAM,
    PacMan, VDT-EDT, LCG-1, Octopus/BOSS]
22
Recommended action list
  • Identify project lead -> PEB
  • Develop ARDA work plan, schedule, milestones ->
    PEB, SC2
  • Identify effort and build team(s) -> Project
    Leader
  • Develop plan for interfacing to and engaging the
    LHC experiments and the Grid community
    -> team, experiments, EGEE, VDT
  • Release ARDA RTAG document -> ARDA-RTAG