1
Open Science Grid Interoperability
2
Federation and Interoperability
  • Federation: the act of constituting a political
    unity out of a number of separate states,
    colonies, or provinces, so that each member
    retains the management of its internal affairs
  • A few basic policies, rules, decisions, and
    technologies that are agreed upon
  • Interoperable: pertaining to systems that work
    together or communicate
  • Might involve work to build adapters (see the
    sketch below)
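
Where interfaces differ, interoperability often comes down
to a thin adapter that translates one system's interface
into another's. A minimal Python sketch of the idea (every
class and method name here is hypothetical, purely for
illustration):

    # Hypothetical native client for Grid A: submit(executable, args)
    class GridAClient:
        def submit(self, executable, args):
            print(f"Grid A running {executable} {' '.join(args)}")

    # Hypothetical interface that Grid B tooling expects
    class GridBJobRunner:
        def run_job(self, description):
            raise NotImplementedError

    # Adapter: presents Grid B's interface, delegates to Grid A
    class GridAToBAdapter(GridBJobRunner):
        def __init__(self, client):
            self.client = client

        def run_job(self, description):
            # Translate the B-style job description into an A-style call
            self.client.submit(description["executable"],
                               description.get("arguments", []))

    runner = GridAToBAdapter(GridAClient())
    runner.run_job({"executable": "/bin/echo", "arguments": ["hello", "grid"]})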

3
Grid3 in the U.S.
  • We built a functioning demonstration Grid in the
    U.S., strongly encouraged by NSF and supported
    through iVDGL, GriPhyN, and PPDG (DOE), as well
    as US-CMS, US-ATLAS, LIGO, SDSS, the University
    of Buffalo, LBNL, some biology applications, and
    more. Based on a simple-to-install VDT package
  • Allowing diversity at different sites
  • It worked, it keeps working, and it is being used
    by ATLAS and CMS for their data challenges
  • http://ivdgl.org/Grid3

4
(No Transcript)
5
Grid3 Shared Use Over 6 Months
[Chart: usage (CPUs) over six months, through Sep 10]
6
Open Science Grid Consortium
  • What is it?
  • It is NOT a project (unlike EGEE, iVDGL,
    TeraGrid, etc.)
  • It is a collaboration: a Consortium of many
    institutions, including universities and labs,
    projects, experiments, middleware providers, and
    campus Grids
  • All of whom want to leverage their efforts by
    joining together to build a sustainable
    infrastructure for physics and other sciences

7
Open Science Grid Consortium
  • Enables US-CMS and US-ATLAS to work together in a
    highly coordinated way
  • They can provide resources to ATLAS and CMS
    through the Open Science Grid as participants in
    the global LHC Computing Grid
  • We are seeing partners, and interest in joining
    this Grid, from outside physics and outside the
    US as well:
  • Korea
  • Brazil
  • Taiwan
  • ?
  • Joining OSG isn't exclusive. We believe resource
    providers can be members of many Grid
    infrastructures

8
The Blueprint and Roadmap
  • Agreement by the architects on some fundamental
    goals
  • Aligned with and informed by the EGEE
    architecture and design
  • Illustrative principles:
  • Common interfaces, not necessarily common
    implementations
  • Autonomy of sites and resources
  • Overriding Policy Enforcement Points
  • Move to a pull model for work to be done (see the
    sketch below)
  • Organization-based environment, services, and
    management
  • Common services with VO-specific instantiations
  • A mix of assured and opportunistic use
  • Varied expectations on resource availability,
    performance, and support
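
The pull model means a site fetches work from a VO's queue
when it has free capacity, rather than a central scheduler
pushing jobs onto sites; this keeps sites autonomous about
when and how much work they accept. A toy Python sketch of
the idea (all names are hypothetical; in practice this is
done with pilot-job frameworks talking to a networked task
queue):

    import queue

    # Stand-in for a VO's task queue; in reality a network service
    tasks = queue.Queue()
    for i in range(5):
        tasks.put(f"simulate-event-batch-{i}")

    def worker(site_name):
        # Pull model: the site asks for work when a slot frees up,
        # so site-local policy controls when and how much it takes
        while True:
            try:
                task = tasks.get_nowait()
            except queue.Empty:
                break  # no work available right now
            print(f"{site_name} running {task}")
            tasks.task_done()

    worker("site-A")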

9
OSG Infrastructure - a Federated Grid of Grids
  • A heterogeneous infrastructure made coherent by a
    set of interfaces, core services, and policies
  • Campus, facility, experiment, (commercial?), and
    other Grids present (existing) resources and
    services that can be shared across local and
    wide-area Grids, and also support direct,
    non-grid-based interfaces
  • Organizations (VOs) present a transparent
    distributed execution and storage environment to
    the users, one that spans Grid and facility
    boundaries

10
http://www.opensciencegrid.org
11
OSG Organization
[Organization chart]
  • OSG Council (all members above a certain
    threshold; Chair, officers)
  • Executive Board (8-15 representatives: Chair,
    Officers)
  • Core OSG Staff (a few FTEs, a manager)
  • Technical Groups
  • Advisory Committee
  • Members and stakeholders: Universities, Labs,
    Service Providers, Sites, Researchers, VOs,
    Research Grid Projects, Enterprise
12
OSG Technical Groups
13
OSG Activities
14
Interoperability and Federations
  • Interoperability demonstrations between OSG and
    LCG-2
  • The TeraGrid Grid Integration Group (GIG) plans
    to interoperate with OSG
  • Milestones are included in their proposal
  • Many LCG/EGEE contacts with OSG activities:
    Security, Operations, Storage, Interoperability,
    Accounting

15
Core Infrastructure Services and Middleware
  • Core middleware packages based on VDT 1.3.6
  • Added EDG/EGEE VOMS, LBNL DRM, and the Privilege
    Services (GUMS, Prima)
  • Integration and testing provided by the
    multi-platform NMI support infrastructure
  • US CMS Tier-1 and Tier-2 Storage Elements
    accessible via SRM Grid interfaces
  • Support for opportunistic use of this storage by
    non-US-CMS VOs
  • VO role-based authorization through Gatekeeper
    and GridFTP standard callouts to Prima
  • Site-based, dynamic, role-based account
    management and policies applied using GUMS (see
    the sketch below)
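
To illustrate what role-based, dynamic account mapping
involves, here is a toy sketch of the mapping decision.
GUMS itself is a real service with its own configuration
and interfaces; none of the names below are its actual
API, and the policy table is invented for illustration:

    # Toy site policy: map (VO, role) to a fixed group account, or to
    # a pool of local accounts handed out dynamically, one per user DN
    POLICY = {
        ("uscms", "production"): {"pool": ["cmsprd01", "cmsprd02", "cmsprd03"]},
        ("uscms", None):         {"account": "uscms"},
        ("usatlas", None):       {"account": "usatlas"},
    }
    assigned = {}  # DN -> pool account, sticky across calls

    def map_user(dn, vo, role=None):
        rule = POLICY.get((vo, role)) or POLICY.get((vo, None))
        if rule is None:
            raise PermissionError(f"no mapping for {vo}/{role}")
        if "account" in rule:      # shared group account
            return rule["account"]
        if dn not in assigned:     # assign the next free pool account
            used = set(assigned.values())
            free = [a for a in rule["pool"] if a not in used]
            if not free:
                raise PermissionError("account pool exhausted")
            assigned[dn] = free[0]
        return assigned[dn]

    print(map_user("/DC=org/DC=doegrids/CN=Alice", "uscms", "production"))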

16
Integration Activity and Testbed
  • The specific program of work is driven by the
    stakeholders and by the OSG technical groups and
    activities
  • Integrating middleware services from technology
    providers targeted for the OSG
  • Providing a testbed for evaluation and testing of
    new services and applications, and to test and
    exercise installation and distribution methods
  • Providing feedback to service providers and VO
    application developers
  • Preparing release candidates for provisioning
  • An iterative activity involving simultaneous
    activity at many sites, plus workshops where real
    work is done

17
OSG Integration Testbed
18
Towards OSG Deployment
  • Maintaining Grid3 operations
  • In parallel with extending Grid3 to OSG
  • The OSG Integration Testbed (ITB) has 20 sites,
    and the ITB infrastructure has had 5 releases
  • ATLAS, CMS, STAR, CDF, D0, BaBar, and fMRI are
    readying their applications and infrastructure
    for the common grid
  • CDF ran simulation on an OSG ITB site
  • D0 is running re-reconstruction on the US CMS
    Tier-1 and Grid3
  • STAR is running on the BNL and LBNL ITB sites
  • SLAC, FermiGrid, and PDSF facilities
  • Deployment has now started, to support
    applications from 10 VOs and the transition from
    Grid3

http://osg.ivdgl.org/twiki/bin/view/Integration/WebHome
19
Grid Bazaar: new partners contributing
  • Dartmouth: psychology and brain sciences research
  • Co-chair of the Policy Group
  • GRASE VO, with a basket of applications from the
    HPC consortium
  • Co-chair of the Monitoring Group
  • New storage management solutions?
  • NFSv4 - University of Michigan
  • IBP, advanced storage management - Vanderbilt
  • Co-chair of the Networks and Storage Group
  • Interest from DOSAR, SURA, etc.
  • Certificate handling, accounting, and planning
    from diverse small groups

20
OSG deployment landscape
[Diagram: OSG deployment at the center, surrounded
by the VOs' applications, Arch, MIS, and Policy, and
by the Technical Groups and their Chairs (TG
MonInfo, TG Policy, TG Storage, TG Security, TG
Support Centers); the Support Centers Technical
Group oversees the Operations Activity (Ops)]
21
Path to Operations
[Flow diagram linking: OSG Integration Activity;
readiness plan (effort, resources); readiness plan
adopted; VO application software installation;
software packaging; OSG Deployment Activity; service
deployment; OSG Operations-Provisioning Activity;
release candidate; application validation;
middleware interoperability; functionality and
scalability tests; feedback; metrics and
certification; release description]
22
Operations and Support Activities
  • Operations runs grid-wide services, including
    provisioning and installation of middleware, and
    provides operational support for those services
    and for the resource providers and users running
    on OSG
  • Support Centers coordinate and track problems for
    service providers, security incidents, and
    requests for assistance
  • Both monitor the status of resources and services
  • Operations publishes accounting and status
    information and monitors policy compliance
  • All the support centers hosting services and
    providing user support for OSG contribute

23
(No Transcript)
24
Operations Infrastructure
  • Trouble Ticketing system and interface
  • Monitoring tools development and maintenance
  • Accounting services
  • Discovery services
  • Identity services
  • Grid information index
  • Grid Catalog
  • VO-level services for monitoring services
  • Knowledge base
  • Mailing Lists
  • Formal and collaborative web information
    repositories

25
Operations Organizations
  • Participants
  • US CMS Tier 1 Support Center
  • US Atlas Tier 1 Support Center
  • ESnet DOEGrids CA services
  • VDT at UW services
  • FermiLab Support Center
  • Indiana University Global NOC
  • iVDGL Grid Operations Center
  • Campus Grids that are part of OSG: GLOW, GRASE,
    FermiGrid

26
Operations Organizations: interactions with partners
  • LCG: we participate in Operations Workshops
    together, to work out problem exchange and
    resolution policies, etc.
  • EGEE Operations will interact with OSG Operations
  • LCG will interface to the ATLAS and CMS Tier-1
    centers in the US
  • Each is somewhat equivalent to a ROC, but the
    details and terminology differ from LCG's,
    because the OSG Consortium is organized a bit
    differently than LCG
  • TeraGrid: interoperation and common-project
    discussions started at the All-Hands meeting and
    the Operations Workshop

27
Current issues
  • Space management of storage elements
  • Diagnostic tools
  • Auditing and accounting: user, facility,
    organization
  • Evolution to new technologies while supporting
    running applications is a challenge
  • Define, publish, and understand all policies
  • VO infrastructure: can we make the current design
    support light-weight VOs?
  • Scalability: a continuous learning process

28
Interoperability and Federation
  • Transparent use of federated Grid infrastructures
    is part of the roadmap of the LHC experiments (we
    believe)
  • There are sites that appear as part of LCG as
    well as part of OSG/Grid3
  • CMS and ATLAS can run their jobs on both LCG and
    OSG
  • CMS and ATLAS simulation jobs are running on
    TeraGrid

29
Federation and Interoperability
  • Lots of progress has been made
  • Basic interoperability between OSG and LCG has
    been demonstrated
  • There is a common base for a Federation of OSG,
    LCG, and TeraGrid
  • and of many other Grids based on VDT (or Globus)

30
Grid vision: Federation and Interoperability
  • The Grid vision is a work in progress, and all
    those with energy and ideas need to step forward
    and work together
  • We can and will make it work
  • At least as well as the detectors work!
  • Let's keep working at it and working together,
    and let's keep choosing common services and
    defining common interfaces wherever we can,
    across all of the global Grid efforts