1
LHC Computing Grid Project: Goals, Organisation & Status
  • LHCC
  • 21 November 2001
  • Les Robertson
  • CERN - IT Division
  • les.robertson@cern.ch

2
  • Background

3
  • Worldwide distributed computing system
  • Small fraction of the analysis at CERN
  • ESD analysis using 12-20 large regional centres
  • how to use the resources efficiently
  • establishing and maintaining a uniform physics
    environment
  • Data exchange with tens of smaller regional
    centres, universities, labs

Importance of cost containment
  • components & architecture
  • utilisation efficiency
  • maintenance, capacity evolution
  • personnel & management costs
  • ease of use (usability efficiency)

4
The MONARC Multi-Tier Model (1999)
5
LHC Computing Model 2001 - evolving
(Diagram labels: the opportunity of Grid technology; the LHC Computing Centre.)
6
  • The Project

7
The LHC Computing Grid Project
Goal: Prepare and deploy the LHC computing environment
  • applications - tools, frameworks, environment
  • computing system → services
  • cluster → fabric
  • collaborating computer centres → grid
  • CERN-centric analysis → global analysis environment
  • foster collaboration & coherence of LHC computing centres
  • This is not yet another grid technology project - it is a grid deployment project

8
The LHC Computing Grid Project
  • Two phases
  • Phase 1 - 2002-04
  • Development and prototyping
  • Approved by CERN Council, 20 September 2001
  • Phase 2 - 2005-07
  • Installation and operation of the full world-wide initial production Grid
  • Costs (materials & staff) included in the LHC cost-to-completion estimates

9
The LHC Computing Grid Project
Phase 1 Goals
  • Prepare the LHC computing environment
  • provide the common tools and infrastructure for
    the physics application software
  • establish the technology for fabric, network and
    grid management
  • buy, borrow, or build
  • develop models for building the Phase 2 Grid
  • validate the technology and models by building
    progressively more complex Grid prototypes
  • maintain reasonable opportunities for the re-use
    of the results of the project in other fields
  • Operate a series of data challenges for the experiments
  • Deploy a 50% model production GRID including the committed LHC Regional Centres
  • Produce a Technical Design Report for the full
    LHC Computing Grid to be built in Phase 2 of the
    project

(50% model = 50% of the complexity of one of the LHC experiments)
10
Areas of Work
  • Applications Support & Coordination
  • Computing System
  • Grid Technology
  • Grid Deployment

11
Applications Support & Coordination
  • Application Software Infrastructure - libraries, tools
  • Object persistency, data management tools, data models
  • Common Frameworks - Simulation, Analysis, ...
  • Adaptation of Physics Applications to Grid
    environment
  • Grid tools, Portals

12
Computing System
  • Physics Data Storage and Management
  • Fabric Management
  • LAN Management
  • Wide-area Networking
  • Security
  • Internet Services

13
Grid Technology
  • Grid middleware
  • Scheduling
  • Data Management
  • Monitoring
  • Error Detection & Recovery
  • Standard application services layer
  • Inter-project coherence/compatibility
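To make these layers concrete, here is a minimal, illustrative Python sketch of how scheduling, data management and monitoring could fit together as separate middleware services. The class and method names are invented for illustration; they are not the project's actual interfaces.

```python
# Toy sketch of the middleware layers listed above; all names are invented.
from dataclasses import dataclass


@dataclass
class Job:
    job_id: str
    input_files: list
    status: str = "queued"


class Scheduler:
    """Scheduling layer: decides which job runs next (trivially FIFO here)."""
    def __init__(self):
        self.queue = []

    def submit(self, job: Job):
        self.queue.append(job)

    def next_job(self):
        return self.queue.pop(0) if self.queue else None


class DataManager:
    """Data management layer: maps logical file names to replica sites."""
    def __init__(self):
        self.replicas = {}

    def register(self, lfn: str, site: str):
        self.replicas.setdefault(lfn, set()).add(site)

    def sites_for(self, lfn: str):
        return self.replicas.get(lfn, set())


class Monitor:
    """Monitoring layer: records status changes - the hook where error
    detection & recovery would plug in."""
    def __init__(self):
        self.events = []

    def record(self, job: Job, status: str):
        job.status = status
        self.events.append((job.job_id, status))
```

A standard application services layer would then sit on top of these, hiding the individual services from physics code.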

14
Grid Technology for the LHC Grid
  • An LHC collaboration needs a usable, coherent computing environment - a Virtual Computing Centre - a Worldwide Grid
  • Already - even in the HEP community - there are several Grid technology development projects, with similar but different goals

15
A few of the Grid Technology Projects
  • Data-intensive projects
  • DataGrid - 21 partners, coordinated by CERN (Fabrizio Gagliardi)
  • CrossGrid - 23 partners, complementary to DataGrid (Michal Turala)
  • DataTAG - funding for transatlantic demonstration Grids (Olivier Martin)
  • European national HEP-related projects
  • GridPP (UK), INFN Grid, Dutch Grid, NorduGrid, Hungarian Grid
  • US HEP projects
  • GriPhyN - NSF funding, HEP applications
  • PPDG (Particle Physics Data Grid) - DoE funding
  • iVDGL - international Virtual Data Grid Laboratory
  • Global Coordination
  • Global Grid Forum
  • InterGrid - ad hoc HENP Grid coordination (Larry Price)

16
Grid Technology for the LHC Grid
  • An LHC collaboration needs a usable, coherent computing environment - a Virtual Computing Centre - a Worldwide Grid
  • Already - even in the HEP community - there are several Grid technology development projects, with similar but different goals
  • And many of these overlap with other communities
  • How do we achieve and maintain compatibility, and provide one usable computing system?
  • architecture? API? protocols?
  • while remaining open to external, industrial solutions

This will be a significant challenge for
the LHC Computing Grid
Project
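One conventional way to attack this compatibility question - shown here purely as an illustration, not as the project's chosen architecture - is a thin common API that each middleware stack implements as an adapter, so application code is written once:

```python
# Hypothetical common services layer over several middleware stacks.
# The back-end classes are placeholders, not real project or vendor APIs.
from abc import ABC, abstractmethod


class GridBackend(ABC):
    """The single interface experiment software would code against."""
    @abstractmethod
    def submit(self, executable: str, inputs: list) -> str: ...

    @abstractmethod
    def status(self, job_id: str) -> str: ...


class MiddlewareA(GridBackend):
    """Adapter for one grid project's middleware (details stubbed out)."""
    def submit(self, executable, inputs):
        return f"A-{abs(hash((executable, tuple(inputs)))) % 10000}"

    def status(self, job_id):
        return "running"


class MiddlewareB(GridBackend):
    """Adapter for another project's middleware."""
    def submit(self, executable, inputs):
        return f"B-{abs(hash((executable, tuple(inputs)))) % 10000}"

    def status(self, job_id):
        return "running"


def run_everywhere(backends, executable, inputs):
    """Application code stays portable across any conforming back-end."""
    return [b.submit(executable, inputs) for b in backends]


print(run_everywhere([MiddlewareA(), MiddlewareB()], "reco.exe", ["raw.dat"]))
```

An external or industrial solution could then be slotted in as just another adapter behind the same interface.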
17
Grid Deployment
  • Data Challenges
  • Grid Operations
  • Integration of the Grid Physics Environments
  • Network Planning
  • Regional Centre Coordination
  • Security & access policy

18
Time constraints
(Timeline chart, 2001-2006: continuing R&D programme; prototyping (proto 1, proto 2, proto 3); pilot technology selection; pilot service; system software selection, development, acquisition; hardware selection, acquisition; 1st production service.)
19
First milestone - Prototype 1
  • Deploy a first prototype Grid
  • performance & scalability testing of components of the computing fabric (clusters, disk storage, mass storage system, system installation, system monitoring)
  • straightforward physics applications
  • demonstrate job scheduling, data replication
  • March 2002 target
  • Regional Centres from Europe and US
  • Technology from several grid projects (DataGrid, PPDG, ...)
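To illustrate the data-replication part of this milestone, the sketch below shows the kind of toy replica catalogue a first prototype might exercise; the file and site names are invented, and the actual wide-area transfer is omitted.

```python
# Illustrative replica catalogue; names are invented for the example.
class ReplicaCatalogue:
    def __init__(self):
        self.catalogue = {}  # logical file name -> {site: physical file name}

    def register(self, lfn, site, pfn):
        self.catalogue.setdefault(lfn, {})[site] = pfn

    def replicate(self, lfn, src, dst):
        """Record a copy of an existing replica at a new site
        (the data transfer itself is omitted here)."""
        pfn = self.catalogue[lfn][src]  # KeyError if there is no source replica
        self.catalogue[lfn][dst] = pfn.replace(src, dst)

    def locate(self, lfn):
        return sorted(self.catalogue.get(lfn, {}))


rc = ReplicaCatalogue()
rc.register("lhc/esd/run001", "cern", "/cern/store/lhc/esd/run001")
rc.replicate("lhc/esd/run001", "cern", "lyon")
print(rc.locate("lhc/esd/run001"))  # ['cern', 'lyon']
```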

20
  • Organisation

21
The LHC Computing Grid - Project Structure
(Organisation chart: the LHCC reviews the project; the Common Computing RRB handles resource matters; the Project Overview Board receives reports; the Project Manager and Project Execution Board direct the implementation teams; the Software and Computing Committee (SC2) sets requirements and monitors.)
22
The LHC Computing Grid - Project Structure
(Same organisation chart as the previous slide, with external relationships added: Other Computing Grid Projects, Other HEP Grid Projects, Other Labs.)
23
  • Status

24
Funding of Phase 1 at CERN
  • Funding for R&D activities at CERN during 2002-2004 partly through special contributions from member and associate states -
  • Austria, Belgium, Bulgaria, Czech Republic, France, Germany, Greece, Hungary, Israel, Italy, Spain, Switzerland, United Kingdom
  • Industrial funding - CERN openlab: Intel, Enterasys, KPNQwest
  • European Union - DataGrid, DataTAG; further possibilities (FP6)
  • Funded so far - all of the personnel, 40% of the materials

25
Status of Funding Agreements
26
Project Startup
  • Collaborations have named their representatives
    in the various committees
  • First pre-POB meetings being scheduled - will not be fully populated
  • PEB - this week
  • SC2 - beginning of December
  • Kick-off workshop in February?

27
  • Additional information

28
The LHC Computing Grid - Project Structure
Project Overview Board
Chair - CERN Director for Scientific Computing
Secretary - CERN IT Division Leader
Membership - Spokespersons of LHC experiments; CERN Director for Colliders; representatives of countries/regions with a Tier-1 centre (France, Germany, Italy, Japan, United Kingdom, United States of America); 4 representatives of countries/regions with a Tier-2 centre, from CERN Member States
In attendance - Project Leader, SC2 Chairperson
(Organisation chart as on slide 21.)
29
The LHC Computing Grid - Project Structure
Software and Computing Committee (SC2) (preliminary)
Sets the requirements; approves the strategy & workplan; monitors progress and adherence to the requirements; gets technical advice from short-lived, focused RTAGs (Requirements & Technology Assessment Groups)
Chair - to be appointed by CERN Director General
Secretary
Membership - 2 coordinators from each LHC experiment; representative from CERN EP Division; technical managers from centres in each region represented in the POB; Leader of the CERN Information Technology Division; Project Leader
Invited - POB Chairperson
(Organisation chart as on slide 21.)
30
The LHC Computing Grid - Project Structure
  • Project Execution Board - gets agreement on milestones, schedule, resource allocation
  • Manages the progress and direction of the project
  • Ensures conformance with SC2 recommendations
  • Identifies areas for study/resolution by SC2
  • Membership (preliminary - POB approval required)
  • Project Management Team:
  • Project Leader, Area Coordinators
  • Applications
  • Fabric (basic computing systems)
  • Grid technology - from worldwide grid projects
  • Grid deployment, regional centres, data challenges
  • Empowered representative from each LHC Experiment
  • Project technologist, Resource manager
  • Leaders of major contributing teams
  • Constrain to 15-18 members
(Organisation chart as on slide 21.)
31
Experiment members
32
The Large Hadron Collider Project - 4 detectors: ALICE, ATLAS, CMS, LHCb
Storage - raw recording rate 0.1 - 1 GBytes/sec, accumulating at 5-8 PetaBytes/year, 10 PetaBytes of disk
Processing - 200,000 of today's fastest PCs
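A quick consistency check of the storage figures, assuming (a rough figure of our own, not stated on the slide) about 10^7 seconds of data-taking per year:

```python
# Back-of-the-envelope check of the quoted accumulation rate.
seconds_per_year = 1e7             # assumed accelerator live time per year
rate_low, rate_high = 0.1e9, 1e9   # raw recording rate, 0.1 - 1 GBytes/sec
petabyte = 1e15

print(rate_low * seconds_per_year / petabyte)   # 1.0  PetaByte/year
print(rate_high * seconds_per_year / petabyte)  # 10.0 PetaBytes/year
# The quoted 5-8 PetaBytes/year sits inside this 1-10 PB/year range.
```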
33
(No Transcript)
34
Summary of Additional Resources needed
35
Distributed Computing
  • Distributed computing - 1990s - locally distributed systems
  • Clusters
  • Parallel computers (IBM SP)
  • Advances in local area networks, cluster management techniques
  • → 1,000-way clusters widely available
  • Distributed Computing - 2000s
  • Giant clusters → fabrics
  • New level of automation required
  • Geographically distributed systems
  • Computational Grids
  • Key areas for R&D
  • Fabric management
  • Grid middleware
  • High-performance networking
  • Grid operation
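The "new level of automation" point can be made concrete with a toy reconciliation loop: at fabric scale, failed nodes must be found and repaired automatically rather than by hand. Node names, states and the repair action below are all invented for illustration.

```python
# Toy self-healing pass over a cluster inventory; purely illustrative.
def reconcile(nodes, desired_state="healthy"):
    """Compare each node against the desired state and schedule repairs."""
    actions = []
    for node, state in nodes.items():
        if state != desired_state:
            actions.append((node, "drain-and-reinstall"))
            nodes[node] = desired_state  # assume the repair succeeds
    return actions


inventory = {f"node{i:04d}": "healthy" for i in range(1000)}  # 1,000-way cluster
inventory["node0042"] = "disk-errors"
print(reconcile(inventory))  # [('node0042', 'drain-and-reinstall')]
```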

36
  • Grid Technology and LHC

37
Computational Grids - Significance for CERN and LHC
  • LHC Data Analysis
  • enormous computing requirements - processors, data storage
  • global distribution of the user community
  • distributed funding model for particle physics research
  • implies
  • a geographically distributed computing facility
  • with independent ownership/management of the different nodes
  • each with different access and usage policies
  • and serving multiple user communities
  • but an LHC collaboration should see this complex environment as a coherent virtual computing centre

38
The promise of Grid Technology
  • Emerging computational grid technology promises to provide solutions to many of the problems inherent in the geographically distributed environment
  • wide area distributed data management - migration, caching, replication, synchronisation, ...
  • efficient resource scheduling - mapping between global and local policies, problem decomposition, matching available processing resources with distributed data, ...
  • security - global authentication, local authorisation, data protection, ...
  • performance and status monitoring
  • reliability and sustained throughput - system resilience & application recovery
  • efficient use of high throughput wide area networks
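As a sketch of the resource-scheduling idea - matching available processing resources with distributed data - the toy function below sends a job to the site with free capacity that already holds the most of its input replicas, so bulk transfers over the wide-area network are minimised. The sites, policies and data are invented, not a real scheduler.

```python
# Illustrative data-aware site selection; all names are invented.
def best_site(job_inputs, replica_sites, site_capacity):
    """Pick the free site holding the most input replicas."""
    scores = {}
    for site, free_slots in site_capacity.items():
        if free_slots <= 0:
            continue  # local policy: site is full, skip it
        scores[site] = sum(
            1 for lfn in job_inputs if site in replica_sites.get(lfn, ())
        )
    return max(scores, key=scores.get) if scores else None


replicas = {"esd/run042": {"cern", "fnal"}, "esd/run043": {"fnal"}}
capacity = {"cern": 120, "fnal": 40, "ral": 0}
print(best_site(["esd/run042", "esd/run043"], replicas, capacity))  # fnal
```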

39
What will the Grid do for you?
  • you submit your work
  • and the Grid
  • Finds convenient places for it to be run
  • Optimises use of the widely dispersed resources
  • Organises efficient access to your data
  • Caching, migration, replication
  • Deals with authentication to the different sites
    that you will be using
  • Interfaces to local site resource allocation
    mechanisms, policies
  • Runs your jobs
  • Monitors progress
  • Recovers from problems
  • ... and ... Tells you when your work is complete
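Compressed into a toy state machine, this submit-and-forget workflow might look like the sketch below; the states, failure rate and retry policy are all invented for illustration.

```python
# Toy grid job lifecycle with simulated failure and recovery.
import random

STATES = ["submitted", "scheduled", "staging-data", "running", "done"]


def run_job(name, max_retries=2):
    """Walk a job through its states, recovering from simulated failures."""
    for attempt in range(max_retries + 1):
        for state in STATES:
            print(f"{name}: {state}")
            if state == "running" and random.random() < 0.2:
                print(f"{name}: failed - recovering (attempt {attempt + 1})")
                break  # recover: reschedule the job from the start
        else:
            print(f"{name}: your work is complete")  # the final notification
            return True
    return False


random.seed(1)
run_job("analysis-pass-1")
```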

40
Grid & LHC
  • In the longer term -
  • Grid technology could provide mainline, standard, industrial solutions to help build the LHC computing environment
  • In the medium term -
  • LHC offers an ideal testbed for Grid technology
  • giant computing problem, but relatively simple computing model
  • global community - encourage inter-working of technology from different projects
  • clear timescale - ensure a strong focus on solving the real, not the theoretical, problems