CMS Software and Computing - PowerPoint PPT Presentation


Transcript and Presenter's Notes

Title: CMS Software and Computing


1
  • CMS Software and Computing
  • Martti Pimiä
  • CERN/CMS
  • DOE/NSF Baseline Review of US CMS
  • Software and Computing
  • Brookhaven National Lab
  • November 14, 2000

2
Outline
  • Introduction
  • CMS Software Architecture development plans
  • Status of Software / Computing Milestones
  • Interactions within CMS Collaboration
  • Distributed computing
  • Evolving organization
  • Key issues in 2001-2002

3
CMS Core Software and Computing Scope
  • Framework, Architecture, Tools and Facilities for
  • Design, evaluation and calibration of the
    detector
  • Storage, access, distribution and processing of
    data
  • Event simulation, reconstruction and analysis
  • Distributed collaboration, software development,
    data processing, and physics analysis
  • An integral part of CMS and crucial to its
    success
  • Now, similar to a subdetector system in terms of
    scale and complexity
  • Treated organizationally in CMS like any subdetector system
  • Will be a main activity of CMS during LHC
    operation

4
Software Life Cycle
[Timeline diagram, 2000-2015, showing three parallel life cycles: Detector (design, prototypes, mass production, installation and commissioning, maintenance and operation); Computing Hardware (functional prototype, fully functional, production system, further development); Software (design, integration, prototype, test, integrate, deploy, in cyclic releases).]
5
Software Development Phases
  • 1 Proof of Concept (end of 1998)
  • Basic functionality
  • Very loosely integrated
  • 2 Functional Prototype
  • More complex functionality
  • Integrated into projects
  • Reality Check: 1% Data Challenge
  • 3 Fully Functional System
  • Complete functionality
  • Integration across projects
  • Reality Check: 5% Data Challenge
  • 4 Pre-Production System
  • Reality Check: 20% Data Challenge
  • 5 Production System
  • Online / Trigger systems: 75 → 100 Hz
  • Offline systems: a few 10^15 bytes / year
  • 10^9 events / year to look for a handful of (correct!) Higgs (see the rough check after this list)
  • Highly distributed collaboration and resources
  • Long lifetime
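A rough consistency check of the production-system figures, assuming (these are assumptions, not stated on the slide) about 1 MByte per event and a nominal 10^7 seconds of data taking per year:

// Rough consistency check of the slide's rate and volume figures.
// Assumed inputs: ~1 MByte per event, ~1e7 seconds of data taking per year.
#include <cstdio>

int main() {
    const double triggerRateHz   = 100.0;  // events written offline per second
    const double eventSizeBytes  = 1.0e6;  // ~1 MByte per event (assumed)
    const double liveSecondsYear = 1.0e7;  // nominal running time per year (assumed)

    const double eventsPerYear = triggerRateHz * liveSecondsYear; // ~1e9 events
    const double bytesPerYear  = eventsPerYear * eventSizeBytes;  // ~1e15 bytes

    std::printf("events per year ~ %.1e\n", eventsPerYear);
    std::printf("bytes per year  ~ %.1e (about a petabyte)\n", bytesPerYear);
    return 0;
}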

6
Significant Requirements from CMS TDRs, or: it's not just an evolution to the 2005 software
TDR schedule:
  • Trigger TDR: Dec 2000
  • DAQ TDR: Dec 2001
  • Software & Computing TDR: Dec 2002
  • Physics TDR: Dec 2003
Major Core Software Milestones
7
Staying on the Technology Curve and Tracking Fallback Solutions
Baseline strategy plus fallback strategies: always keep one or two fallbacks in addition to the current baseline.
Technologies die and perform in unexpected ways ⇒ design for change (see the sketch below). There is never a point where technology choices are frozen ⇒ never be in a no-return situation.
(Phases on the timeline: Functional Prototype → Fully Functional → Production System, 2000-2015.)
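One concrete reading of "design for change" is to keep reconstruction and analysis code behind a stable abstract interface, so the baseline persistence technology can be swapped for a fallback without touching client code. The C++ sketch below is purely illustrative; every class name in it is hypothetical and none of it is actual CMS software.

// Illustrative sketch only: hypothetical classes, not actual CMS code.
#include <cstddef>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Abstract event-store interface that client code programs against.
class EventStore {
public:
    virtual ~EventStore() = default;
    virtual void write(const std::string& event) = 0;
    virtual std::string read(std::size_t index) const = 0;
};

// Stand-in for the baseline backend (e.g. an object database).
class BaselineObjectStore : public EventStore {
    std::vector<std::string> events_;
public:
    void write(const std::string& event) override { events_.push_back(event); }
    std::string read(std::size_t index) const override { return events_.at(index); }
};

// Stand-in for a fallback backend (e.g. plain files).
class FallbackFileStore : public EventStore {
    std::vector<std::string> events_;
public:
    void write(const std::string& event) override { events_.push_back(event); }
    std::string read(std::size_t index) const override { return events_.at(index); }
};

int main() {
    // The concrete backend is chosen in exactly one place; switching from
    // the baseline to a fallback is a one-line change, not a client rewrite.
    std::unique_ptr<EventStore> store = std::make_unique<BaselineObjectStore>();
    store->write("event 0: raw data");
    std::cout << store->read(0) << '\n';
    return 0;
}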
8
Milestones: Software
  • CMS software development strategy
  • First, the completed transition to C++; now, functionality with good performance; then, optimized performance

In 2005 we need fully functional, tested, high-quality, performing software. The phases: test ideas, make prototypes, develop modules, and integrate. In 2000 we have functional prototypes (see the talk by D. Stickland). Next: create the basis for the final system.
9
CMS SW Architecture Working Group
  • Next iteration on software development, to build the basis for fully functional (pre-production) software
  • Iterate on and extend the CMS software architecture
  • Keep efficient data access and a flexible analysis strategy
  • Domains, Interfaces, Designs
  • Documentation, interface stability, tested functionality
  • GRID awareness, for distributed data analysis
  • Architecture Working Group, CAFE
  • CMS software architect plus core developers
  • Open meetings
  • Review the present system, review requirements
  • Document, build higher-level tools, prepare for wide deployment

10
Contributions to Core Activities
  • CERN
  • General software support, software repository
    services, software process, development and
    support of tools
  • Architecture contributions
  • Database contributions (event data, geometry)
  • Development and support for distributed computing
  • Finland
  • Quality, Testing, Librarian services for
    simulation
  • France
  • Testing, Simulation tools
  • Italy
  • Modelling, Tests
  • Pakistan
  • Tools for distributed computing
  • US
  • Many activities (see presentations); leading in several areas

11
S&C interaction with the full CMS
  • The CMS S&C effort is well integrated with the whole collaboration, users and developers
  • Interaction at all levels: technical, organisational, managerial, ...
  • Subdetector representatives in the Technical Board (SCTB); regional representatives and finances in the Institution Board (SCB)
  • Active participation of all parties
  • Testing detectors at test beams
  • Development and use of high level trigger (HLT),
    reconstruction (ORCA), physics selection (PRS),
    and simulation code
  • ORCA tutorials
  • Workshops on Computing for Physics:
  • May 17, 2000: Data model and storage (event selection, data access)
  • Sept 26, 2000: Calibrations, Online/Offline issues (data streams, farms)
  • Early 2001: Analysis strategy and tools
  • Architecture Working Group

12
Milestones: Data
  • 10 Petabytes for raw data, simulated data,
    reconstructed data, physics objects, calibration
    data, user data, ...

Need to add GRID milestones for distributed
system integration
13
Worldwide Data Grid for CMS
One bunch crossing every 25 nsec; 100 triggers per second; each event is 1 MByte in size.
  • Experiment → Online System: PBytes/sec
  • Online System → Offline Farm, CERN Computer Center (Tier 0 + 1): 100 MBytes/sec
  • Tier 0 + 1 → Tier 1 regional centres (US Center @ FNAL, France Center, Italy Center, UK Center): 0.6 - 2.5 Gbits/sec, plus air freight
  • Tier 1 → Tier 2 centres: 2.4 Gbits/sec
  • Tier 2 → Tier 3 (institutes): 622 Mbits/sec
  • Institute → Tier 4 (physics data cache, workstations): 100 - 1000 Mbits/sec
Physicists work on analysis channels; each institute has about 10 physicists working on one or more channels. (A quick check of these figures follows below.)
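As a quick sanity check on the figures above (the 1 TB sample size used in the sketch is an arbitrary illustrative number, not taken from the slide):

// Check that 100 triggers/s at ~1 MByte/event gives the quoted 100 MBytes/sec,
// and estimate transfer times for a 1 TB sample (illustrative size only)
// over two of the quoted links.
#include <cstdio>

int main() {
    const double onlineBytesPerSec = 100.0 * 1.0e6;  // 100 Hz x 1 MB = 1e8 B/s
    std::printf("Online -> Tier 0 rate: %.0f MBytes/sec\n", onlineBytesPerSec / 1.0e6);

    const double sampleBytes        = 1.0e12;   // 1 TB, assumed for illustration
    const double tier1to2BitsPerSec = 2.4e9;    // Tier 1 -> Tier 2 link
    const double tier2to3BitsPerSec = 622.0e6;  // Tier 2 -> Tier 3 link
    std::printf("1 TB over 2.4 Gbit/s: %.1f hours\n",
                sampleBytes * 8.0 / tier1to2BitsPerSec / 3600.0);
    std::printf("1 TB over 622 Mbit/s: %.1f hours\n",
                sampleBytes * 8.0 / tier2to3BitsPerSec / 3600.0);
    return 0;
}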
14
CMS distributed computing
  • Match two approaches:
  • Top-down approach: scaling and testing of the distributed system, progressively growing over the years, to reach the foreseen dimension
  • Bottom-up approach: build a distributed system able to provide the resources for Trigger and Physics studies, with a progressive growth of required resources
  • Need prototypes (at CERN and outside) to:
  • Test scaling of the Computing Model
  • Provide the resources for Physics activities
  • Exercise the software system to match the physicists' needs
  • Build systems progressively: match today's needs for production, meet the requirements at startup in 2005, and beyond
  • Continue development and adaptation to new technologies

15
CMS distributed computing
  • CERN (Tier0 + Tier1) and many Tier1 candidates have plans for prototyping and implementation starting from next year. Most of them already contribute to the production for trigger studies.
  • CERN Event Filter Farm prototype (> 200 CPUs) used in 1999-2000 as a pre-prototype for Tier0 + Tier1. Part-time use of 500 CPUs at CERN foreseen in 2001.
  • Prototypes are foreseen to reach, in 2003:
  • About 50% of the CMS 2005 system at CERN (Tier0 + Tier1); a common shared effort for all LHC experiments, funded by CERN
  • At least 10% of the final system for CMS at many Tier1 centres
  • A reasonable number of Tier2 prototypes; one is already being built in California

16
Milestones: Hardware
TDR and MoU

17
CMS regional center candidates
  • Finland: Tier2 Helsinki
  • France: Tier1 CCIN2P3/Lyon; Tier2 ?
  • India: Tier2 Mumbai
  • Italy: Tier1 INFN; Tier2 INFN, at least 3 sites
  • Pakistan: Tier2 Islamabad
  • Russia: Tier1 Moscow; Tier2 Dubna
  • UK: Tier1 RAL; Tier2 ?
  • US: Tier1 FNAL; Tier2 Caltech, UC-San Diego, Florida, Iowa, Maryland, Minnesota, Wisconsin, Boston, and others

18
CMS distributed computing
  • CMS is progressing towards a coherent distributed system to support production and analysis
  • We need to start studying the problems, and prototyping solutions, for distributed analysis by hundreds of users in many countries
  • Production, via the prototypes, will lead to decisions about the architecture on the basis of measured performance and possibilities

19
Evolving CMS organization
  • Evolve and refine the CMS Software/Computing organization to build complete and consistent physics software
  • Joint Technical Board
  • Core Software and Computing project (CSC)
  • Software & Computing TDR; support for the Physics TDR; Computing MoU
  • Distributed computing, GRID integration
  • Physics Reconstruction and Selection (PRS)
  • Consolidate physics software work between the detector groups, targeted at CMS deliverables (HLT design, test beams, calibrations, preparation for the Physics TDR, etc.)
  • Trigger and Data Acquisition (TriDAS)
  • Online Event Filter Farm, online framework

20
(No Transcript)
21
Summary
  • US CMS contributions are well integrated with CMS needs
  • Important software engineering contributions to:
  • Architecture
  • Analysis tools
  • Distributed data access and processing
  • CMS software is widely used for US CMS physics activities
  • Important to baseline the major national projects (see M. Kasemann presentation)
  • Work towards a Computing MoU in the collaboration, at the political level
  • US CMS Core Software structures match well with the evolving CMS Core Software & Computing organization

22
Key issues in 2001-2002
  • Distributed computing
  • Understand Tier0 - Tier1 - Tier2 relationships
  • Choice of a baseline for the database
  • Significant ramp-up of Grid R&D and production
  • Core software engineering
  • Proved invaluable in the Functional Prototype phase, now successfully completed
  • Software engineering manpower shortage
  • Slipped some milestones, or passed them with minimal functionality
  • Apparent functionality achieved at some expense of quality and engineering
  • Perform the HLT data challenge (20 TB in 2001) and apply it to trigger studies
  • Start up Analysis Software (see J. Branson presentation)
  • Overall assessment, formalization and consolidation of the software systems