US-CMS User Facility and Regional Center at Fermilab

1
US-CMS User Facility and Regional Center at Fermilab
  • Matthias Kasemann
  • FNAL

2
Outline
  • User Facility
  • Goals, Schedule, Organization
  • US-CMS Regional Center at Fermilab
  • Hardware and Networking
  • Data required
  • Support Functions
  • Size of Regional Center
  • Cost and Personnel Profile
  • Summary

3
User Facility Goals
  • Scope and size: support the US-CMS collaboration
    (20% of full CMS).
  • Provide the enabling software and computing
    infrastructure to allow US physicists to
    participate fully in the CMS physics program.
  • Support the development and data analysis
    activities of US-CMS:
  • acquire, develop, install, integrate, commission
    and operate hardware and software.
  • This includes a major Tier 1 regional computing
    center at Fermilab to support US physicists
    working on CMS.

4
US-CMS RC1 at FNAL: Schedule and Organization
  • 1999 - 2003: R&D Phase
  • 2003 - 2005: Implementation Phase
  • 2006 onwards: Operations Phase
  • All phases of the US-CMS RC1 will be managed and
    operated within the FNAL Computing Division.
  • FNAL-CD is gaining highly relevant experience
    from Tevatron Run II preparations and operations.
  • This puts us in a unique position to support a
    collaboration of 400-500 CMS scientists doing
    pp physics.

5
Fermilab Run II Computing and Software
  • Run II parameters for DØ:
  • Trigger rate: 50 Hz (LHC / 2)
  • Raw data event size: 250 kB (LHC / 4)
  • Data collection: 6 × 10⁸ events/yr (LHC / 1.6)
  • Summary event size: 150 kB (LHC × 1.5)
  • Physics summary event size: 10 kB (LHC)
  • Total dataset size: 300 TB/yr (LHC / 3)
  • Bottom line: a computing project of
    O(Run I × 20) = O(LHC / 2-3).
  • This will be accomplished with resources
    available in 2000.
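The per-event sizes and yearly event count above can be cross-checked with a quick back-of-envelope calculation (a sketch using only the figures quoted on this slide; the quoted 300 TB/yr total presumably also covers reprocessing and simulation copies beyond the ~246 TB accounted for here):

```python
# Back-of-envelope check of the Run II (DØ) data-volume figures on this
# slide. Event count and per-event sizes come from the slide; 1 TB = 1e12 B.
EVENTS_PER_YEAR = 6e8
RAW_KB, SUMMARY_KB, PHYSICS_KB = 250, 150, 10

def yearly_volume_tb(events, event_size_kb):
    """Yearly data volume in TB for a given per-event size in kB."""
    return events * event_size_kb * 1e3 / 1e12

raw = yearly_volume_tb(EVENTS_PER_YEAR, RAW_KB)          # 150 TB raw
summary = yearly_volume_tb(EVENTS_PER_YEAR, SUMMARY_KB)  # 90 TB summary
physics = yearly_volume_tb(EVENTS_PER_YEAR, PHYSICS_KB)  # 6 TB physics summary
total = raw + summary + physics
print(f"raw={raw:.0f} TB, summary={summary:.0f} TB, total={total:.0f} TB/yr")
```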

6
US-CMS Regional Center at FNAL: Hardware and Networking
  • Implement the main US-CMS regional center at
    Fermilab
  • include substantial CPU for analysis,
    reprocessing and simulation
  • data storage
  • data access facilities.
  • Networks and networking will play a key role in
    this distributed computing model
  • Deliver data to other US-CMS institutions through
    high-speed networks.
  • High bandwidth network connection to CERN.
  • Technology and policy in this area are in a
    state of flux; we have to track technology and
    participate in prototyping work.

7
Data required at US-CMS RC1
  • Event data available at regional center
  • event samples for testing and code development,
  • a collection of very interesting events
  • the full set of the Analysis Object data (AOD)
  • about 10% of the raw and reconstructed data (ESD)
  • all available event metadata
  • Non-event data:
  • detector databases for calibration, monitoring,
    geometry, cabling
  • data-related databases: production log, run
    conditions, trigger setups

8
US-CMS Regional Center at FNAL: Support Functions
  • Include user support personnel
  • training, documentation, code distribution, ...
  • Personnel to manage licenses and license
    acquisition.
  • Contract for needed services.
  • Responsibility and personnel to develop or
    acquire any software that is required to carry
    out its production and operation activities.
  • Provide support for many development activities
    during the detector construction period before
    data taking begins.
  • Provide ongoing support during operations phase
    of the experiment.

9
R&D Phase Activities
  • Participate in R&D to prove the concept of LHC
    regional centers:
  • MONARC testbed
  • Object database testbeds
  • Prototype regional center by 2002
  • Networking testbeds
  • In the R&D phase and beyond:
  • Provide CMS user support
  • documentation
  • code management and distribution
  • training
  • user help desk

10
Size of the Tier 1 Regional Center
  • The base for these estimates is a set of figures
    for CMS offline computing at CERN that was made
    available by CMS in mid-1998.
  • CPU (1 TIPS = 10⁶ MIPS ≈ 25k SpecInt95)
  • in 2005: 3.6 TIPS
  • 2006 on: 1.2 TIPS new + 1.2 TIPS replacement per year
  • Disk storage
  • in 2005: 108 TB
  • 2006 on: 40 TB new + 27 TB replacement per year
  • Serial storage
  • in 2005: 1 robot, 0.4 PB
  • 2006 on: 0.2 PB new per year; a new robot every 3 years
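The CPU figures use the conversion rule in the heading (1 TIPS = 10⁶ MIPS ≈ 25k SpecInt95). A minimal sketch of that conversion, applied to the targets above:

```python
# Rule of thumb from this slide: 1 TIPS = 1e6 MIPS, roughly 25,000 SpecInt95.
SI95_PER_TIPS = 25_000

def tips_to_si95(tips):
    """Convert aggregate CPU capacity from TIPS to SpecInt95 units."""
    return tips * SI95_PER_TIPS

# 2005 target of 3.6 TIPS, and the yearly 2006-on additions
# (1.2 TIPS new + 1.2 TIPS replacement):
print(tips_to_si95(3.6))        # 90000.0
print(tips_to_si95(1.2 + 1.2))  # ~60000
```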

11
Hardware cost estimation
  • Costs are de-escalated according to technology
    cost development expectations.
  • Actual costs for CPU and disk are based on our
    Run II experience:
  • CPU: $3.5M/TIPS in 2003 (extrapolated from 1999)
  • disk: $26.90/GB in 2003 (extrapolated from 1999)
  • The costs reflect the use of expensive SMP
    machines for analysis in Run II.
  • Costs for serial storage are also based on Run II:
  • $1.1M per robot plus $0.5M/PB for media
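The "de-escalation" above can be sketched as a simple exponential price/performance model. Both the 1.5-year halving time and the $22.2M/TIPS 1999 starting price below are illustrative assumptions chosen to reproduce the slide's 2003 CPU figure, not numbers from the slide:

```python
# Sketch of cost de-escalation: project a known unit cost forward under
# an assumed exponential price/performance improvement.
def deescalated_cost(cost_now, years_ahead, halving_years=1.5):
    """Unit cost after `years_ahead` years, halving every `halving_years`."""
    return cost_now * 0.5 ** (years_ahead / halving_years)

# Illustration: a hypothetical $22.2M/TIPS price in 1999 falls to roughly
# $3.5M/TIPS by 2003 with a 1.5-year halving time (4 yr = 2.67 halvings).
print(round(deescalated_cost(22.2, 4), 2))  # ≈ 3.5
```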

12
Regional Center Schedule: Implementation and Operation Phase
  • To distribute cost over 2003, 2004 and 2005 and
    profit from price/performance evolution, we plan
    to acquire 10% of disk and CPU in 2003, 30% in
    2004, and 60% in 2005.
  • Integrated capacity installed per year
    (2003 / 2004 / 2005 / 2006, CMS operation):
  • CPU: 0.4 / 1.44 / 3.6 / 4.8 TIPS
  • disk: 11 / 43 / 108 / 148 TB
  • tape: 0.03 / 0.05 / 0.4 / 0.6 PB
  • robots: 1 / - / - / 0.33 robots
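The staged acquisition profile above can be sketched directly: buying 10%, 30% and 60% of the final 2005 capacity in successive years accumulates to the full target (the slide rounds the first-year CPU figure of 0.36 TIPS up to 0.4):

```python
# Staged acquisition: purchase fractions of the final 2005 capacity in
# 2003/2004/2005 and accumulate the installed total. Targets are the
# 2005 figures quoted earlier in this deck.
from itertools import accumulate

TARGET_CPU_TIPS = 3.6
TARGET_DISK_TB = 108
FRACTIONS = [0.10, 0.30, 0.60]  # purchased in 2003, 2004, 2005

cpu_installed = list(accumulate(f * TARGET_CPU_TIPS for f in FRACTIONS))
disk_installed = list(accumulate(f * TARGET_DISK_TB for f in FRACTIONS))
print(cpu_installed)   # ~[0.36, 1.44, 3.6] TIPS
print(disk_installed)  # ~[10.8, 43.2, 108] TB
```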

13
Cost and Personnel Profile
14
Cost Summary
  • Total costs through 2005:
  • hardware: $10.6M
  • networking in the US: $2.1M
  • licenses: $0.1M / year
  • Operating phase:
  • 2006 onward: $3.9M hardware; 35 people
  • Note:
  • These hardware costs are based on what it costs
    to serve 400-500 people in CDF and DØ, under the
    assumption of exponential price/performance
    evolution ($9.1M for CDF or DØ).
  • The numbers are conservative (SMP CPUs, ...).

15
US-CMS User Facility: Summary
  • The User Facility has to provide US-CMS
    scientists with a competitive infrastructure to
    fully participate in the science program of CMS.
  • FNAL, with its experience from the Tevatron
    experiments, is well suited to host the major
    US-CMS Tier 1 Regional Center.
  • Setup and support of the Regional Center will
    happen during 2003 - 2005. Initial hardware cost
    estimates (w/o networking) are $10.6M and 35
    FTEs (by 2005).
  • Operations cost estimates are $2.7M/year for
    hardware (w/o networking) and about 35 people.
  • Networking costs will be substantial.