ATLAS the Grid

Transcript and Presenter's Notes


1
ATLAS, the Grid and ScotGRID
John Kennedy
2
The ATLAS Computing Challenge
  • Running conditions at startup
  • 0.8 × 10⁹ event sample → 1.3 PB/year, before data processing
  • Reconstructed events and Monte Carlo data → 10 PB/year (3 PB on disk)
  • CPU: 1.6M SpecInt95, including analysis
  • CERN alone can provide only a fraction of these resources

3
The Hierarchical View
(Diagram: the hierarchical, tiered view of LHC computing; 1 TIPS = 25,000 SpecInt95, and a PC from 1999 is about 15 SpecInt95.)
  • Online System: one bunch crossing per 25 ns, 100 triggers per second, each event is 1 MByte; data flow from the detector is of order PBytes/sec, with 100 MBytes/sec out to the Offline Farm (these rates are checked in the sketch below)
  • Offline Farm (20 TIPS), connected to Tier 0 at 100 MBytes/sec
  • Tier 0: CERN Computer Centre (>20 TIPS) with HPSS mass storage, connected to the Tier 1 centres at Gbits/sec or by air freight
  • Tier 1: Regional Centres, each with HPSS: UK Regional Centre (RAL), US Regional Centre, French Regional Centre, Italian Regional Centre
  • Tier 2: Tier 2 Centres of 1 TIPS each, connected to Tier 1 at Gbits/sec
  • Tier 3: institute servers with a physics data cache (Lancaster at 0.25 TIPS, Sheffield, Manchester, Liverpool), connected at 100-1000 Mbits/sec; physicists work on analysis channels, each institute has 10 physicists working on one or more channels, and data for these channels should be cached by the institute server
  • Tier 4: workstations
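
A quick consistency check of these rates, as a minimal sketch; the 10⁷ live seconds per year used below is an assumed canonical figure, not stated on the slide:

    # Consistency check of the trigger rate, event size, and raw data volume quoted above.
    trigger_rate_hz = 100          # triggers (accepted events) per second
    event_size_mbyte = 1.0         # MBytes per raw event
    live_seconds_per_year = 1e7    # assumption: canonical LHC running time per year

    offline_rate = trigger_rate_hz * event_size_mbyte                  # 100 MBytes/sec into the offline farm
    raw_pbytes_per_year = offline_rate * live_seconds_per_year / 1e9   # MBytes -> PBytes
    print(f"{offline_rate:.0f} MB/s, {raw_pbytes_per_year:.1f} PB/year")  # 100 MB/s, 1.0 PB/year

This is in the same ballpark as the 1.3 PB/year of raw data quoted on the previous slide.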
4
A More Grid-like Model
The LHC Computing Facility
5
Features of the Cloud Model
  • All regional facilities have 1/3 of the full reconstructed data
  • Allows more data on disk / fast-access space, and saves tape
  • Multiple copies mean no need for tape backup
  • All regional facilities have all of the analysis data (AOD)
  • The resource broker can still keep jobs fairly local
  • Centres are Regional and NOT National
  • Physicists from other regions should also have access to the computing resources
  • Cost sharing is an issue
  • Implications for the Grid middleware on accounting (a sketch of such per-group accounting follows this list)
  • Between experiments
  • Between regions
  • Between analysis groups
  • Also, different activities will require different priorities
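
As an illustration of what per-experiment, per-region, per-group accounting could look like, here is a hypothetical sketch; the record keys, fair-share targets, and function names are invented for illustration and are not part of any existing Grid middleware:

    # Hypothetical accounting sketch: charge CPU time to (experiment, region, group)
    # and compare consumed shares against configured fair-share targets.
    from collections import defaultdict

    # Invented example fair-share targets (fractions of total CPU).
    targets = {("atlas", "uk", "higgs"): 0.05,
               ("atlas", "uk", "susy"): 0.03}

    usage = defaultdict(float)  # accumulated SI95-seconds per (experiment, region, group)

    def charge(experiment, region, group, si95_seconds):
        """Record consumed CPU, normalised in SI95-seconds, against the owning group."""
        usage[(experiment, region, group)] += si95_seconds

    def over_share(key, total_used):
        """True if a group has so far consumed more than its fair-share target."""
        return usage[key] / total_used > targets.get(key, 0.0)

    charge("atlas", "uk", "higgs", 3.6e5)
    charge("atlas", "uk", "susy", 1.2e5)
    total = sum(usage.values())
    print({key: over_share(key, total) for key in usage})

A resource broker could use shares of this kind both to enforce priorities between activities and to decide when to keep a job local rather than export it.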

6
Resource Estimates
7
Resource Estimates
  • Analysis resources?
  • 20 analysis groups
  • 20 jobs/group/day → 400 jobs/day
  • Sample size: 10⁸ events
  • 2.5 SI95·s/event → 10¹¹ SI95·s/day ≈ 1.2 × 10⁶ SI95 (worked through after this list)
  • Additional 20% for activities on smaller samples
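
The arithmetic behind the 1.2 × 10⁶ SI95 figure, worked through as a sketch; it assumes each analysis job runs over the full 10⁸-event sample, which is how the numbers on the slide combine:

    # Reproduce the analysis-CPU estimate from the bullet points above.
    groups = 20
    jobs_per_group_per_day = 20
    events_per_job = 1e8            # assumption: each job processes the full sample
    si95_seconds_per_event = 2.5
    seconds_per_day = 86_400

    jobs_per_day = groups * jobs_per_group_per_day                                  # 400 jobs/day
    si95_seconds_per_day = jobs_per_day * events_per_job * si95_seconds_per_event   # 1e11 SI95-s/day
    sustained_si95 = si95_seconds_per_day / seconds_per_day                         # ~1.2e6 SI95
    print(f"{jobs_per_day} jobs/day, {si95_seconds_per_day:.1e} SI95-s/day, {sustained_si95:.2e} SI95")

With the additional 20% for activities on smaller samples, the total comes to roughly 1.4 × 10⁶ SI95.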

8
Test Beds
  • EDG Test Bed 1
  • Common to all LHC experiments
  • Using/testing EDG test bed 1 release code
  • Already running boxed fast simulation and
    installed full simulation
  • US ATLAS Test Bed
  • Demonstrate success of grid computing model for
    HEP
  • in data production
  • in data access
  • in data analysis
  • Develop and deploy grid middleware and applications
  • wrap layers around apps
  • simplify deployment
  • Evolve into a fully functioning, scalable, distributed, tiered grid
  • NorduGrid
  • Developing a regional test bed
  • Light-weight Grid user interface, working prototypes, etc.

9
ATLAS Data Challenge
  • ATLAS Data Challenge 1 (DC1) runs from April 2002 until early 2003.
  • 39 institutes in 18 countries are participating in the data challenge, and the numbers are growing.
  • Between 9 and 900 processors are available at individual sites, with 3200 processors in total.
  • In total, about 50 million events were generated in the first phase of the data challenge.
  • An estimated 24 TBytes of data will be produced during DC1.
  • The produced data will be used for physics studies, thus providing the ATLAS community with a real gain.
  • The DC produces real physics data in a distributed computing environment.

10
The UK and ScotGRID
  • The UK has five major sites already participating
    (Birmingham, Lancaster, Liverpool, RAL,
    ScotGRID).
  • Many other sites are joining this year.
  • We already provide more computing power than
    CERN.
  • What ScotGRID is doing:
  • 200,000 minimum-bias events for pile-up studies.
  • 400 simulation jobs, 18 hrs and 450 MB each.
  • 41,561 pile-up events (more in the pipeline).
  • 100 jobs, 6 hrs and 1.5 GB each.
  • Files need to be registered with the ATLAS bookkeeping system and migrated to storage at Edinburgh/RAL/CERN.