1
BaBar and the Grid
  • Roger Barlow
  • Dave Bailey, Chris Brew, Giuliano Castelli, James
    Werner, Fergus Wilson and Will Roethel
  • GridPP18 Glasgow
  • March 20th 2007

2
What we're doing
  • Monte Carlo
  • Skimming
  • Data Analysis

3
History
  • 2002
    - Pioneering BaBarGrid demonstrator
    - BaBar analysis software set up at the RAL Tier A centre. Successful
      displacement of physics analysis off-site; common fund rebate to PPARC
  • 2007
    - BaBarGrid still not in general use
    - PPARC reneges on the MoU disc/CPU allocation; RAL loses Tier A status
      and PPARC loses the rebate

4
BaBar Tier A news
  • IN2P3: The commitments of CCIN2P3 for 2007 (150 TB and 1500 CPU units)
    and 2008 (200 TB and 1500 CPU units) are confirmed. For both years the
    CPUs will be there at the end of June, and all workers will be made
    available to users during the summer; the disks will be available from
    mid-July and will need a couple of months to be fully deployed. We
    foresee four shutdowns per year, each about one day long; they will be
    announced well in advance. For 2007 the dates are March 20, June 12,
    September 18 and December 4. SL4: driven by the LHC.
  • GridKa: The situation for GridKa hasn't changed: 27 TB of disk and 100
    SLAC units of CPU in both 2007 and 2008. Hardware for 2007 is already in
    place, installed and currently running burn-in tests. The 2007 CPUs will
    be delivered on April 1st; the 2007 disk has to be configured and should
    be made available during April as well. For 2008 the current milestone
    is again April. SL4: new CPUs are already running SL4; other CPUs will
    be upgraded from SL3 when gLite is shown to work properly with SL4.
  • RAL: No new investment at the RAL Tier A for BaBar. Non-LHC experiments
    nominally get 5-10% of the overall computing resources (dominated by the
    LHC MoU), but RAL is currently going through a budget crisis. SL4: will
    be driven by CERN and the LHC; Tier 2s are likely to follow RAL's lead.
  • INFN: Padova has bought its 2007 hardware, some of it already delivered.
    CNAF disk is installed; CNAF CPU will be installed after their shutdown,
    which should be in May (subject to sign-off on safety aspects by the
    fire department etc.). For 2008 there is no formal decision: funding
    will no longer go directly to CNAF but via experimental budgets, so
    BaBar Italy can either pay from its budget to install hardware in Italy
    or pay the common fund to install at SLAC. SL4: Padova is a BaBar-only
    site so can change when we need; CNAF will follow the LHC.

5
Are we downhearted? No!
  • Reasons to be cheerful:
  • 1) Tier 2 centre at Manchester with 2000 CPUs and 500 TB. With a fair
    share of this we can really do things.
  • 2) Release 22 of the BaBar software is now out. A ROOT-based conditions
    database is installed, and the last use of Objectivity has finally been
    removed.

6
Monte Carlo (SP)
  • Tarball made of all programs and files (see the sketch after this list)
  • Runs at Manchester and RAL as a production system
  • >500 million events generated, processed and sent to SLAC
  • Will extend to more sites now that Objectivity is not required
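
As an illustration of the tarball approach, here is a minimal sketch of how such a self-contained bundle might be built. All file names and the directory layout are hypothetical placeholders, not BaBar's actual production layout:

```python
#!/usr/bin/env python
# Minimal sketch of bundling a Monte Carlo production into a single
# self-contained tarball, in the spirit of the SP setup described above.
# All paths and names here are hypothetical placeholders.
import tarfile

# Everything a worker node needs: executables, libraries, configuration.
inputs = [
    "bin/sp_generate",   # event generator executable (placeholder name)
    "bin/sp_simulate",   # detector simulation executable (placeholder)
    "lib/",              # shared libraries the binaries depend on
    "config/run.cfg",    # run-time configuration
]

with tarfile.open("sp-production.tar.gz", "w:gz") as tar:
    for path in inputs:
        tar.add(path)

# The tarball is shipped with each grid job and unpacked on the worker
# node, so no experiment software needs to be pre-installed there.
```

Because the job carries its own software, any site that can run the tarball can join production, which is what makes extending to more sites straightforward once the Objectivity dependency is gone.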

7
Skimming
  • BaBar analysis model

Skims:
  • 220 different skims (and growing), for different analysis selections
  • Some pointer skims, some deep copies (the distinction is sketched below)
  • AllEvents: 66 TB
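
The difference between the two skim types can be illustrated with a toy sketch. These are plain Python stand-ins; real skims are event collections in BaBar's event store, not lists:

```python
# Toy illustration of the two skim types described above.
# A "deep copy" skim duplicates the selected events; a "pointer" skim
# stores only references (here, indices) into the parent collection.

all_events = [{"id": i, "n_tracks": i % 7} for i in range(1000)]  # stand-in for AllEvents

def selection(event):
    """A hypothetical analysis selection."""
    return event["n_tracks"] >= 5

# Deep-copy skim: selected events are physically duplicated
# (costs disk, but reads are self-contained).
deep_copy_skim = [dict(ev) for ev in all_events if selection(ev)]

# Pointer skim: only indices into AllEvents are stored (cheap on disk,
# but every read must go back to the parent collection).
pointer_skim = [i for i, ev in enumerate(all_events) if selection(ev)]
resolved = [all_events[i] for i in pointer_skim]  # dereference when analysing
```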
8
Skimming details
  • Major computing load: CPU and I/O
  • Skimming 100K events takes 10 hours, and there are 10^9 events in
    AllEvents (see the estimate after this list)
  • BaBar is looking for resources outside SLAC
  • The skim process uses the TaskManager software (written and gridified
    by Will Roethel)
  • Tests at the RAL Tier 2 centre; production at the Manchester Tier 2
    (Chris Brew, Giuliano Castelli, Dave Bailey)
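
A back-of-envelope estimate, using only the numbers quoted above, shows why this load pushes BaBar to look for resources beyond SLAC:

```python
# Back-of-envelope estimate of the total skimming load quoted above.
events_total = 1e9      # events in AllEvents
events_per_job = 1e5    # 100K events per skim pass
hours_per_job = 10.0    # quoted skim time for 100K events

cpu_hours = events_total / events_per_job * hours_per_job
print(cpu_hours)              # 100000.0 CPU-hours in total
print(cpu_hours / 24 / 365)   # ~11.4 years on a single CPU
print(cpu_hours / 1000 / 24)  # ~4.2 days if spread over 1000 CPUs
```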

9
Skimming details
  • Set up a 2 TB xrootd server
  • Import data from SLAC (slow at 10 Mbit, but we're working on it)
  • Submit skim jobs to the Tier 2 using the Grid (a submission sketch
    follows this list)
  • Moving data between server and farm is fast (Gbit)
  • Skim files (1 GB/job) are sent to RAL for merging (will do this at
    Manchester in due course)
  • System running successfully; going into production
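
A minimal sketch of the per-run workflow just described: stage the input from the xrootd server, then submit the skim job via the Grid. The host names, paths and JDL contents are hypothetical, and the submission command assumes the EDG-era middleware of the period; the slides do not name the exact tools:

```python
#!/usr/bin/env python
# Sketch of the per-run skim workflow: stage input from the xrootd
# server, write a job description, submit to the Tier 2 via the grid.
# Host names, paths and the JDL contents are hypothetical placeholders.
import subprocess

run = "0001"

# 1. Stage the run's input collection from the local xrootd server.
subprocess.check_call([
    "xrdcp",
    "root://xrootd.example.ac.uk//babar/AllEvents/run%s.root" % run,
    "/scratch/run%s.root" % run,
])

# 2. Write a minimal job description for the skim executable.
jdl = """\
Executable    = "run_skim.sh";
Arguments     = "%s";
StdOutput     = "skim.out";
StdError      = "skim.err";
InputSandbox  = {"run_skim.sh"};
OutputSandbox = {"skim.out", "skim.err"};
""" % run
with open("skim_run%s.jdl" % run, "w") as f:
    f.write(jdl)

# 3. Submit to the grid (EDG-era command, later replaced by gLite WMS).
subprocess.check_call(["edg-job-submit", "skim_run%s.jdl" % run])
```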

10
EasyGrid: the job submission system that works!
James Cunha Werner
GridPP18 Meeting University of Glasgow
11

Available since GridPP11 (September 2004)
  • http://www.gridpp.ac.uk/gridpp11/babar_main.ppt
  • Several benchmarks with BaBar experiment data
  • Data gridification:
  • Particle identification: http://www.hep.man.ac.uk/u/jamwer/index.html06
  • Neutral pion decays: http://www.hep.man.ac.uk/u/jamwer/index.html07
  • Search for anti-deuteron: http://www.hep.man.ac.uk/u/jamwer/index.html08
  • Functional gridification:
  • Evolutionary neutral pion discriminant function:
    http://www.hep.man.ac.uk/u/jamwer/index.html13
  • Documentation (main web page):
  • http://www.hep.man.ac.uk/u/jamwer/
  • 109 HTML files and 327 complementary files
  • 60-CPU production and 10-CPU development farms ran independently
    without any problem between November 2005 and September 2006

12
Over a year ago
    Date: Thu, 22 Dec 2005 15:51:08 +0000
    From: Roger Barlow <roger.barlow@manchester.ac.uk>
    To: babar-users@lists.man.ac.uk
    Subject: [BABAR-USERS] Manchester babar

    Dear Manchester BaBarians,

    2 bits of good news.

    1) easyroot works. I have carefully idiot-proofed it, and if I can make
    it work then anyone can. Today it gives access to a small farm, meaning
    you can run several jobs in parallel and speed up your tauuser analysis
    by an order of magnitude. Soon we will enable the rest of the existing
    BaBar farm. And before long we have the 1000 node Dell farm.
    For brief instructions see http://www.hep.man.ac.uk/u/roger/easyroot.html
    For full instructions see http://www.hep.man.ac.uk/u/jamwer/rootsrc.html

    2) we have a new big disk, thanks to Sabah. 1.6 TB. We need to decide
    what to put on it (and what to call it.)

    Father Christmas has been busy...
    Roger

13
? mesons in ? decays
Source: Dr Marta Tavera
14
Physics Analysis on the Tier 2
  • Copied ntuples for a complete analysis to dCache
  • Run ROOT jobs using a minimal afs/gsiklog/vanilla Globus system
  • Struggling with dCache problems
  • Stress testing our dCache exposes weak points: files are distributed
    over 1000 nodes, and inevitably some nodes fail; the dCache catalogue
    doesn't know this, so jobs die (a pre-flight check is sketched after
    this list)
  • Progress is slow but positive
  • Will run the standard BaBar analysis (BetaApp) on data collections as
    the next step
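
Given that failure mode, one mitigation is to verify that each input file can actually be read out of dCache before a grid job is wasted on it. A minimal sketch, assuming the dccp copy client and an illustrative door host and paths (none of these names come from the slides):

```python
#!/usr/bin/env python
# Sketch of a pre-flight check motivated by the failure mode above:
# the dCache catalogue does not notice dead pool nodes, so try to
# actually read each input file before submitting a job that needs it.
# The door host, paths, and use of dccp are illustrative assumptions.
import subprocess

def readable(dcap_url):
    """Return True if the file can really be copied out of dCache."""
    # Copy to /dev/null: we only care whether the read succeeds.
    # (Simple but wasteful; it reads the whole file.)
    rc = subprocess.call(["dccp", dcap_url, "/dev/null"])
    return rc == 0

files = [
    "dcap://dcache.example.ac.uk/pnfs/example.ac.uk/data/babar/ntuple_%03d.root" % i
    for i in range(10)
]

good = [f for f in files if readable(f)]
bad = [f for f in files if f not in good]
print("%d readable, %d on dead/unreachable pools" % (len(good), len(bad)))
```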

15
Outlook
  • GridSP: in production; will extend to more sites
  • Grid skimming: ready to go
  • EasyGrid: works; for users it needs farms with BaBar data
  • BaBar data at Manchester Tier 2:
    - dCache being tested
    - xrootd now possible
    - plan to try slashgrid soon
    - ntuples today, full data files tomorrow

16
And finally
See you in Manchester for OGF20/EGEE and for the EPS conference, which has
a Detectors and Data Handling session. Now open for registration and
abstract submission.