
1
LHCb computing model and the planned exploitation
of the GRID
  • Eric van Herwijnen, Frank Harris
  • Monday, 17 July 2000

2
Overview
  • LHCb computing model and existing resources
  • EU GRID project participation
  • GRID: a unique opportunity to unify CPU resources
    in the collaborating institutes
  • Mini project
  • EU GRID project submitted
  • Paper by F. Harris at
    http://lhcb.cern.ch/computing/hoff-grid/compmodelandgrid.doc

3
LHCb computing model
  • RICH and Vertex Detector require detailed
    optimisation studies
  • High level triggers and physics studies require
    large amounts of simulated background and channel
    data
  • Simulation largely to be done outside CERN (2/3
    outside, 1/3 at CERN)
  • GRID to be used with the existing Fortran-based
    simulation program

4
Current resources for MC production
  • CERN PCSF, 60 queues running Windows NT
  • RAL, 30 queues running Windows NT
  • Liverpool, 300-PC farm running Linux
  • Short-term objective: move away from Windows NT
    to Linux and use the GRID

5
LHCb WP8 application (F. Harris)
  • MAP farm (300 CPUs) at Liverpool to generate 10^7
    events over 4 months.
  • Data volumes transferred between facilities
  • Liverpool to RAL 3TB (RAW,ESD,AOD,TAG)
  • RAL to Lyon/CERN 0.3TB (AOD and TAG)
  • Lyon to LHCb inst. 0.3TB (AOD and TAG)
  • RAL to LHCb inst. 100GB (ESD for sys. studies)
  • Physicists run jobs at a regional centre, or move
    AOD and TAG data to their local institute and run
    jobs there.
  • Also, copy ESD for 10% of events for systematic
    studies.
  • Formal production scheduled start-2002 to
    mid-2002 (EU schedule)
  • BUT we are pushing ahead to get experience so we
    can define project requirements
  • Aim for a production run by end 2000
  • On the basis of this experience, will give input
    on HEP application needs to the Middleware groups

6
GRID starting working group
  • Glenn Patrick
  • Chris Brew
  • Frank Harris
  • Ian McArthur
  • Nick Brook
  • Girish Patel
  • Themis Bowcock
  • Eric van Herwijnen
  • others to join from France, Italy, Netherlands
    etc.

7
Mini project (1)
  • Install Globus 1.1.3 at CERN, RAL and Liverpool
  • CERN installation completed on a single Linux
    node (pcrd25.cern.ch) running Red Hat 6.1; not
    yet available to the public
  • RAL installed but not yet tested
  • MAP being installed
  • Members to get access to respective sites
  • Timescale 1st week in August

8
Mini project (2)
  • Run SICBMC at CERN, RAL and Liverpool
  • Prepare a script using Globus commands to run
    sicbmc and copy the data back to the host from
    which the job was submitted
  • Other partners to test the script from their
    sites
  • Timescale middle of August
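The run-and-retrieve step could be scripted roughly as below. This is a sketch only: the Globus 1.x command names (`globus-job-run`, `globus-rcp`), the sicbmc binary path, its `-n` flag, and the output file layout are all assumptions to be replaced by the actual installation's conventions; the function just assembles the command lines.

```python
# Sketch of "run SICBMC remotely, copy the data back" as Globus
# command lines. Command names, paths and the -n flag are
# illustrative assumptions, not the actual LHCb setup.
def make_commands(site, nevents, outfile):
    run = ["globus-job-run", site, "/lhcb/bin/sicbmc", "-n", str(nevents)]
    copy = ["globus-rcp", f"{site}:/lhcb/data/{outfile}", f"./{outfile}"]
    return run, copy

run_cmd, copy_cmd = make_commands("pcrd25.cern.ch", 500, "mc_run001.dat")
print(" ".join(run_cmd))
print(" ".join(copy_cmd))
```

Partner sites would only change the `site` argument, which is the point of the exercise: one script, tested from every site.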

9
Mini project (3)
  • Verify data can be shipped back to CERN and
    written onto tape
  • Use globus commands
  • Some scripting required to use SHIFT and update
    bookkeeping database
  • Time scale end of August
  • Benchmark sustained data-transfer rates
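The benchmark matters because the sustained rate sets the wall-clock cost of the planned transfers. A quick sanity check on the 3 TB Liverpool-to-RAL sample (the rates shown are illustrative, not measurements; decimal units assumed):

```python
# Days needed to ship a sample at a given sustained rate.
def transfer_days(volume_tb, rate_mb_s):
    # TB -> MB (decimal), then seconds -> days
    return volume_tb * 1e6 / rate_mb_s / 86400

for rate in (1, 5, 10):   # MB/s, illustrative
    print(f"{rate:2d} MB/s -> {transfer_days(3.0, rate):5.1f} days")
```

At 1 MB/s sustained, the 3 TB sample alone takes over a month, so the measured rate directly constrains the production schedule.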

10
Integration of the GRID between CERN, RAL and MAP
(1)
  • Globus
  • toolkit has a C API (easy to integrate with
    Gaudi)
  • commands for remotely running scripts (or
    executables), recovering data from standard
    output, and saving/consulting metadata through
    LDAP
  • The gains
  • Everyone uses the same executable
  • Everyone uses the same scripts
  • Data is handled in a uniform way
  • Batch system (LSF, PBS, Condor) to be discussed
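The open batch-system question can be made concrete: the same job wrapper would need to target each system's submit command. A minimal sketch (basic queue-plus-script forms of `bsub` and `qsub`; Condor instead takes a submit-description file, so it would need an extra generation step not shown here):

```python
# Map one job description onto the batch systems under discussion.
def submit_command(system, queue, script):
    if system == "lsf":
        return ["bsub", "-q", queue, script]   # LSF
    if system == "pbs":
        return ["qsub", "-q", queue, script]   # PBS
    raise ValueError(f"unknown batch system: {system}")

print(" ".join(submit_command("lsf", "lhcb", "run_sicbmc.sh")))
```

Keeping this mapping in one place is what lets everyone use the same scripts regardless of the local batch system.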

11
Integration of the GRID between CERN, RAL and MAP
(2)
  • Explore the use of LDAP for bookkeeping database
  • An API in C would solve the current Oracle ->
    Gaudi interface problem
  • Simplification of DB updating by MC production
  • Everybody is heading in this direction
  • Oracle has an LDAP server product; someone
    should investigate
  • Java job submission tools should be modified to
    create Globus jobs. Timescale October (to be
    done in parallel with current NT production)
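To make the LDAP-bookkeeping idea concrete, a production job's record could be exported as an LDIF entry. The DN layout (`ou=mcprod,o=lhcb`) and attribute names (`site`, `nevents`, `dataset`) below are illustrative placeholders, not a real LHCb schema:

```python
# Sketch: one bookkeeping record rendered as an LDIF entry.
# DN layout and attribute names are illustrative assumptions.
def job_ldif(job_id, site, nevents, dataset):
    dn = f"cn={job_id},ou=mcprod,o=lhcb"
    attrs = [("cn", job_id), ("site", site),
             ("nevents", str(nevents)), ("dataset", dataset)]
    return "\n".join([f"dn: {dn}"] + [f"{k}: {v}" for k, v in attrs])

print(job_ldif("run001", "liverpool", 10000, "bb-inclusive"))
```

Because LDIF is plain text and LDAP has C APIs, the same record would be readable from the production scripts, the Java tools, and Gaudi alike.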

12
Integration of the GRID between CERN, RAL and MAP
(3)
  • RAL NT farm to be converted to Linux this autumn
  • MAP uses Linux
  • CERN have been asked to install Globus on the
    Linux batch service (LSF)
  • A full MC production run using Globus is aimed
    for in December

13
Extension to other institutes
  • Establish a GRID architecture
  • Intel PCs running Red Hat Linux 6.1
  • LSF, PBS or Condor for job scheduling
  • Globus 1.1.3 for managing the GRID
  • LDAP for our bookkeeping database
  • Java tools for connecting production and
    bookkeeping to the GRID