Title: The ALICE Framework at GSI
1. The ALICE Framework at GSI
- Kilian Schwarz
- ALICE Meeting
- August 1, 2005
2. Overview
- ALICE framework
- What parts of the ALICE framework are installed where at GSI, and how they can be accessed/used
- ALICE Computing model (Tier architecture)
- Resource consumption of individual tasks
- Resources at GSI and GridKa
3. ALICE Framework
[Diagram: the AliRoot framework, built on ROOT and steered via STEER, with the Virtual MC interface to the transport engines G3, G4 and FLUKA; the event generators PYTHIA6, HIJING, ISAJET, MEVSIM, EVGEN and PDF; the analysis packages HBTAN, HBTP and RALICE; the detector modules ITS, TPC, TRD, TOF, PHOS, EMCAL, PMD, MUON, RICH, ZDC, FMD, START, CRT and STRUCT; and AliEn as the Grid interface. Figure: F. Carminati, CERN.]
4. Software installed at GSI: AliRoot
- Installed at /d/alice04/PPR/AliRoot
- Newest version: AliRoot v4-03-03
- Environment setup via
- > . gcc32login
- > . alilogin dev/new/pro/version-number
- gcc295-04 is not supported anymore
- the corresponding ROOT version is initialized, too
- Responsible person: Kilian Schwarz
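Once the environment is set up, AliRoot is typically driven from a ROOT/aliroot macro. A minimal, illustrative sketch only (it assumes a valid Config.C in the working directory and uses the AliSimulation/AliReconstruction steering classes of this AliRoot generation):

  // runSim.C -- simulate and reconstruct a few events (illustrative sketch)
  void runSim(Int_t nEvents = 1)
  {
    AliSimulation sim;        // picks up Config.C for generator, VMC and detectors
    sim.Run(nEvents);         // event generation, transport, digitization

    AliReconstruction rec;    // reconstruct the events just produced
    rec.Run();                // writes the ESD file (AliESDs.root)
  }

Such a macro would be run after alilogin, e.g. with aliroot -b -q runSim.C.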
5. Software installed at GSI: ROOT (AliRoot is heavily based on ROOT)
- Installed at /usr/local/pub/debian3.0/gcc323-00/rootmgr
- Newest version: 502-00
- Environment setup via
- > . gcc32login / alilogin or rootlogin
- Responsible persons: Joern Adamczewski / Kilian Schwarz
- See also http://www-w2k.gsi.de/root
6. Software installed at GSI: geant3 (needed for simulation, accessed via VMC; see the sketch below)
- Installed at /d/alice04/alisoft/PPR/geant3
- Newest version: v1-3
- Environment setup via gcc32login/alilogin
- Responsible person: Kilian Schwarz
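"Accessed via VMC" means the transport engine is selected in the simulation configuration macro by instantiating the corresponding Virtual MC implementation. A minimal sketch of the relevant Config.C fragment (library name and the example setting are the usual ones and may differ on this installation):

  // Config.C fragment: select Geant3 as the Virtual MC transport engine
  gSystem->Load("libgeant321");              // Geant3 plus its TGeant3/VMC interface
  new TGeant3("C++ Interface to Geant3");    // registers itself as the global gMC
  gMC->SetProcess("DCAY", 1);                // physics settings then go through the VMC interface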
7. Software at GSI: geant4/Fluka (simulation, accessed via VMC)
- Both so far not heavily used by ALICE
- Geant4 standalone versions up to G4.7.1
- Newest VMC version: geant4_vmc_1.3
- Fluka not installed so far by me
- Environment setup via
- > . gsisimlogin -vmc dev/new/prod/version
- See also http://www-linux.gsi.de/gsisim/g4vmc.html
- Responsible person: Kilian Schwarz
8. Software at GSI: event generators (task: simulation)
- Installed at /d/alice04/alisoft/PPR/evgen
- Available:
  - Pythia5
  - Pythia6
  - Venus
- Responsible person: Kilian Schwarz
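Inside AliRoot these generators are normally driven through the EVGEN classes in Config.C rather than standalone; a hedged sketch for Pythia6 (class and setter names as in typical AliRoot Config.C files, values purely illustrative):

  // Config.C fragment: minimum-bias Pythia6 as event generator (illustrative)
  AliGenPythia *gener = new AliGenPythia(-1);  // -1: multiplicity decided by Pythia
  gener->SetProcess(kPyMb);                    // minimum-bias process selection
  gener->SetEnergyCMS(14000.);                 // p-p at 14 TeV, example value
  gener->Init();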
9. Software at GSI: AliEn, the ALICE Grid Environment
- Currently being set up in version 2 (AliEn2)
- Installed at /u/aliprod/alien
- Idea: global production and analysis
- Environment setup via . .alienlogin
- Copy certs from /u/aliprod/.globus or register own certs
- Usage: /u/aliprod/bin/alien (proxy-init/login)
- Then register files and submit Grid jobs
- Or directly from ROOT!
- Status: the global AliEn2 production testbed is currently being set up
- Will be used for LCG SC3 in September
- Individual analysis of globally distributed Grid data at the latest during LCG SC4 (2006) via AliEn/LCG/PROOF
- Non-published analysis possible already now:
  - create an AliEn-ROOT collection (xml file readable via AliEn)
  - analyse via ROOT/PROOF (TFile::Open("alien://alice/cern.ch/production/"), see the sketch after this list)
  - a web frontend is being created via ROOT/QT
- Responsible person: Kilian Schwarz
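The "directly from ROOT" access goes through ROOT's Grid plugin; a minimal sketch of an interactive session (the file name below is a placeholder, not an existing LFN):

  // Open an AliEn-registered file from a ROOT session (illustrative sketch)
  TGrid::Connect("alien://");                       // authenticate against the AliEn services
  TFile *f = TFile::Open("alien://alice/cern.ch/production/somefile.root");  // placeholder LFN
  if (f && !f->IsZombie()) {
    TTree *tree = (TTree*)f->Get("esdTree");        // e.g. read the ESD tree and analyse it
  }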
10. AliEn2 services (see http://alien.cern.ch)
11. Software at GSI: Globus
- Installed at /usr/local/globus2.0 and /usr/local/grid/globus
- Versions: globus 2.0 and 2.4
- Idea: can be used to send batch jobs to GridKa (far more resources available than at GSI)
- Environment setup via . globuslogin
- Usage
- > grid-proxy-init (Grid certificate needed!)
- > globus-job-run/submit alice.fzk.de (Grid/batch job)
- Responsible persons: Victor Penso / Kilian Schwarz
12. GermanGrid CA
How to get a certificate in detail: see http://wiki.gsi.de/Grid/DigitalCertificates
13. Software at GSI: LCG
- Installed at /usr/local/grid/lcg
- Newest version: LCG 2.5
- Idea: global batch farm
- Environment setup via . lcglogin
- Usage
- > grid-proxy-init (Grid certificate needed!)
- > edg-job-submit <jdl-file> (batch/Grid job)
- See also http://wiki.gsi.de/Grid
- Responsible persons: Victor Penso, Anar Manafov, Kilian Schwarz
14. LCG, the LHC Grid Computing project (with ca. 11k CPUs the world's largest Grid testbed)
15. Software at GSI: PROOF
- Installed at /usr/local/pub/debian3.0/gcc323-00/rootmgr
- Newest version: ROOT 502-00
- Idea: parallel analysis of larger data sets for quick/interactive results
- A personal PROOF cluster at GSI, integrated into the batch farm, can be set up via
- > prooflogin <parameters> (e.g. number of slaves, data to be analysed, -h (help))
- See also http://wiki.gsi.de/Grid/TheParallelRootFacility
- Later a personal PROOF cluster spanning GSI and GridKa via Globus will be possible
- Later a global PROOF cluster via AliEn/D-Grid will be possible
- Responsible persons: Carsten Preuss, Robert Manteufel, Kilian Schwarz
16. Parallel Analysis of Event Data
[Diagram: a local PC running ROOT connects to a remote PROOF cluster (proof.conf listing slave node1 ... node4, each holding .root data files and receiving ana.C). The session steps shown are: root[0] tree->Process("ana.C") for local processing, root[1] gROOT->Proof("remote") to connect to the cluster, root[2] dset->Process("ana.C") to process the distributed data set on the slaves.]
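Put together as a session, the steps in the diagram look roughly like this (ROOT 5.02-era API; cluster name and file locations are placeholders):

  // Sketch of the PROOF session from the diagram above (placeholders throughout)
  gROOT->Proof("remote");                        // connect to the remote PROOF master
  TDSet *dset = new TDSet("TTree", "esdTree");   // describe the distributed data set
  dset->Add("root://node1//data/run1.root");     // placeholder file locations on the slaves
  dset->Add("root://node2//data/run2.root");
  dset->Process("ana.C");                        // run the analysis macro in parallel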
17. LHC Computing Model (Monarc and Cloud)
- One Tier 0 site at CERN for data taking; ALICE (Tier 0+1) in 2008: 500 TB disk (8%), 2 PB tape, 5.6 MSI2K (26%)
- Multiple Tier 1 sites for reconstruction and scheduled analysis: 3 PB disk (46%), 3.3 PB tape, 9.1 MSI2K (42%)
- Tier 2 sites for simulation and user analysis: 3 PB disk (46%), 7.2 MSI2K (33%)
18. ALICE Computing model in more detail
- T0 (CERN): long-term storage of raw data, calibration and first reconstruction
- T1 (5, in Germany GridKa): long-term storage of a second copy of the raw data, 2 subsequent reconstructions, scheduled analysis tasks, reconstruction of MC Pb-Pb data, long-term storage of data processed at T1s and T2s
- T2 (many, in Germany GSI): generation and reconstruction of simulated MC data, and chaotic analysis
- T0/T1/T2: short-term storage of multiple copies of active data
- T3 (many, in Germany Münster, Frankfurt, Heidelberg, GSI): chaotic analysis
19. CPU requirements and event size

                                        p-p (kSI2k·s/ev.)   Heavy Ion (kSI2k·s/ev.)
Reconstruction                                5.4                    68
Scheduled analysis                             15                   230
Chaotic analysis                              0.5                   7.5
Simulation (event creation and rec.)          350                 15000 (2-4 hours on a standard PC)

            Raw / MB   ESD / MB   AOD / MB   Raw MC / MB   ESD MC / MB
p-p              1       0.04      0.004         0.4          0.04
Heavy Ion     12.5        2.5       0.25         300           2.5
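As a rough cross-check of the quoted wall-clock time (assuming a "standard PC" of roughly 1.5-2 kSI2k): 15000 kSI2k·s per heavy-ion event divided by about 1.7 kSI2k gives around 9000 s, i.e. roughly 2.5 hours, consistent with the 2-4 hours stated in the table.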
20. ALICE Tier resources

                       Tier0   Tier1s   Tier2s   Total
CPU (MSI2k)              7.5     13.8     13.7    35.0
Disk (PB)                0.1      7.5      2.6    10.2
Tape (PB)                2.3      7.5       -      9.8
Bandwidth in (Gb/s)       10        2     0.01
Bandwidth out (Gb/s)     1.2     0.02      0.6
21. GridKa (1 of 5 T1s: IN2P3, CNAF, GridKa, NIKHEF, (RAL), Nordic, USA; effectively 5). Ramp-up: due to shorter runs and reduced luminosity at the beginning, the full resources are not needed immediately: 20% in 2007, 40% in 2008, 100% by end of 2008.

              Status 2005      2006   2007   2008   2009   Total 2009
CPU (kSI2k)   243                57    300    600   1800         3000
Disk (TB)     28 (50% used)      12    160    200    600         1000
Tape (TB)     56                 24    220    300    900         1500
22. GSI T3 (support for the 10 German ALICE members)

              Status 2005                                            2006   2007   2008   2009   Total 2009    T3
CPU (kSI2k)   64 Dual P4, 20 Dual P3 (80 Dual Opteron newly bought)    -      -    400    130    530 (800)    500
Disk (TB)     2.23 (0.3 free), 15 TB new                               -      -    200     30    230          100
Tape (TB)     190 (100 used)                                           -      -    500    500   1000            -
T3 = Münster, Frankfurt, Heidelberg, GSI