Title: Grid for CBM
1. Grid for CBM
2. What is Grid?
- Sharing of distributed resources within one Virtual Organisation
3. LHC scientists worldwide
Europe: 267 institutes, 4603 users; other regions: 208 institutes, 1632 users
4. Start of CBM Grid
- There are considerations to start a CBM Grid
- Task: distributed MC production
- Potential sites: 3 (Bergen, Dubna, GSI)
- After positive experiences the Grid can be enlarged to more sites and tasks, like distributed analysis
5. Requirements
- Globus-style X.509 user certificates
- issued for CBM by the GermanGrid CA
- http://www.gridka.de
- How to get a certificate? (see the sketch after this list)
- at GSI: > . globuslogin
- > grid-cert-request -cn <surname> <name>
- certificate request file and private key will be stored in $HOME/.globus
- The request file has to be signed (openssl) by the responsible CA person and mailed to the GermanGrid CA
- The certificate will be mailed back via e-mail
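A minimal sketch of the request session at GSI as described above; the globuslogin script is GSI-specific, and the exact file names written to $HOME/.globus may vary with the Globus version:

  # source the Globus environment (GSI-specific login script)
  . globuslogin
  # generate a private key plus certificate request for the given common name
  grid-cert-request -cn "<surname> <name>"
  # request file (usercert_request.pem) and private key (userkey.pem) end up here
  ls $HOME/.globus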
6. GermanGrid CA
How to get a certificate in detail: see http://wiki.gsi.de/Grid/DigitalCertificates
7. Requirements: CBM VO server (one per VO)
- additional sites: Bergen, Dubna
- additional users: to be added
8. Globus/LCG: creation of a grid-mapfile is necessary for each site
- e.g. with the gLite security tools
- adjust $GLITE_LOCATION/etc/glite-mkgridmap.conf
- add group ldap://glite001.gsi.de:8389/o=cbm,dc=de,dc=de
- create the grid-mapfile (see the sketch below):
- $GLITE_LOCATION/sbin/glite-mkgridmap --output=/etc/grid-security/grid-mapfile
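A hedged sketch of the two steps; the LDAP URL is taken from the slide, while the target account name .cbm in the mapping is only an assumption:

  # excerpt of $GLITE_LOCATION/etc/glite-mkgridmap.conf:
  # map every member of the CBM VO group to the (assumed) pool account prefix .cbm
  group ldap://glite001.gsi.de:8389/o=cbm,dc=de,dc=de .cbm

  # then regenerate the grid-mapfile from the VO LDAP server
  $GLITE_LOCATION/sbin/glite-mkgridmap --output=/etc/grid-security/grid-mapfile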
9. User creation on each site (support of the CBM VO)
- Each site has to create cbm user IDs onto which the Grid users will be mapped (example mappings below)
- EGEE/LCG: a certain number of pool accounts, e.g. cbmvo00 ... cbmvo10
- Globus, AliEn: one production user; via this user ID the jobs will be submitted, e.g. cbmprod
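For illustration, hedged grid-mapfile entries matching the two schemes (the DNs are invented; only the account names come from the slide):

  # EGEE/LCG: the leading dot maps the DN to the next free pool account (cbmvo00 ... cbmvo10)
  "/O=GermanGrid/OU=GSI/CN=Jane Doe" .cbmvo
  # Globus, AliEn: static mapping of all production jobs to one user
  "/O=GermanGrid/OU=GSI/CN=John Doe" cbmprod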
10. CBM software environment
- To be able to send real CBM jobs to the Grid, the participating sites have to
- install the CBM software and prepare the environment
- or the job has to bring its own environment (static links), as sketched below
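A hedged sketch of the second option: a self-contained wrapper shipped with the job, so that no CBM installation is needed at the site (all file names are illustrative):

  #!/bin/sh
  # unpack the statically linked CBM binaries carried in the job's input sandbox
  tar xzf cbm_static.tar.gz
  # run the simulation; no site-installed CBM environment is required
  ./cbm_sim_static sim_config.mac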
11. Agreement on common Grid middleware
- basically, the possibilities are
- Globus
- NorduGrid
- LCG-2
- AliEn
- gLite (EGEE)
- gLite (AliEn)
12. LHC Computing Grid Project
Fundamental goal of the LCG: to help the experiments' computing projects
Phase 1 (2002-05): prepare and deploy the environment for LHC computing
Phase 2 (2006-08): acquire, build and operate the LHC computing service
- SC2: Software and Computing Committee
- SC2 includes the four experiments and the Tier 1 Regional Centres
- SC2 identifies common solutions and sets requirements for the project
- PEB: Project Execution Board
- PEB manages the implementation
- organising projects, work packages
- coordinating between the Regional Centres
13. EDG Middleware Architecture
[Diagram: layered architecture - applications on top, Grid middleware (M/W) below, built on Globus/Condor-G (via VDT), with the fabric layer at the bottom, spanning local computing and the Grid]
14. Dubna (JINR) LCG-2 site
15. Dubna (JINR) LCG-2 site: LCG test mostly successful
16. JINR (LCG-2 site: job submit)
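The slide title suggests a submission session; for orientation, a typical LCG-2 round trip with the EDG user-interface commands looks roughly like this (the JDL file name is invented):

  # submit the job described in cbm_sim.jdl; the resource broker returns a job ID
  edg-job-submit cbm_sim.jdl
  # poll the status and, once done, retrieve the output sandbox
  edg-job-status <job-id>
  edg-job-get-output <job-id>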
17. Timeline
[Timeline 2001-2005: from first production (distributed simulation) to the 10% DC (analysis)]
- After only 2 years of development, we have deployed a distributed computing environment which meets the needs of the ALICE experiment
- Simulation, Reconstruction
- Event mixing
- Analysis
- Using Open Source components (representing 99% of the code), internet standards (SOAP, XML, PKI) and a scripting language (Perl) was the key element that allowed quick prototyping and very fast development cycles
P. Buncic, CERN
18. Building AliEn
P. Saiz, CERN
19. AliEn Grid (ALICE VO)
- 77 configured sites worldwide
20. DC Monitoring: http://alien.cern.ch
- MonALISA: http://aliens3.cern.ch:8080
21. lxts05.gsi.de: AliEn client (PANDA VO)
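For orientation, a hedged sketch of an AliEn client session (commands from the AliEn 1.x era; the exact syntax depends on the installed version, and the catalogue path is invented):

  # authenticate with the Grid certificate and enter the AliEn shell
  alien login
  # inside the shell: browse the virtual file catalogue and submit a job
  ls /panda/user
  submit sim.jdl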
22. JINR and Bergen AliEn sites
23. JINR and Bergen AliEn sites
24. Grids and Open Standards
[Chart: increased functionality and standardization of Grid middleware over time]
25. Architecture Guiding Principles
- Lightweight (existing) services
- Easily and quickly deployable
- Use existing services where possible as basis for re-engineering
- Interoperability
- Allow for multiple implementations
- Resilience and Fault Tolerance
- Co-existence with deployed infrastructure
- Run as an application (e.g. on LCG-2, Grid3)
- Reduce requirements on site components
- Basically Globus and SRM
- Co-existence (and convergence) with LCG-2 and Grid3 is essential for the EGEE Grid service
- Service-oriented approach
- WSRF still being standardized
- No mature WSRF implementations exist to date and there is no clear picture about the impact of WSRF, hence start with plain WS
- WSRF compliance is not an immediate goal, but we follow the WSRF evolution
26. Approach
- Exploit experience and components from existing projects
- AliEn, VDT, EDG, LCG, and others
- Design team works out architecture and design
- Architecture: https://edms.cern.ch/document/476451
- Design: https://edms.cern.ch/document/487871/
- Components are initially deployed on a prototype infrastructure
- Small scale (CERN, Univ. Wisconsin)
- Get user feedback on service semantics and interfaces
- After internal integration and testing, components are delivered to SA1 and deployed on the pre-production service
27. gLite (AliEn)
- From now on used by ALICE for globally distributed analysis in connection with PROOF
- at GSI: http://www-w2k.gsi.de/root/ -> "PROOF at GSI"
28. gLite (EGEE)
- Will replace LCG-2.X in the (near?) future, but nobody has real experience with it yet
29. Summary (middlewares)
- LCG-2: GSI and Dubna
- pro: large distribution, support
- contra: difficult to set up, no distributed analysis
- AliEn: GSI, Dubna, Bergen
- pro: in production since 2001
- contra: uncertain future, no support
- Globus 2: GSI, Dubna, Bergen?
- pro/contra: simple, but functioning (no RB, no FC, no support)
- gLite/GT4: new on the market
- pro/contra: nobody has production experience (gLite)
30. lxg01-05.gsi.de
- LCG test installation, visible in the LCG pre-production testbed
- Trying to port LCG to Debian Linux