1
LCG Deployment in Japan
  • Hiroshi Sakamoto
  • ICEPP, Univ. of Tokyo

2
Contents
  • Present status of LCG deployment
  • LCG Tier2
  • Certification authority
  • Implementation
  • Recent topics
  • KEK-ICEPP joint RD program
  • Network
  • Upgrade of resources
  • Future plan

3
LCG in Japan
  • Tier2 center at ICEPP, U. Tokyo
  • Decision made in October 2004
  • Manpower considerations
  • A few dedicated people
  • Including engineers and outsourcing
  • Contribution to LHC/ATLAS
  • The size of the community: ~4% of ATLAS
  • Want to contribute more

4
Japanese CA for HENP
  • KEK-CA is ready for operation
  • Japanese HENP community ≈ KEK users
  • LHC ATLAS
  • KEK-B Belle
  • J-PARC (50 GeV PS at Tokai)
  • RHIC PHENIX (RIKEN)
  • CP/CPS prepared
  • Discussion between KEK and ICEPP
  • To be submitted to the EU Grid PMA
  • Or to the AP Grid PMA?
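The trust chain a CA such as KEK-CA provides can be illustrated with a toy example: issue a user certificate from a self-signed CA and verify it with OpenSSL. All file names and the DN fields here are hypothetical, not actual KEK-CA artifacts.

```shell
# Toy CA key + self-signed CA certificate (names are placeholders)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out kek-ca.pem \
  -subj "/C=JP/O=KEK/CN=KEK Grid CA" -days 365

# User key pair and certificate signing request
openssl req -newkey rsa:2048 -nodes -keyout user.key -out user.csr \
  -subj "/C=JP/O=KEK/OU=ICEPP/CN=Example User"

# CA signs the request, producing the user certificate
openssl x509 -req -in user.csr -CA kek-ca.pem -CAkey ca.key \
  -CAcreateserial -out usercert.pem -days 180

# Relying parties verify the chain against the CA certificate
openssl verify -CAfile kek-ca.pem usercert.pem   # prints "usercert.pem: OK"
```

In production the CP/CPS governs how such signing requests are vetted; only the cryptographic mechanics are shown here.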

5
(No Transcript)
6
TOKYO-LCG2 cluster
  • LCG-2 cluster at U. Tokyo
  • 52 Worker Nodes
  • Upgraded from LCG-2_1_1 to LCG-2_3_1 last week
  • With YAIM
  • Red Hat 7.3 (to be replaced by Scientific Linux)
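A YAIM-driven upgrade like this one is configured through a single site-info.def file of shell variables. A minimal sketch, using the TOKYO-LCG2 host names from the cluster diagram; the variable names follow LCG-2 YAIM conventions, and the command paths in the comments are the usual LCG-2 locations, not confirmed from the slides:

```shell
# Fragment of a hypothetical site-info.def for TOKYO-LCG2
MY_DOMAIN=icepp.jp
CE_HOST=dgce0.icepp.jp
SE_HOST=dgse0.icepp.jp
RB_HOST=dgrb0.icepp.jp
BDII_HOST=dgbdii0.icepp.jp
WN_LIST=/opt/lcg/yaim/wn-list.conf   # one worker node per line

# A node upgrade then amounts to (typical LCG-2_3_x layout):
#   /opt/lcg/yaim/scripts/install_node   site-info.def lcg-WN
#   /opt/lcg/yaim/scripts/configure_node site-info.def WN
```

The same file drives all node types, which is what makes a 52-node upgrade tractable for a small team.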

7
PC Farm
  • HP ProLiant BL20p
  • Xeon 2.8GHz 2CPU/node
  • 1GB memory
  • SCSI 36GBx2, hardware RAID1
  • 3 GbE NIC
  • iLO remote administration tool
  • 8 blades / enclosure (6U)
  • Total 108 blades (216 CPUs) in 3 racks

8
TOKYO-LCG2 layout (flattened diagram)
  • Service nodes: Gateway (dggw0.icepp.jp), CE (dgce0.icepp.jp), SE
    (dgse0.icepp.jp), RB (dgrb0.icepp.jp), BDII (dgbdii0.icepp.jp),
    PXY (dgpxy0.icepp.jp), UI (dgui0.icepp.jp)
  • 52 WNs (hpbwn7-1 … hpbwn13-8): HP Blade BL20P G2, dual Xeon
    2.8GHz, 1GB memory (plan to 2GB), GbE NIC
  • Networks: campus 133.11.24.0/23, private 172.17.0.0/24
  • NFS server (dgnas0.icepp.jp): DELL 1750, dual Xeon 2.8GHz, 2GB
    memory, IDE-FC RAID (Infortrend controller, 250GB HDDs), FC switch
  • Storage: /storage and /home volumes, 1.75TB each
    (1.75TB x 20 = 35TB)
9
KEK-ICEPP joint RD
  • Testbed cluster_at_u-tokyo
  • 1 Worker Node
  • LCG-2_4_0 with VOMS
  • Simple CA for testbed user
  • Scientific Linux with autorpm
  • Testbed cluster_at_KEK
  • Computing Research Center

10
KEK LCG2
KEK LCG2 layout (flattened diagram)
  • Service nodes: UI, Proxy, LCFGng (remaining), BDII-LCG2, RB,
    Classic SE
  • CE (Site GIIS), managed by LSF: 20 WNs, IBM eServer 326, Opteron
    2.4GHz, 4096MB (AMD Opteron-based Linux system, under integration)
  • CE (Site GIIS), managed by PBS: test WNs, IBM eServer xSeries,
    Pentium III 1.3GHz, 256MB RAM
11
(No Transcript)
12
RD Menu
  • Stand-alone grid connecting two clusters
  • 1Gbps dedicated connection between KEK and ICEPP
    (SuperSINET)
  • Exercises to understand the LCG middleware
  • Special interests
  • SRB
  • Grid Datafarm (Osamu Tatebe, AIST)

13
Network
  • Dedicated peer-to-peer 1Gbps link between CERN and ICEPP
  • Sustained data transfer study
  • 10Gbps to US and EU
  • Still thin, but improving connection among
    Asia/Pacific countries
  • JP-TW to 1Gbps very soon.
  • JP-HK, JP-CN
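As a sanity check on what a dedicated 1 Gbps link buys for sustained-transfer studies, a back-of-the-envelope calculation; the 80% link-efficiency factor is an assumption for illustration, not a figure from the slides:

```shell
# TB/day movable over a 1 Gbps link at an assumed 80% efficiency:
#   1e9 bit/s / 8 = 1.25e8 B/s; x0.8 x 86400 s = 8.64e12 B
daily_tb=$(awk 'BEGIN { printf "%.2f", 1e9/8 * 0.8 * 86400 / 1e12 }')
echo "$daily_tb TB/day"   # prints "8.64 TB/day"
```

So even a fully used 1 Gbps path moves under 9 TB/day, which is why the 10 Gbps routes to the US and EU matter for production-scale data movement.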

14
(No Transcript)
15
(No Transcript)
16
PC Farm Upgrade
  • IBM BladeCenter HS20
  • Xeon 3.6GHz 2CPU/node
  • EM64T 2GB memory
  • SCSI 36GBx2, hardware RAID1
  • 2 GbE NIC
  • Integrated System Management Processor
  • 14 blades / enclosure (7U)
  • Total 150 blades (300 CPUs) in 2 racks, plus 1 rack for console
    and network switches

17
(No Transcript)
18
  • Foundry BigIron MG8
  • two 4x10GbE modules
  • four 60xGbE modules
  • Disk Array
  • 16x250GB SATA HDDs, 2 FibreChannel I/Fs
  • 27 cabinets in total

19
(No Transcript)
20
Future plan
  • LCG Memorandum of Understanding
  • To be signed in JFY2005
  • University of Tokyo as the funding body
  • LCG Tier2 Resources
  • More resources to be added to our testbed
  • Approved for JFY2005
  • LCG SC4 ATLAS DC3 in 2006
  • Production system
  • Budget request submitted for JFY2006
  • Expected to become operational in Jan. 2007

21
(No Transcript)