National Grid Infrastructure and Core Application

1
National Grid Infrastructure and Core Application
  • Dr. Sangsan Lee
  • Director of Supercomputing Center
  • KISTI, Korea
  • sslee@hpcnet.ne.kr

Grid Forum Korea 2001, Seoul, 25-26 October 2001
2
Grid Forum Korea (GFK) History
(Timeline, March 2001 into 2002, showing: GGF1, the NGrid Plan (MIC), the
GFK Committee, the Euro-Globus Workshop, GGF2, the Asia-Pacific Grid
Implementation Project (APEC TEL), ApGrid, GGF3, GFK 2001 and GFK 2002.)
3
Proposed Working Groups (19)
  • Collaborative Supercomputing Environment
  • CFD
  • HEP-Data
  • Grid Service
  • Nano Material Computing
  • Grid QoS
  • Post-Genomics
  • Grid Toolkit
  • Genomics
  • Grid Network Management
  • Non-equilibrium Statistical Physics / Biophysical Applications
  • Grid High Performance Networking
  • InetCompu
  • CC-Grid
  • Materials Simulations
  • Grid Security
  • Grid Resource/Job Management
  • Grid Contents
  • Seoul Grid Center
(The original diagram groups the working groups proposed before and after
Aug. 31.)
4
Participating Organizations and Participants in the GFK Mailing List (Oct. 2001)
  • 140 organizations: University 75, Industry 54, Research Institute 11
  • 415 participants: University 257, Industry 107, Research Institute 51
5
Main Roles of KISTI Supercomputing Center
  • National Grid center
  • Cooperative effort with HPC centers
  • Providing high performance supercomputing
    environments for scientists to deal with more
    challenging problems
  • Collaborative work with research institutes
  • Development and support of Data/Computational/Access
    Grid applications and core middleware technology
  • Grid technology transfer agent (management,
    coordination and dissemination)
  • Leading partner with GGF, APEC-TEL, PRAGMA and
    ApGrid, etc.

6
Why We Need the NGrid Program
  • To provide infrastructure and facilities for the
    next generation collaborative research in
  • - genomics and life science
  • - particle physics
  • - astronomy
  • - meteorology
  • - engineering design
  • - social sciences, and so on
  • To solve major challenges in the processing,
    communication and storage of very large volumes
    of data
  • To provide generic solutions for the needs of
    individual disciplines and applications

7
Current Status of Research Environment
(Diagram: the scientist working in a client-server model.)
8
Research Environment on NGrid
Middleware
Scientist
  • Data Grid
  • Computational Grid
  • Access Grid

Visualization at the Desktop
Global Collaboration of Scientists
9
NASA Information Power Grid (USA) (Citation: William Johnston, NASA Ames)
(Diagram: an aerospace application framework issues compute and data
management requests to Grid Services, which provide uniform access to the
distributed resources managed by the Information Power Grid.)
  • Simulation models: Wing Models (ARC), Stabilizer Models, Human Models,
    Airframe Models, Engine Models (GRC), Landing Gear Models (LaRC)
  • Data sources: West Coast TRACON/Center Data (Performance Data Analysis
    and Reporting System (PDARS), AvSP/ASMM, ARC), Atlanta Hartsfield
    International Airport (Surface Movement Advisor, AATT Project), NOAA
    Weather Database (ATL terminal area), Airport Digital Video (Remote
    Tower Sensor System)
10
BIRN: Developing the Grid Cyber-Architecture to Support the Biomedical
Informatics Research Network
  • BIRN Phase I (2001-2002): form a national-scale testbed and federate
    multi-scale neuroimaging data from centers with high-field MRI and
    advanced 3D microscopes
  • NIH/NCRR Centers for Biomedical Imaging and Computational Biology,
    UCSD-wide with the Medical School
  • Participating sites: UCSD, Harvard, Caltech, NPACI/SDSC, Cal-(IT)2,
    UCLA, Duke
  • Integrating cyber-infrastructure to link advanced imaging instruments,
    data-intensive computing, and multi-scale brain databases, accessed
    through wireless pads and a Web interface (the original diagram also
    labels the Surface Web and Deep Web)

(Citation: Dr. Paul Messina (Caltech), GGF3)
11
CERN Large Hadron Collider (LHC)
  • Raw data: 1 Petabyte/sec
  • Filtered: 100 MByte/sec, about 1 Petabyte/year (roughly 1 million
    CD-ROMs)
  • Data distributed through a tiered hierarchy, Tier 0 to Tier 4
12
GEODISE (UK): Grid Enabled Optimisation and Design Search for Engineering
  • Simon Cox: Grid/W3C technologies and high performance computing
  • Global Grid Forum Apps Working Group
  • Andy Keane: Director of the Rolls-Royce/BAE Systems University
    Technology Partnership in Design Search and Optimisation
  • Mike Giles: Director of the Rolls-Royce University Technology Centre
    for Computational Fluid Dynamics
  • Carole Goble: ontologies and DARPA Agent Markup Language (DAML) /
    Ontology Inference Language (OIL)
  • Nigel Shadbolt: Director of Advanced Knowledge Technologies (AKT) IRC

Partners: BAE Systems (engineering), Rolls-Royce (engineering), Fluent
(computational fluid dynamics), Microsoft (software/Web services), Intel
(hardware), Compusys (systems integration), Epistemics (knowledge
technologies), Condor (Grid middleware)
  • Industrial analysis codes
  • Applied to industrial problems - large scale CFD
    codes

(Citation: Dr. Paul Messina (Caltech), GGF3)
13
Grid Activities in Japan (Citation: Dr. Paul Messina (Caltech), GGF3)
  • Ninf (ETL/TIT)
  • Developing network-enabled servers
  • Collaborating with NetSolve, UTK
  • Grid RPC APM WG proposal (see the client sketch below)
  • Metacomputing (TACC/JAERI)
  • MPI for vector machines: PACX-MPI, STAMPI
  • Stuttgart, Manchester, Taiwan, Pittsburgh
  • Virtual Supercomputing Center
  • Deploying a portal for assembling supercomputer centers
  • Globus promotion?
  • Firewall-compliance extension
  • ApGrid
  • A regional testbed across the Pacific Rim
  • Resources
  • Data Reservoir: 300M JPY x 3 yrs
  • Ninf-g/Grid-RPC: 200M JPY
  • Networking infrastructure
  • SuperSINET, JGN (unknown)
  • GRID-like
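The Ninf work and the Grid RPC APM WG proposal are built around
network-enabled servers that a client calls much like a local library
routine. A rough client-side sketch follows; the function signatures
follow the GridRPC API as it was later standardized rather than the exact
2001 Ninf interface, and the server host "ninf.example.org" and remote
routine "sample/mmul" are hypothetical placeholders.

    /* GridRPC client sketch: bind a handle to a remote network-enabled
     * solver and invoke it synchronously.  Signatures follow the later
     * standardized GridRPC API; host and routine names are placeholders. */
    #include <stdio.h>
    #include <grpc.h>

    #define N 512

    int main(int argc, char *argv[])
    {
        static double a[N * N], b[N * N], c[N * N];
        grpc_function_handle_t handle;

        if (argc < 2 || grpc_initialize(argv[1]) != GRPC_NO_ERROR) {
            fprintf(stderr, "usage: %s <client config file>\n", argv[0]);
            return 1;
        }

        /* Bind the handle to a remote routine on a (hypothetical) server. */
        grpc_function_handle_init(&handle, "ninf.example.org", "sample/mmul");

        /* Synchronous call: arguments are shipped to the server, the
         * computation runs there, and the result array c[] comes back.
         * The argument list depends on the remote routine's interface. */
        if (grpc_call(&handle, N, a, b, c) != GRPC_NO_ERROR)
            fprintf(stderr, "remote call failed\n");

        grpc_function_handle_destruct(&handle);
        grpc_finalize();
        return 0;
    }

Asynchronous variants of the call make it possible to farm work out to
several servers at once, which is one of the usage patterns such
network-enabled server systems target.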
14
History of NGrid with KISTI
  • 1998-1999: Preceding research
  • Development of metacomputing technology on heterogeneous
    supercomputing resources (the result was published at the PDPTA 2000
    conference)
  • Development of a portable computational steering package, CSP
  • 2000: Initiation of Globus-related research and activities
  • Building a Globus-based metacomputing testbed between Seoul and
    Taejon using KOREN/KREONET2
  • 2001: Beginning of NGrid
  • KISTI Supercomputing Center proposed carrying out NGrid to MIC (Feb.)
  • MIC announced the beginning of NGrid as a 5-year (2002-2006) national
    program (May)

15
Grid Research Activities at KISTI (I)
  • Restructuring as a virtual metacomputing center
    with supercomputing resources (Cray T3E, IBM SP2,
    Compaq HPC GS systems, clusters and CAVE, etc)
  • Playing a main role in national Grid
  • Dispersed Top 10 supercomputers connected through
    KOREN/KREONet2(Seoul, Taejon, Chonju and Pusan,
    etc.)
  • Pilot projects for Grid
  • 1st pilot application development for Bio and CFD
  • Participation in Global Grid
  • KOREN/KREONet2 are connected to STAR TAP
  • Efforts to cooperate with NCSA, SDSC, TACC

16
Grid Research Activities at KISTI (II)
KISTI Virtual Metacomputing Environment
(Diagram: the HPC320, HPC160, GS320 and Cray T3E at KISTI, an IBM SP2 at
Pusan Dong-Myoung Univ., an IBM SP2 at Chonbuk N. Univ., a 110-CPU cluster
at Pusan N. Univ., an 8-CPU cluster at Soongsil/Chonan Univ., and 16-CPU
and 8-CPU clusters, linked by Gigabit Ethernet locally and by 45 Mbps and
155 Mbps wide-area circuits.)

Software
  • HPC320, SP2, Cluster: Globus 1.1.3, MPICH-G
  • GS320, T3E: Globus 1.1.4, MPICH-G2
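Because the testbed couples Globus with MPICH-G/MPICH-G2, an ordinary MPI
code can run across the machines above without source changes. A minimal
illustration, not taken from the project's code, in which each process
reports the host it was placed on:

    /* Minimal MPI program of the kind that runs unchanged under
     * MPICH-G/MPICH-G2 across the testbed machines: each process reports
     * which host it landed on.  Illustrative sketch only. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);              /* MPICH-G2 performs its Globus
                                                start-up inside MPI_Init */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(host, &len);

        printf("rank %d of %d running on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }

Under MPICH-G2 such a job is typically submitted with mpirun or globusrun
using an RSL request that names each site's Globus jobmanager and process
count.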
17
Grid Research Activities at KISTI (III)
Application: Computational Fluid Dynamics (TR01-0412-003)
(Diagram: test cases, a DFVLR axial fan and a full-body airplane, run with
Globus/MPICH-G over KREONet2 across the Chonbuk N. Univ. IBM SP2, the
Chonan/Soongsil Univ. cluster, the Pusan N. Univ. cluster, and the KISTI
Compaq GS320 in Taejon.)
18
Grid Research Activities at KISTI (IV)
Middleware: a job-allocation scheme (PDPTA 2000)
(Diagram: user windows dispatching work to solvers on the PEs of the Cray
T3E.)
19
Grid Research Activities at KISTI (V)
Metacomputing Web Interface
  1. Input parameters
  2. Operations
  3. Reporting
20
Asia-Pacific Grid Implementation Project (I)
Outline of the Project
  • Asia-Pacific Grid Implementation Project under APEC TEL
  • Grid application project survey and support (Bio, CFD, Meteo, etc. for
    the Asia-Pacific region)
  • Building and operation of an Asia-Pacific Grid NOC (using the APII
    Testbed, APAN, SingAREN, TANet, CERNET, etc.)
  • Grid-related R&D and standardization (Grid middleware, Grid browser,
    etc.)
  • Promotion of sharing high performance computers and large-scale
    research instruments (supercomputers, storage, etc. in the
    Asia-Pacific region)

21
Asia-Pacific Grid Implementation Project (II)
The Asia-Pacific Grid is
  • A resources Grid in the Asia-Pacific region
  • Supercomputers
  • Large-scale data storage
  • R&D networks
  • Etc.
  • An applications Grid in the Asia-Pacific region
  • AP Bio Grids
  • AP Nano Grids
  • AP CFD Grids
  • AP Meteo. Grids
  • AP HEP Grids
  • Etc.

22
Asia-Pacific Grid Implementation Project (III)
Potential Partners
  • USA: PNNL, SDSC, ANL
  • Canada: CRC, C3
  • Korea: KISTI
  • Japan: AIST, TACC, KEK
  • Malaysia: USM
  • Singapore: iHPC/Sun
  • Taiwan: NCHC
  • Australia: ANU/APAC, Monash Univ.
23
International Grid Network
(Diagram: the Korean sites Seoul, Taejon, Pusan and Chonju, connected to
the USA through KREONet2 and the APII Testbed (STAR TAP), to Europe
through TEIN (Trans-Eurasia Information Network), and to Japan through the
APII Testbed and KREONet2.)
24
International Collaboration on Grid
(Diagram: GFK's links with the Global Grid Forum, NCSA, PRAGMA, ApGrid and
the APEC TEL Asia-Pacific Grid Implementation Project, annotated with
collaboration, MOA, participation, support, endorsement and funding
relationships.)
25
Research on Tera-scale Linux Cluster (I)
Resource Plan
(Chart: architectures and systems deployed from 1993 to 2003, with each
architecture's share, user count, and users by application and affiliation.)
  • Vector (Cray C90, NEC SX4/5): 13%, 330 users; applications:
    Mechanics 60%, Atm./Env. 26%, Etc. 14%; affiliation: Industry 45%,
    Edu. 32%, Research 7%, Etc. 13%
  • SMP (Compaq SMP, IBM SP): 9%, 111 users; applications: Physics 59%,
    Chemistry 16%, Mechanics 13%, Etc. 12%; affiliation: Industry 2%,
    Edu. 87%, Research 10%, Etc. 0%
  • MPP (Cray T3E, Cluster): 78%, 283 users; applications: Mechanics 64%,
    Chemistry 12%, Atm. 9%, Etc. 15%; affiliation: Industry 5%, Edu. 79%,
    Research 15%, Etc. 0%
26
Research on Tera-scale Linux Cluster (II)
  • Teracluster development
  • Objective: replacement of the Cray T3E by the end of 2002
  • Yearly schedule
  • 1998: Tflops cluster concept design and planning
  • 1999: Building a test system (16 CPUs)
  • 2000: Application BMT on prototype systems (64 CPUs); target system
    design
  • 2001: Building the phase-1 system (128 CPUs); management tool
    development
  • 2002: Building/stabilizing the phase-2 system (256 CPUs)
  • 2003: Starting Tflops cluster (512 CPUs) public service

27
Research on Tera-scale Linux Cluster (III)
Year 2000: application-specific cluster design
  • Cluster benchmarking: DS10 (Alpha), UP2000 (Alpha) and Intel clusters;
    system characteristics, stability tests and optimization
  • Interconnect benchmarking: SCI, Myrinet and Fast Ethernet device
    characteristics, i.e. low latency and high bandwidth (see the
    ping-pong sketch below)
  • Application characteristics: structural analysis, CFD, physics,
    meteorology and chemistry; extendibility, competitiveness tests and
    optimization
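Interconnect latency and bandwidth of the kind compared above (SCI,
Myrinet, Fast Ethernet) are usually measured with a two-process ping-pong.
The following is a generic sketch of such a microbenchmark, not the
benchmark code used in the project:

    /* Two-process MPI ping-pong for comparing interconnect latency and
     * bandwidth (e.g. SCI, Myrinet, Fast Ethernet).  Generic sketch only;
     * run with at least two MPI processes. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        const int reps = 1000;
        int rank, bytes;
        MPI_Status st;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (bytes = 1; bytes <= (1 << 20); bytes *= 2) {
            char *buf = malloc(bytes);
            MPI_Barrier(MPI_COMM_WORLD);
            double t0 = MPI_Wtime();
            for (int i = 0; i < reps; i++) {
                if (rank == 0) {          /* send, then wait for the echo */
                    MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                    MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
                } else if (rank == 1) {   /* echo every message back */
                    MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
                    MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                }
            }
            double dt = MPI_Wtime() - t0;
            if (rank == 0)
                printf("%8d bytes  latency %.2f us  bandwidth %.2f MB/s\n",
                       bytes, dt / (2.0 * reps) * 1e6,
                       2.0 * reps * bytes / dt / 1e6);
            free(buf);
        }

        MPI_Finalize();
        return 0;
    }

Rank 0 reports the half round-trip time (latency) and the sustained
transfer rate for each message size, which is how the low-latency /
high-bandwidth comparison above would typically be produced.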

28
Research on Tera-scale Linux Cluster (IV)
Development Plan
29
Appendix: About KISTI Supercomputing Center
Supercomputer Resources

Supercomputers
  • GS320 (SMP): peak 46.8 Gflops, 32 GB memory, 32 CPUs
  • SX-5/5X (PVP): peak 80/160 Gflops, memory 128/128 (MMU), 8/16 CPUs
  • HPC320/HPC160 (SMP): peak 42.7/21.35 Gflops, 32/16 GB memory,
    32/16 CPUs
  • Cray T3E (MPP): peak 115 Gflops, 16 GB memory, 128 PEs
  • New system (IBM SMP cluster): peak 4-5 Tflops, installation Apr. 2002

Supporting equipment
  • Application server: SGI Origin 2000 (4 CPUs)
  • SV server: SGI Onyx 3400 (20 CPUs)
  • VR system: CAVE
30
Thank you for your kind attention. If you want more information, contact
Grid Forum Korea (http://www.gridforumkorea.org) or send e-mail to
sslee@hpcnet.ne.kr