HPC Middleware on GRID


1
HPC Middleware on GRID: material for discussion
in WG5
  • GeoFEM/RIST
  • August 2nd, 2001, ACES/GEM at MHPCC
  • Kihei, Maui, Hawaii

2
Background
  • Various Types of HPC Platforms
  • MPP, VPP
  • PC Clusters, Distributed Parallel MPPs, SMP
    Clusters
  • 8-Way SMP, 16-Way SMP, 256-Way SMP
  • Power, HP-RISC, Alpha/Itanium, Pentium, Vector PE
  • Parallel/Single-PE Optimization is an Important
    Issue for Efficiency
  • Everyone knows this, but it is a big task,
    especially for application experts such as the
    geophysicists in the ACES community.
  • Machine-dependent optimization/tuning is required.
  • Simulation Methods such as FEM/FDM/BEM/LSM/DEM
    etc. have Typical Processes for Computation.
  • How about "Hiding" these Processes from Users?
    (see the sketch after this list)
  • code development becomes efficient, reliable,
    portable, and maintenance-free
  • the number of lines in the source codes is reduced
  • accelerates advancement of the applications
    (physics)
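
To make the "hiding" idea concrete, here is a minimal C sketch; the hpcmw_* names and the Vec type are hypothetical, invented for this example, and the function bodies are trivial stand-ins for machine-tuned kernels.

    /* Minimal sketch (hypothetical API): the application calls generic
       process names; the middleware supplies machine-tuned
       implementations underneath, so the application itself stays
       portable. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int n; double *val; } Vec;  /* heavily simplified */

    /* Typical FEM processes, one call each; tuning hides behind them. */
    static void hpcmw_assemble(Vec *b) {            /* matrix/RHS assembly */
        for (int i = 0; i < b->n; i++) b->val[i] = 1.0;
    }
    static void hpcmw_solve(const Vec *b, Vec *x) { /* linear solver */
        for (int i = 0; i < b->n; i++) x->val[i] = 0.5 * b->val[i];
    }

    int main(void) {
        int n = 4;
        Vec b = { n, malloc(n * sizeof(double)) };
        Vec x = { n, malloc(n * sizeof(double)) };
        hpcmw_assemble(&b);  /* middleware may vectorize/parallelize here */
        hpcmw_solve(&b, &x); /* e.g., a vendor-tuned solver on a vector PE */
        printf("x[0] = %f\n", x.val[0]);
        free(b.val); free(x.val);
        return 0;
    }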

3
Background (cont.)
  • Current GeoFEM provides this environment
  • limited to FEM
  • not necessarily perfect
  • GRID as the next-generation HPC infrastructure
  • Currently, middleware and protocols are being
    developed to provide a unified interface to
    various operating systems, computers, ultra-fast
    networks, and databases.
  • What is expected of GRID?
  • Meta-computing: simultaneous use of
    supercomputers around the world
  • Volunteer computing: efficient use of idle
    computers
  • Access Grid: research collaboration environment
  • Data-intensive computing: computation with
    large-scale data
  • Grid ASP: application services on the Web

4
Similar Research Groups
  • ALICE (ANL)
  • CCAforum (Common Component Architecture, DOE)
  • DOE/ASCI Distributed Computing Research Team
  • ESI (Equation Solver Interface Standards)
  • FEI (the Finite Element/Equation Solver Interface
    Specification)
  • ADR (Active Data Repository, NPACI)

5
Are they successful? It seems not.
  • Very Limited Targets and Processes
  • Mainly for Optimization of Linear Solvers
  • Where are the Interfaces between Applications and
    Libraries? (see the sketch after the diagram below)
  • Approach from Computer/Computational Science
    People
  • Not Really Easy to Use by Application People

[Diagram: the gap between Computer/Computational Science (linear solvers, numerical algorithms, parallel programming, optimization) and Applications (FEM, FDM, spectral methods, MD/MC, BEM)]
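
One way to read this slide's question: the boundary works best when the application hands over only the physics, while the library keeps full control of data layout and execution order. A minimal C sketch of that split, with all names hypothetical:

    /* Sketch of a clearer application/library boundary: the
       application owns only the physics (an element-level callback);
       assembly, storage format, and the solver stay inside the
       library, which is then free to optimize per machine. */
    #include <stdio.h>

    /* Application side: physics only. */
    static double element_stiffness(int e) { return 2.0 + e; }

    /* Library side: drives the computation, free to reorder/optimize. */
    static double lib_assemble_and_solve(int nelem, double (*elem)(int)) {
        double trace = 0.0;
        for (int e = 0; e < nelem; e++)
            trace += elem(e);      /* library controls loop and layout */
        return trace;              /* stand-in for a real solve */
    }

    int main(void) {
        printf("result = %f\n",
               lib_assemble_and_solve(8, element_stiffness));
        return 0;
    }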
6
Example of HPC Middleware (1): Simulation Methods
Include Some Typical Processes
[Figure: typical processes of O(N) ab initio MD]
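
A generic skeleton of this observation (not the specific O(N) ab initio MD method pictured): every MD time step repeats the same few stages, and each stage is a natural unit for the middleware to own and tune.

    /* Skeleton of the "typical processes" idea for MD: the stage
       names are generic, and the empty bodies stand in for real
       kernels the middleware would provide. */
    #include <stdio.h>

    static void build_neighbor_lists(void) { /* pair search */ }
    static void compute_forces(void)        { /* force evaluation */ }
    static void integrate(void)             { /* time integration */ }

    int main(void) {
        for (int step = 0; step < 10; step++) {
            build_neighbor_lists();  /* candidate for per-machine tuning */
            compute_forces();        /* usually the dominant cost */
            integrate();
        }
        puts("done");
        return 0;
    }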
7
Example of HPC Middleware (2): Individual Processes
Could Be Optimized for Various Types of MPP
Architectures
[Figure: O(N) ab initio MD]
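
A rough C sketch of per-architecture optimization behind a single interface; the TARGET_* macros are hypothetical, and each branch only hints at the kind of code a real middleware would supply.

    /* The same entry point compiles to different bodies depending on
       the build target (macro names are hypothetical). */
    #include <stdio.h>

    void daxpy(int n, double a, const double *x, double *y) {
    #if defined(TARGET_VECTOR)
        /* long, stride-1 loop: lets a vector PE pipeline fully */
        for (int i = 0; i < n; i++) y[i] += a * x[i];
    #elif defined(TARGET_SMP)
        /* shared-memory parallel version for 8/16-way SMP nodes */
        #pragma omp parallel for
        for (int i = 0; i < n; i++) y[i] += a * x[i];
    #else
        /* plain scalar fallback (e.g., a Pentium PC-cluster node) */
        for (int i = 0; i < n; i++) y[i] += a * x[i];
    #endif
    }

    int main(void) {
        double x[4] = {1, 1, 1, 1}, y[4] = {0, 0, 0, 0};
        daxpy(4, 2.0, x, y);
        printf("y[0] = %f\n", y[0]);
        return 0;
    }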
8
Example of HPC Middleware (3): Use Optimized
Libraries
[Figure: O(N) ab initio MD]
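
A sketch of binding the generic call to whichever tuned library exists on the machine; here a function pointer stands in for the dispatch, and dot_generic stands in for a vendor routine (all names hypothetical).

    /* The middleware binds a generic operation to the best available
       implementation; the real choice might be a vendor BLAS, a
       vector-PE kernel, etc. */
    #include <stdio.h>

    typedef double (*dot_fn)(int, const double *, const double *);

    static double dot_generic(int n, const double *x, const double *y) {
        double s = 0.0;
        for (int i = 0; i < n; i++) s += x[i] * y[i];
        return s;
    }
    /* On a real system this slot would point at the tuned routine. */
    static dot_fn dot = dot_generic;

    int main(void) {
        double x[3] = {1, 2, 3}, y[3] = {4, 5, 6};
        printf("dot = %f\n", dot(3, x, y));  /* prints 32.0 */
        return 0;
    }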
9
Example of HPC Middleware (4):
  • Optimized code is generated by a special
    language/compiler based on analysis data and H/W
    information.
  • The optimum algorithm can be adopted.
[Figure: O(N) ab initio MD with special compiler]
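
A toy illustration of code generation from hardware information: this program itself plays the role of the "special compiler" and prints a C kernel whose unroll factor would, in a real system, come from analysis data.

    /* Generate source code whose shape (here, just the unroll factor)
       is chosen from hardware information. Entirely illustrative. */
    #include <stdio.h>

    int main(void) {
        int unroll = 4;  /* would come from H/W analysis data */
        printf("void axpy(int n, double a, const double *x, double *y) {\n");
        printf("    int i;\n");
        printf("    for (i = 0; i + %d <= n; i += %d) {\n", unroll, unroll);
        for (int u = 0; u < unroll; u++)
            printf("        y[i+%d] += a * x[i+%d];\n", u, u);
        printf("    }\n");
        printf("    for (; i < n; i++) y[i] += a * x[i];\n");
        printf("}\n");
        return 0;
    }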
10
Example of HPC Middleware (5):
  • On network-connected H/W (meta-computing)
  • Optimized for each individual architecture
  • Optimum load balancing
[Figure: O(N) ab initio MD]
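
A small sketch of the load-balancing idea: distribute work units in proportion to each machine's measured performance. The performance numbers are made up for illustration.

    /* Split N work units across machines in proportion to measured
       performance; the remainder goes to the last machine so the
       total always matches. */
    #include <stdio.h>

    int main(void) {
        double perf[3] = { 400.0, 100.0, 25.0 };  /* GFLOPS per machine */
        int n_work = 1000, assigned = 0;
        double total = 0.0;
        for (int m = 0; m < 3; m++) total += perf[m];
        for (int m = 0; m < 3; m++) {
            int share = (int)(n_work * perf[m] / total);
            if (m == 2) share = n_work - assigned;  /* remainder to last */
            assigned += share;
            printf("machine %d: %d units\n", m, share);
        }
        return 0;
    }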
11
Example of HPC Middleware (6): Multi-Module
Coupling through the Platform
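
A minimal sketch of coupling through a platform rather than directly between modules; the coupler_put/coupler_get API is hypothetical, and the shared array stands in for the real coupling layer.

    /* Modules never talk to each other directly; they put/get fields
       through the coupling platform, which can then handle format
       conversion, interpolation, or network transfer. */
    #include <stdio.h>

    static double shared_field[4];  /* stands in for the platform */

    static void coupler_put(const double *v) {
        for (int i = 0; i < 4; i++) shared_field[i] = v[i];
    }
    static void coupler_get(double *v) {
        for (int i = 0; i < 4; i++) v[i] = shared_field[i];
    }

    /* Module A (e.g., one physics code) exports a boundary field. */
    static void module_a_step(void) {
        double t[4] = {1.0, 2.0, 3.0, 4.0};
        coupler_put(t);
    }
    /* Module B (e.g., a second physics code) imports it. */
    static void module_b_step(void) {
        double t[4];
        coupler_get(t);
        printf("module B received t[0] = %f\n", t[0]);
    }

    int main(void) {
        module_a_step();   /* exchange happens only via the platform */
        module_b_step();
        return 0;
    }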
12
PETAFLOPS on GRID from GeoFEM's Point of View
  • Why? When?
  • Datasets (mesh, observation, result) could be
    distributed.
  • Problem size could be too large for a single MPP
    system.
  • according to G. C. Fox, Σ(TOP500) is about 100
    TFLOPS now ...
  • Legion
  • Prof. Grimshaw (U. Virginia)
  • A Grid OS / global OS
  • Can handle MPPs connected through a network as one
    huge MPP (a "Super MPP")
  • Optimization on each individual architecture (H/W)
  • Load balancing according to machine performance
    and resource availability

13
PETAFLOPS on GRID (cont.)
  • GRID (OS) + HPC middleware/platform (MW/PF)
  • Environment for "Electronic Collaboration"

14
(No Transcript)
15
"Parallel" FEM Procedure