Pacific Rim Application and Grid Middleware Assembly: PRAGMA

Transcript and Presenter's Notes

1
Pacific Rim Application and Grid Middleware Assembly: PRAGMA
A community building collaborations and advancing grid-based applications
Peter Arzberger and Philip Papadopoulos, University of California, San Diego
APAN, 22 January 2003
http://www.pragma-grid.org
2
Acknowledgements
  • Chair: Satoshi Sekiguchi, AIST
  • Co-Chair: David Abramson, Monash University
  • Program Chair: Hiroshi Takemiya
  • Co-Host: Shinji Shimojo, CMC, Osaka University
  • Tomomi Takao
  • Susumu Date
  • Many Others

3
Founding Motivations
4
Components of Cyberinfrastructure-enabled Science and Engineering
A broad, systemic, strategic conceptualization
[Diagram: high-performance computing for modeling, simulation, and data
processing/mining; instruments for observation and characterization;
facilities for activation, manipulation, and construction; and knowledge
management institutions for collection building and curation of data,
information, literature, and digital objects; all linked by global
connectivity to humans (individuals, groups, collaborations), interfaces,
visualization, services, and the physical world.]
Source: Dan Atkins
Grid implies a global (international) system for collaboration
5
(No Transcript)
6
Overarching Goals
Establish sustained collaborations, and advance the use of grid technologies
for applications, among a community of investigators working with leading
institutions around the Pacific Rim.
Work closely with established activities that promote grid activities or the
underlying infrastructure, both in the Pacific Rim and globally.
7
The website is a unique collaboration among webmasters from different
countries: Japan, Korea, Singapore, US
http://www.pragma-grid.org
8
Steering Committee
  • John O'Callahan, Bernard Pailthorpe, David Abramson (APAC)
  • Larry Ang (BII)
  • Yan Baoping, Nan Kai (CAS/CNIC)
  • Satoshi Matsuoka (Titech/GSIC)
  • Satoshi Sekiguchi, Yoshio Tanaka (AIST)
  • Sangsan Lee, Jysoo Lee (KISTI)
  • Whey-Fone Tsai, Fang-Pang Lin (NCHC)
  • Shinji Shimojo (Osaka University/CMC)
  • Royol Chitradon, Joe Piyawut (NECTEC)
  • Maxine Brown (STAR TAP)
  • Rick McMullen, Jim Williams (Indiana University/TransPAC)
  • Habibah Wahab (U Sains Malaysia)
  • Philip Papadopoulos, Peter Arzberger (UCSD/SDSC/Cal-(IT)2/CRBS)

9
Activities
  • Encourage and conduct joint (multilateral)
    projects that promote development of grid
    facilities and technologies
  • Share resources to ensure project success
  • Conduct multi-site training
  • Exchange researchers
  • Meet and communicate regularly
  • Collaborate with and participate in major
    regional and international activities such as
    APAN, APGrid, GGF, APEC TEL
  • Disseminate and promote knowledge of using the
    grid among domain experts and scientists
  • Provide resources to help PRAGMA members raise the level of awareness
    of, and funding for, grid activities

10
Schedule of Meetings
  • PRAGMA 1: 11-12 March 2002, San Diego, CA, USA
    (with the NPACI All Hands Meeting, 7-8 March)
    Chair: Philip Papadopoulos (UCSD/SDSC/Cal-(IT)2/CRBS); Vice-Chair: Sangsan Lee
  • PRAGMA 2: 10-11 July 2002, Seoul, Korea
    (with GFK, 12 July)
    Chair: Sangsan Lee (KISTI); Co-Chair: Yoshio Tanaka
  • PRAGMA 3: 23-24 January 2003, Fukuoka, Japan
    (with APAN, 22-23 Jan 2003)
    Chair: Satoshi Sekiguchi (AIST); Co-Chair: David Abramson
  • PRAGMA 4: 4-5 June 2003, Melbourne, Australia
    (with ICCS 2003, 3-4 June)
    Chair: David Abramson (APAC); Co-Chair: Fang-Pang Lin
  • PRAGMA 5: October 2003, Hsinchu/Fushan, Taiwan
    (with the Taiwan Grid Meeting)
    Chair: Fang-Pang Lin (NCHC); Co-Chair: TBD

11
Computer Network Information Center, Chinese Academy of Sciences:
Scientific Data Grid Project Overview
  • Targets: platform, technologies, and applications to improve
    data-resource sharing and collaboration among scientists
  • Built on the Scientific Database Project
  • Funded by the Tenth Five-Year Program of CAS (2001-2005)
  • Would be a part of the PRAGMA Testbed

Source: Nan Kai, 2nd PRAGMA Workshop
12
National Electronics and Computer Technology Center
  • GRID computing resources
    • Thai GRID - www.thaigrid.org
    • GRID computing resource at NECTEC
    • Spatial Data GRID
    • Natural Resource Data GRID
    • BioInformatics Data GRID
  • Focus areas
    • Tools to integrate and represent or visualize data sets
    • Data GRID to fill in some missing data
    • Modelling and computing
    • Web services

Source: Royol Chitradon, 2nd PRAGMA Workshop
13
Universiti Sains Malaysia: Compute Power Market / P2P and e-Science Grid
Source: Chan Huah Yong, 2nd PRAGMA Workshop
14
Australian Partnership for Advanced Computing
GrangeNet: GRid And Next GEneration NETwork
Source: Bernard Pailthorpe, 2nd PRAGMA Workshop
15
New Applications: Korea and NGrid
  • Biogrid systems in Korea: Kyoung Tai No, BioGrid Working Group, GFK
  • CFD Grid (Numerical Wind Tunnel): Jang Hyuk Kwon, KAIST
  • National Instrument Grid and Collaboratory System at KBSI: Kyung-Hoon
    Kwon, KBSI

Source: 2nd PRAGMA Workshop
16
Telescience with Ultra-High Voltage Electron Microscopy
  • World's largest (3 MV) ultra-high voltage electron microscope, at Osaka
    University
  • Integrated into the Telescience Portal
  • International collaboration within PRAGMA: CMC, Osaka University (Shinji
    Shimojo); UCSD/SDSC (Mark Ellisman); and the National Center for
    High-Performance Computing (NCHC) (Fang-Pang Lin)
  • Network challenges
    • High bandwidth over IPv6
    • HDTV over IPv6

Source: Shinji Shimojo
17
iGrid 2002
  • Demonstrate advanced features of the Telescience Portal
  • Perform telemicroscopy, controlling the IVEM at NCMIR
  • Digital video is encapsulated in IPv6 and transmitted at 30 fps over
    native IPv6 networks (SDSC, Abilene, SURFnet) between San Diego and
    Amsterdam (see the transport sketch after this list)
  • Data are computed with heterogeneous, distributed resources within NCMIR,
    NPACI, NCHC, and Osaka University
  • Data are rendered and visualized in Amsterdam using distributed resources
    at NCHC
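The transport step above, video over native IPv6, can be illustrated
minimally. The sketch below is not the iGrid 2002 demo code (its video
encapsulation and endpoints are not given on the slide); it only shows
sending fixed-size datagrams over an IPv6 UDP socket at roughly 30 packets
per second, with a hypothetical destination address and port.

```python
# Minimal sketch: pushing data over native IPv6 via UDP at ~30 packets/s.
# The destination address and port are hypothetical placeholders; the real
# demo used its own video encapsulation, which the slide does not describe.
import socket
import time

DEST = ("2001:db8::1", 50000)   # hypothetical IPv6 address and port
FRAME = b"\x00" * 1400          # dummy payload standing in for one video frame

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
try:
    for _ in range(30):         # one second's worth of frames at 30 fps
        sock.sendto(FRAME, DEST)
        time.sleep(1 / 30)      # crude pacing; real senders use smarter clocks
except OSError as err:
    print("send failed (placeholder address):", err)
finally:
    sock.close()
```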

18
Network Environment in iGrid 2002
[Network diagram: traffic flows over Internet2/Abilene in the United States,
APAN/TransPAC across the Pacific, JGNv6 and WIDE in Japan, TANet2 in Taiwan,
and SURFnet to Amsterdam; exchange points include StarLight, STAR TAP
(Chicago), Seattle, and Sunnyvale, connecting SDSC, Osaka, Tokyo, NCHC, and
the iGrid venue.]
Source: Shinji Shimojo
19
NCHC Auto-Segmentation
  • Data are automatically selected from SRB
  • Visualization parameters are computed on an NCHC resource
  • Data input/output goes to SRB
  • The job is initiated via the Portal, but computation runs on an NCHC
    compute resource (a pipeline sketch follows this list)
  • Output formats: VRML, Open Inventor, IvI Vis Browser (VTK)
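To make the flow of this portal-initiated pipeline easier to follow, here is
a minimal sketch. Every function in it is a hypothetical stub naming a step
from the list above; none of them are real SRB or NCHC APIs, which the slide
does not detail.

```python
# Hypothetical sketch of the auto-segmentation pipeline described above.
# The helpers are stubs that only name the steps; they are NOT the real
# SRB or NCHC interfaces.

def fetch_from_srb(dataset_id: str) -> bytes:
    return b"raw-volume-data"                 # stand-in for an SRB download

def compute_parameters(volume: bytes) -> dict:
    return {"threshold": 0.5}                 # stand-in for parameter estimation

def segment(volume: bytes, params: dict) -> bytes:
    return b"segmented-surface"               # stand-in for the NCHC compute job

def store_to_srb(surface: bytes, fmt: str) -> str:
    return f"srb://collection/output.{fmt}"   # stand-in for an SRB upload

def run_auto_segmentation(dataset_id: str) -> str:
    volume = fetch_from_srb(dataset_id)       # 1. data selected from SRB
    params = compute_parameters(volume)       # 2. visualization parameters computed
    surface = segment(volume, params)         # 3. segmentation on an NCHC resource
    return store_to_srb(surface, fmt="wrl")   # 4. output (e.g. VRML) back to SRB

print(run_auto_segmentation("ncmir-dataset-42"))
```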

20
Grid Datafarm for a HEP Application:
SC2002 High-Performance Bandwidth Challenge
  • Osamu Tatebe (Grid Technology Research Center, AIST)
  • Satoshi Sekiguchi (AIST), Youhei Morita (KEK), Satoshi Matsuoka
    (Titech/NII), Kento Aida (Titech), Donald F. (Rick) McMullen (Indiana,
    TransPAC), Philip Papadopoulos (SDSC)
  • Additional help: Hisashi Eguchi (MAFFIN); Kazunori Konishi, Yoshinori
    Kitatsuji, Ayumu Kubota (APAN); Chris Robb (Indiana Univ, Abilene);
    Force 10 Networks, Inc.

Source: Osamu Tatebe
21
Overview of Our Challenge
  • Seven clusters in the US and Japan comprise a cluster-of-clusters file
    system, the Gfarm file system
  • The FADS/Goofy simulation code, based on the Geant4 toolkit, simulates
    the ATLAS detector and generates a hits collection (a terabyte of raw
    data) in the Gfarm file system
  • The Gfarm file system replicates data across the clusters (a toy sketch
    of this fan-out follows)

Source: Osamu Tatebe
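To make the replication step concrete, here is a toy fan-out sketch. It is
not Gfarm's API (which the slide does not show): clusters are modeled as
in-memory dicts and "transfers" are parallel copies, so only the pattern,
replicating each new file to every other cluster, is illustrated.

```python
# Toy illustration of cross-cluster replication in the spirit of the Gfarm
# description above. NOT the Gfarm API: clusters are plain dicts, and a
# "transfer" is just a copy done in a thread pool, to show the fan-out.
from concurrent.futures import ThreadPoolExecutor

clusters = {name: {} for name in
            ["AIST", "KEK", "Titech", "UTokyo", "Indiana", "SDSC", "SC2002"]}

def replicate(src: str, path: str, data: bytes) -> None:
    clusters[src][path] = data                    # initial write at the source
    targets = [c for c in clusters if c != src]   # every other cluster
    with ThreadPoolExecutor() as pool:            # parallel "transfers"
        list(pool.map(lambda dst: clusters[dst].__setitem__(path, data),
                      targets))

replicate("AIST", "/atlas/hits-000.dat", b"simulated hits collection")
print(sorted(c for c, files in clusters.items()
             if "/atlas/hits-000.dat" in files))
```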
22
Parallel File Replication
[Network diagram: parallel replication between the SC2002 booth in Baltimore
and clusters at Indiana Univ, AIST, KEK, Titech, U Tokyo, and SDSC in San
Diego, crossing Tsukuba WAN, MAFFIN, and the US backbone via Tokyo, Seattle,
and Chicago, on links ranging from 20 Mbps to 10 Gbps (including 271 Mbps,
622 Mbps, and 1 Gbps segments).]
23
Grid Experiment Between the US and Japan
Using 4 nodes each in the US and Japan, we achieved 741 Mbps for file
transfer (10-second average bandwidth)!
Source: Osamu Tatebe
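For readers unfamiliar with quoting a windowed average, the sketch below
shows one plausible way such a "10-second average bandwidth" figure can be
computed from byte counters sampled during a transfer. The byte count is
invented to reproduce the quoted number; the experiment's actual measurement
method is not given on the slide.

```python
# Illustrative only: how a windowed average like "741 Mbps (10-sec average)"
# can be derived from byte counters. The byte count below is made up to
# reproduce the quoted figure; the real methodology is not on the slide.

def avg_mbps(bytes_transferred: int, seconds: float) -> float:
    return bytes_transferred * 8 / seconds / 1e6   # bytes -> bits -> megabits/s

# Suppose counters sampled 10 seconds apart showed ~926 MB moved across all
# parallel streams during the window:
window_bytes = 926_250_000
print(f"{avg_mbps(window_bytes, 10.0):.0f} Mbps")  # prints: 741 Mbps
```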
24
TransPAC provides high-performance network connectivity between the
Asia-Pacific region and the United States, for the purpose of encouraging
educational and scientific collaboration among scientists and researchers.
It links APAN to the US high-performance infrastructure (Abilene, the vBNS,
and Fednets) and to other international high-performance networks (CANARIE
and EU networks).
25
PRAGMA and TransPAC - a perfect fit!!
TransPAC supplies infrastructure. But for infrastructure to be useful, it
must facilitate research collaborations. PRAGMA's focus is on developing and
facilitating applications (research collaborations) between research and
education organizations in the Pacific Rim area. A great fit!
A more specialized focus of PRAGMA is on grid middleware and applications.
Networks such as TransPAC are a critical part of any grid activity. An even
better fit!!
Even though PRAGMA and TransPAC may seem to be two independent projects,
their goals link them together into a single effort to develop grid
applications and network-infrastructure collaborations between Asia and the
US.
Web site: www.transpac.org
Source: williams@indiana.edu
26
Accomplishments and Activities
  • Expanded Telescience collaborative activity
    • Demos at iGrid 2002 and SC02
  • Expanded Data Farm Grid resources
    • Demo at SC02
  • Exchanged information about local and national grid activities
  • Active participation in many conferences
  • Building partnerships with ApGrid, APEC TEL, GGF, and the APAN Grid
    Working Group

27
Other Accomplishments
  • Resource sharing: more than 240 nodes
    • BII, CNIC/CAS, KISTI, U Sains Malaysia, NCHC, NECTEC, Osaka,
      TransPAC/Indiana, UCSD/SDSC
  • Technology deployment
    • Japan and Thailand (NLANR monitor deployment)
  • Training and exchanges
    • Singapore Bioinformatics Institute (clusters, grid, portals, EOL, SRB)
    • Computer Network Information Center (measurement analysis)
  • New collaborations: NCHC and TERN; EcoGrid

28
Founding Motivations
IVOA
29
Data Broker: MetBroker (e.g., for meteorological databases)
[Diagram: MetBroker sits between heterogeneous weather databases (Weather
DBs A-D, described by a Meta DB) and applications such as growth prediction,
disease prediction, and farm management models.]
A new plug-in supports each new database, so applications do not need to be
modified for a new weather DB, yielding very high efficiency of database use
and application development. (A generic sketch of this plug-in pattern
follows.)
Source: S. Ninomiya, M. Laurenson
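The broker pattern described above can be sketched generically. The class
and method names below are hypothetical illustrations of a plug-in broker,
not MetBroker's real interfaces, which the slide does not show.

```python
# Generic sketch of the data-broker pattern described above. All names are
# hypothetical; MetBroker's actual API is not detailed on the slide.

class WeatherDBPlugin:
    """One plug-in per heterogeneous weather database."""
    def __init__(self, name, records):
        self.name = name
        self.records = records          # stand-in for a real DB connection

    def query(self, station, field):
        return self.records.get((station, field))

class BrokerSketch:
    """Applications talk to the broker; a new DB only needs a new plug-in."""
    def __init__(self):
        self.plugins = {}

    def register(self, plugin):
        self.plugins[plugin.name] = plugin   # adding a DB changes no app code

    def query(self, station, field):
        for plugin in self.plugins.values():
            value = plugin.query(station, field)
            if value is not None:            # first DB holding the data wins
                return value
        return None

broker = BrokerSketch()
broker.register(WeatherDBPlugin("db_a", {("tsukuba", "temp_c"): 18.2}))
broker.register(WeatherDBPlugin("db_b", {("osaka", "rain_mm"): 3.1}))
print(broker.query("osaka", "rain_mm"))     # a model queries the broker, not a DB
```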
30
PRAGMA Ecoinformatics Project: A Web Services Architecture for Ecological
and Agricultural Data
  • Collaboration between SDSC, NPACI, the LTER Network, PRAGMA, APAN, and
    NCHC
  • Scalable and extensible approach to integrated data management and
    analysis for computational ecology
  • Current prototype links the SDSC Spatial Data Workbench (sdw.sdsc.edu)
    with the MetBroker system (www.agmodel.net/MetBroker) at the Japan
    National Agricultural Research Center (NARC)
  • Developing additional Asian partnerships with Taiwan and China (a hedged
    client-side sketch follows this list)
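Since the slide names a web-services architecture without giving its
interfaces, the sketch below only illustrates the general shape of a client
calling such a data service over HTTP. The endpoint URL, parameters, and
JSON response format are hypothetical placeholders, not the project's actual
API.

```python
# Hypothetical illustration of a client calling an ecological data web
# service over HTTP. The endpoint and parameters are placeholders; the
# actual SDW/MetBroker service interfaces are not described on the slide.
import json
import urllib.parse
import urllib.request

BASE = "http://example.org/ecogrid/query"   # placeholder endpoint
params = urllib.parse.urlencode({
    "station": "narc-tsukuba",              # hypothetical station id
    "variable": "air_temperature",
    "start": "2003-01-01",
    "end": "2003-01-07",
})

try:
    with urllib.request.urlopen(f"{BASE}?{params}") as resp:
        data = json.load(resp)              # assume the service returns JSON
    print(data)
except OSError as err:                      # placeholder endpoint will fail
    print("service unavailable:", err)
```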

31
Some Future Opportunities
  • Earthquake Simulation and Sensing
    • ACES: APEC Cooperation for Earthquake Simulation
      http://www.quakes.uq.edu.au/ACES (Australia, China, Japan, US)
  • Ocean Drilling Program
  • POGO: Partnership for Observation of the Global Oceans
    http://oceanpartners.org
  • Pacific Rim Digital Library Alliance
  • GBIF: Global Biodiversity Information Facility
    http://www.gbif.org (Australia, Japan, Korea, US)
  • International Long Term Ecological Research (China, Korea, Taiwan, US)
  • International Virtual Observatory Alliance
  • Others: High Energy Physics

32
(No Transcript)
33
Building Collaborations
34
Integrating Knowledge
Our Common Journey: A Transition Toward Sustainability (National Research
Council)
35
Thank you