High Performance Computing and Computational Science at AHPCC - PowerPoint PPT Presentation

Learn more at: http://www.hpc.unm.edu
Transcript and Presenter's Notes

1
High Performance Computing and Computational
Science at AHPCC
  • Brian T. Smith
  • Professor, Department of Computer Science
  • Director, Albuquerque High Performance Computing
    Center (AHPCC)

2
High Performance Computing, Education and Research
Center
A UNM strategic center to initiate and focus
activities in high performance computing
technology, research, and education. The mission
is accomplished through two centers:

AHPCC: Established in 1994 as a training and
resource center for MHPCC; now a national
supercomputing center within the NSF National
Computational Science Alliance, serving as an
academic center of excellence for research and
education in computational science.

MHPCC: Established in 1994 under the auspices of
the DoD Modernization Program, through a
Cooperative Agreement between the University of
New Mexico and the Air Force Research Laboratory.
Provides production computing cycles for DoD
researchers.
3
High Performance Computing, Education and Research
Center
  • EXECUTIVE DIRECTOR: Frank L. Gilfeather
  • CO-DIRECTOR: Brian T. Smith
  • CO-DIRECTOR: John S. Sobolewski
  • ASSOCIATE DIRECTOR: Ernest D. Herrera

Maui High Performance Computing Center
  • DIRECTOR: Eugene Bal
  • ASSOCIATE DIRECTORS: Gary Jensen, Steve Karwoski,
    Margaret Lewis

Albuquerque High Performance Computing Center
  • DIRECTOR: Brian T. Smith
  • ASSOCIATE DIRECTORS: Susan R. Atlas, Robert A.
    Ballance, Ernest D. Herrera
4
Supercomputing Capabilities
  • AHPCC
  • Ranks in the top 5 US academic institutions in
    supercomputing power (effective 5/00)
  • A member of the NSF Alliance and a node on the
    National Technology Grid
  • 60 associated faculty, staff, postdocs and
    students
  • Computing systems
  • 512 processor IBM PIII Linux Supercluster (5/00)
  • 128 processor Alta PII Linux Supercluster
  • 32 processor VA Linux PIII Cluster
  • Vista Azul - advanced IBM hybrid system
  • 8 node SGI Origin 2000
  • 16 processor Alta PII Linux development cluster
  • Visualization laboratory
  • Over 500 academic, industry, and government users
  • MHPCC
  • One of the top 30 supercomputing centers in the
    world
  • A DoD Shared Center and a node on the National
    Technology Grid
  • 65 staff members
  • Computing systems
  • 699 node IBM SP
  • 400 GFLOPS computing power
  • 167 GB total memory
  • 2.1 TB internal disk storage
  • 1.3 TB external disk storage
  • 20 TB mass storage
  • Visualization laboratory
  • Over 1,100 government, industry, and academic
    users

Both centers support a significant number of
users in academia and government, particularly
the DoD and NSF, and are key players in the
national supercomputer community.
5
LosLobos and Roadrunner Superclusters
6
Research Environment at the AHPCC
  • 38 Graduate Research Assistants
  • 16 Associated Faculty (Physics and Astronomy,
    Chemistry, Biology, Mechanical Engineering,
    Computer Science, EECE)
  • 6 Permanent Research Staff
  • 6 Visiting Scientists and Postdoctoral Fellows
  • Undergraduate Work-Study Students, NSF REU
  • Research Facilities: Supercomputers, High
    Performance Clusters, Workstations, Workshop
    Area, Seminar Room, and Access Grid Studio
  • Educational Programs: SEC Program, Workshops,
    AHPCC Seminar Series, Alliance Activities, Native
    American Outreach, NSF AMO Summer School, UNM
    Course Laboratories

7
Computer Systems Research
  • To anticipate, develop, deploy, and support
    high-performance computing technology and systems
  • Superclusters
  • Open computing tools
  • Grid-Based Computing
  • Visualization

8
Superclusters: Beyond Beowulf
  • System design and integration:
  • Off-the-shelf symmetric multiprocessor
    subsystems
  • High-speed interconnects
  • Terabyte hierarchical mass storage systems
  • Research Areas:
  • Networking: Portals
  • Hybrid (SMP) programming models (see the sketch
    below)
  • Cluster Management: Maui Scheduler, PBS
  • Condor: high-throughput computing
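
To make the hybrid (SMP) programming model above concrete, here is a
minimal sketch of the pattern on a supercluster: MPI processes span the
nodes, while a thread pool uses the shared-memory processors inside each
node. It assumes mpi4py and NumPy purely for illustration; neither
package, nor this example, comes from the original slides.

  # Hybrid "MPI across nodes + threads within an SMP node" sketch.
  # Assumptions for illustration only: mpi4py and NumPy are available.
  from concurrent.futures import ThreadPoolExecutor

  import numpy as np
  from mpi4py import MPI

  comm = MPI.COMM_WORLD
  rank = comm.Get_rank()          # one MPI process per node
  size = comm.Get_size()

  N = 1_000_000                   # global problem size
  THREADS = 4                     # processors sharing memory on one node

  # Each MPI rank owns a strided slice of the global index space.
  local = np.arange(rank, N, size, dtype=np.float64)

  def partial_sum(chunk):
      # NumPy's ufunc loop releases the GIL, so the threads overlap here.
      return float(np.sum(np.sqrt(chunk)))

  # Intra-node (SMP) parallelism: split the local work among threads.
  with ThreadPoolExecutor(max_workers=THREADS) as pool:
      node_total = sum(pool.map(partial_sum, np.array_split(local, THREADS)))

  # Inter-node parallelism: combine per-node results with an MPI reduction.
  total = comm.allreduce(node_total, op=MPI.SUM)

  if rank == 0:
      print(f"sum of sqrt(i) for i < {N}: {total:.3f}")

In practice such a job would typically be launched through a batch system
such as PBS with the Maui Scheduler, with one MPI process per node and one
thread per processor.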

9
Grid-Based Computing: Sharing Resources Across
the Matrix
  • Computational Grid: People to Machines, Machines
    to Machines
  • Globus
  • Virtual Machine Room (VMR)
  • Wireless networking
  • Access Grid: People to People and Machines
  • Telemedicine
  • Visualization
  • Human Factors
  • Production Studio Deployment
  • Education and Training

10
TOUCH Telehealth Virtual Collaboratory (Dr. Dale
Alverson, UNM; Dr. Richard Friedman, UH)
  • Access Grid: multi-group Internet video
    conferencing for distance education
  • Virtual Reality training environment
  • 3D image/model manipulation and simulation
    environment using large, remote datasets
  • Problem-based learning
  • Figure: A user and their avatar in the BioSIMMER
    environment (brain injury patient).
11
Scientific Visualization and Computational
Environments
  • Visualization Laboratory: Homunculus Project
  • Flatland Virtual Reality Environment
  • Vista Azul: Scalable Graphics Engine parallel
    rendering
  • CoMeT: Computational Mechanics Toolkit
  • Scientific Visualization Research

12
Science and Engineering Research
  • Development of advanced algorithms and parallel
    software for application of high-performance
    computing technology to problems at the forefront
    of science and engineering
  • Optics and Imaging
  • Computational Physics
  • Computational Fluid Dynamics
  • Ecological Modeling
  • Chemistry and Materials
  • Computational Biology

13
Quantum Optics, Optics and Imaging
  • Image Processing and Astrophysical Observation
    Techniques for Astronomy and Space Surveillance
    Applications (D. Tyler, S. Prasad, W. Junor, R.
    Plemmons, T. Schulz, J. Green, J. Seldin, P.
    Alsing)
  • Quantum Computing and Quantum Optics (I. Deutsch,
    C. Caves, P. Alsing, G. Brennan, J. Grondalski,
    S. Ghose, P. Jessen)
  • Optical Pulse Interactions with Nonlinear
    Materials (P. Bennett)

14
Quantum Computing
  • Prof. Ivan Deutsch and Prof. Carl Caves (Physics
    and Astronomy), Dr. Paul Alsing (AHPCC)

Quantum Optical Lattices: By shining
counter-propagating laser beams, crystals of
light ("egg crate" structures) can be formed that
trap neutral atoms, e.g. cesium. By changing the
phase of the light, the egg-crate minima shift,
so atoms can be brought together and made to
interact via an additional catalysis laser. The
interacting atoms form qubits, and the shifting
egg-crate potentials act as a computer bus.
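
A standard textbook form of this egg-crate potential (not taken from the
slides) makes the phase-shifting mechanism explicit. Two
counter-propagating beams of wavenumber k, frequency omega, and relative
phase phi interfere to give a standing wave, and the time-averaged
intensity sets the optical dipole potential:

  E(z,t) = E_0\cos(kz-\omega t) + E_0\cos(kz+\omega t-\phi)
         = 2E_0\cos\!\left(kz-\tfrac{\phi}{2}\right)\cos\!\left(\omega t-\tfrac{\phi}{2}\right),
  \qquad
  V(z) \propto \overline{E^2} \;\Rightarrow\; V(z) = V_0\cos^2\!\left(kz-\tfrac{\phi}{2}\right).

The lattice period is lambda/2, and changing the relative phase by
Delta-phi translates every potential minimum by Delta-phi/(2k); this is
the shift of the egg-crate minima that brings neighboring atoms together.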
15
Chemistry and Materials
  • Defect Centers in a-SiO2 Using Computational
    Chemistry Techniques (S.P. Karna, A.C. Pineda)
  • Defects in Al and Cu ULSI Interconnects:
    Materials/Solid-State Physics (S.R. Atlas, S.M.
    Valone, L.A. Cano)
  • Electron Transfer in Dendrimers (T.S. Elicker,
    D.G. Evans)
  • Dynamics at Metal Surfaces (D. Xie, H. Guo)
  • Molecular Dynamics of Proteins in Solution (P.
    Alsing, E. Coutsias)
  • Atom-Ion Collisions (P. Alsing, M. Riley, A.
    Hira)

16
Defects in SiO2
  • Dr. Andrew Pineda, AHPCC
  • Dr. Shashi Karna, AFRL
  • Defects are detected experimentally via EPR
    (electron paramagnetic resonance).
  • Quantum mechanical (Hartree-Fock) calculations
    provide detailed information on candidate
    structures and formation mechanisms (see the
    sketch at the end of this slide).
  • The same computational techniques are used to
    model active sites of biological molecules in
    rational drug design.
  • Computations involve hundreds of electrons and
    dozens of atoms, requiring 100s of CPU hours on
    8-32 processors of a supercomputer.

a-SiO2 is the dielectric (insulator) material
used in today's semiconductor devices. Defect
centers are created during manufacture and by
irradiation. They are believed to be the primary
charge traps in semiconductors, degrading
current/voltage performance and sometimes
destroying devices.
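
As a heavily scaled-down illustration of what such a Hartree-Fock
calculation looks like, the sketch below runs restricted Hartree-Fock on a
tiny H3Si-O-SiH3 fragment. It assumes the open-source PySCF package and an
idealized geometry; this is not the software or the cluster models used in
the actual defect-center study, which involved far larger systems.

  # Minimal Hartree-Fock sketch (assumptions: PySCF installed; toy geometry).
  # The real a-SiO2 defect-center models were much larger cluster models.
  from pyscf import gto, scf

  # Approximate, idealized H3Si-O-SiH3 geometry in Angstroms.
  fragment = """
  O    0.000   0.000   0.000
  Si   1.550   0.000   0.500
  Si  -1.550   0.000   0.500
  H    2.100   1.200  -0.350
  H    2.100  -1.200  -0.350
  H    1.900   0.000   1.900
  H   -2.100   1.200  -0.350
  H   -2.100  -1.200  -0.350
  H   -1.900   0.000   1.900
  """

  mol = gto.M(atom=fragment, basis="6-31g*", charge=0, spin=0)

  mf = scf.RHF(mol)      # restricted Hartree-Fock, closed shell
  e_hf = mf.kernel()     # run the self-consistent field iterations

  print(f"RHF total energy: {e_hf:.6f} Hartree")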
17
Molecular Dynamics simulation of the role of
water in protein folding
  • Dr. Paul M. Alsing (AHPCC), Prof. Evangelos
    Coutsias (Mathematics and Statistics)
  • Prof. Jack McIver (Physics and Astronomy)

18
Visualization of large data sets from molecular
dynamics simulations in Flatland
19
Computational Genomics
  • Systems design and management
  • Storage and manipulation of large microarray and
    patient datasets
  • Database/annotation design
  • Firewall to protect patient privacy
  • Customized hierarchical mass storage system
  • Visualization
  • Mathematical and computational analysis
  • Molecular classification: clustering and
    neighborhood analysis (see the sketch below)
  • Identification of genetic correlations in
    microarray data
  • Collaboration among biologists, medical
    scientists, mathematicians, and computational
    scientists will be essential
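
A small sketch of the clustering and correlation analyses listed above,
using NumPy and SciPy on synthetic data; the matrix sizes, group
structure, and parameters are illustrative assumptions, not the center's
actual genomics pipeline.

  # Molecular classification by hierarchical clustering, plus gene-gene
  # correlations, on a synthetic expression matrix (illustration only).
  import numpy as np
  from scipy.cluster.hierarchy import fcluster, linkage
  from scipy.spatial.distance import pdist

  rng = np.random.default_rng(0)

  # Synthetic microarray data: 200 genes x 30 patient samples, with two
  # artificial sample groups that differ in the first 40 genes.
  genes, samples = 200, 30
  X = rng.normal(size=(genes, samples))
  X[:40, 15:] += 2.0

  # Gene-gene correlation matrix (rows are genes, correlated across samples).
  gene_corr = np.corrcoef(X)

  # Cluster the samples with correlation distance and average linkage.
  dist = pdist(X.T, metric="correlation")
  Z = linkage(dist, method="average")
  labels = fcluster(Z, t=2, criterion="maxclust")

  print("sample cluster assignments:", labels)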

20
Computer Science Research
  • Parallel Algorithms and Numerical Mathematics
    (D.A. Bader, P. Bennett, P. Alsing, B. Minhas)
  • Condor Flocking and Turing Cluster: High-Throughput
    Computing (Z. Chen, B.T. Smith, X. Wang, M. Livny,
    C.D. Maestas)
  • Scalable Systems Lab (A.B. Maccabe)
  • Research Clusters: Black Bear, Vista Azul,
    Roadrunner (R. Ballance, P. Kovatch, J.R.
    Barnes, C. Maestas): Programming Paradigms for
    SMP Architectures, Code Development and
    Optimization, Cluster Management

21
Activities
22
R&D Projects
  • Flatland
  • SMP Programming
  • Portals, NGIO
  • Access Grid Tools
  • CoMeT
  • Maui Scheduler
23
Production Systems
  • Condor
  • Distributed Workstations
  • Remote Job Submission and Management (see the
    submit-file sketch below)
  • Roadrunner
  • Alliance Shared Computational Resource
  • Production Linux Cluster from Alta Technology
    Corporation
  • 64 Nodes, 128 Processors, Myrinet Networking
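
To show what remote job submission through Condor looks like, here is a
minimal, hypothetical submit description file; all program and file names
are placeholders invented for this sketch, and it would be handed to
condor_submit.

  # Hypothetical Condor submit description file: analyze.sub
  # Submit with:  condor_submit analyze.sub
  universe    = vanilla
  executable  = analyze
  arguments   = input.dat
  output      = analyze.out
  error       = analyze.err
  log         = analyze.log
  queue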

24
Research Systems
  • Black Bear
  • Linux Cluster Provided by VA Linux Systems
  • 16 Nodes, 32 Processors, Myrinet Network
  • Vista Azul
  • Hybrid IBM Linux/SP with in situ Graphics
  • Linux: 8 Nodes, 32 Processors, Graphics-Enabled
  • SP: 8 Nodes, 32 Processors
  • 360 GB Storage, Shared Graphics Framebuffer