1
NAMD
  • September 18, 2003
  • L.V. (Sanjay) Kale (kale@cs.uiuc.edu)
  • http://www.ks.uiuc.edu/Research/namd/

2
NAMD Vision
  • Make NAMD a widely used MD program
    • For large molecular systems
    • Scaling from PCs and clusters to large parallel machines
    • For interactive molecular dynamics
  • Goals
    • High performance
    • Ease of use
    • Ease of modification (for us and advanced users)
    • Incorporation of features needed by scientists

3
Three Easy Goals for NAMD 3
  • Easy to configure and run
    • Help the user avoid mistakes during setup
    • Expect the machine to fail during the simulation
  • Easy to extend and modify
    • Maximize reuse of communication and control patterns
    • Push parallel complexity down into the Charm++ runtime
  • Easy to publish first!
    • New Linux clusters with high-latency gigabit Ethernet
    • New cellular machines with 10K-100K processors

4
NAMD 3 Design
  • NAMD 3 will be a major rewrite of NAMD
    • Incorporate lessons learned in the past years
    • Use modern features of Charm++
  • Refactor software for modularity
  • Restructure to support planned features
  • Algorithms that scale to even larger machines

5
NAMD 3 Programmability
  • Scientific Modules
    • Forces, integration, steering, analysis
    • Keep code with a common goal together
    • Add new features without touching old code
  • Parallel Decomposition Framework
    • Support common scientific algorithm patterns
    • Avoid duplicating services for each algorithm
    • Start with NAMD 2 architecture (but not code)

6
[Architecture diagram: the MDAPI sits atop the new science modules (replica exchange, QM, implicit solvents, polarizable force field, bonded force calculation, integration, pairwise force calculation, PME), which build on the NAMD core; the Charm++ modules (FFT, fault tolerance, grid scheduling, collective communication, load balancer) build on the core Charm++ runtime, which runs on clusters, LeMieux, and the TeraGrid.]

7
MDAPI Modular Interface
  • Separate front end from modular engine
    • Same program, or over a network or grid
  • Dynamic discovery of engine capabilities, so no limitations are imposed by the interface (illustrated in the sketch below)
  • Front ends: NAMD 2, NAMD 3, Amber, CHARMM, VMD
  • Engines: NAMD 2, NAMD 3, MINDY
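The slides do not show the MDAPI itself, so the following C++ sketch is purely illustrative: every class, method, and parameter name below is an assumption made for the example. It shows the idea of a front end driving an engine through a narrow interface whose feature set is discovered at run time rather than hard-coded.

```cpp
// Hypothetical front-end/engine split in the spirit of the slide;
// not the real MDAPI, whose interface is not shown here.
#include <cstdio>
#include <string>
#include <vector>

// An engine advertises its capabilities at run time, so the interface
// itself never limits what an engine may support.
class MDEngine {
public:
  virtual ~MDEngine() = default;
  virtual std::vector<std::string> capabilities() const = 0;  // dynamic discovery
  virtual void setParameter(const std::string& key, double value) = 0;
  virtual void run(int steps) = 0;
};

// Any front end (NAMD 2/3, Amber, CHARMM, VMD) can drive any engine
// (NAMD 2/3, MINDY) through this one interface; the engine may live in
// the same process or behind a network/grid transport.
void driveSimulation(MDEngine& engine) {
  for (const std::string& cap : engine.capabilities()) {
    if (cap == "pme")  // enable optional features only if reported
      engine.setParameter("pmeGridSpacing", 1.0);
  }
  engine.run(1000);
}

// Minimal concrete engine so the sketch runs end to end.
class StubEngine : public MDEngine {
public:
  std::vector<std::string> capabilities() const override { return {"pme"}; }
  void setParameter(const std::string& k, double v) override {
    std::printf("set %s = %g\n", k.c_str(), v);
  }
  void run(int steps) override { std::printf("running %d steps\n", steps); }
};

int main() {
  StubEngine engine;
  driveSimulation(engine);
}
```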

8
Terascale Biology and Resources
  • PSC LeMieux
  • TeraGrid
  • Cray X1
  • NCSA Tungsten
  • ASCI Purple
  • RIKEN MDGRAPE
  • Red Storm (Thor's Hammer)
9
Basis of NAMD Scalability
  • Very active computer science collaboration
    • UIUC Parallel Programming Lab, since 1992
    • Charm++ system: message-driven objects
    • Constant tuning for evolving parallel platforms
  • Designed for parallel efficiency
    • NAMD 1: discrete spatial decomposition, fast multipole
    • NAMD 2: hybrid force-spatial decomposition, PME
    • Dependency-driven execution, no barriers
    • Measurement-based load balancing system

10
Modern Charm++ Design
  • Virtualization: object-based parallelization
  • Object array: a collection of chares,
    • with a single global name for the collection, and
    • each member addressed by an index
  • Mapping of element objects to processors handled by the system (a minimal sketch follows the diagram below)

[Diagram: the user's view is a single object array A with elements A0, A1, A2, A3, ...; the system's view shows the same elements (e.g., A0 and A3) placed on different processors by the runtime.]
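A minimal Charm++ object-array sketch (a generic hello-world-style example using the standard Charm++ workflow, not NAMD code): one proxy is the single global name for the collection, members are addressed by index, and element placement is left to the runtime.

```cpp
// Interface file hello.ci, which charmc translates into hello.decl.h
// and hello.def.h, would contain:
//
//   mainmodule hello {
//     mainchare Main { entry Main(CkArgMsg*); };
//     array [1D] Elem {
//       entry Elem();
//       entry void work();
//     };
//   };

#include "hello.decl.h"

class Main : public CBase_Main {
public:
  Main(CkArgMsg* m) {
    delete m;
    // One global name (the proxy) for the whole collection.
    CProxy_Elem arr = CProxy_Elem::ckNew(8);
    arr.work();     // broadcast: invoke work() on every element
    arr[3].work();  // or address a single member by its index
    // A real program would detect completion and call CkExit().
  }
};

class Elem : public CBase_Elem {
public:
  Elem() {}
  void work() {
    // The runtime, not the user, chose which processor this element
    // lives on, and may migrate it later for load balance.
    CkPrintf("Element %d running on PE %d\n", thisIndex, CkMyPe());
  }
};

#include "hello.def.h"
```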
11
Benefits of Modern Charm++
  • Adaptive load balancing
  • Optimized communication
    • Persistent communication, immediate messages
    • Optimized concurrent multicast/reduction
  • Flexible, tuned parallel FFT libraries
  • Automatic checkpointing
  • Ability to change the number of processors while running
  • Scheduling on the grid
  • Fault tolerance

12
NAMD 3 Fault Tolerance
  • Fully automated restart
    • Harder to incorporate into scripting interface
  • Survive loss of a node
    • Larger machines are less reliable

13
Highly Scalable Algorithms
  • Generate fine-grained parallelism
    • In non-bonded force evaluation
      • Smaller patches, with 2-away communication (see the sketch after this list)
      • Alternative parallel strategies for bonded forces
    • In PME
      • Using pencil decomposition for FFTs where needed
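A toy illustration of where the extra parallelism comes from (neighborOffsets is a hypothetical helper written for this example, not NAMD source): with full-size patches (edge at least the cutoff), each patch interacts only with its 1-away neighbors; halving the patch edge means interacting pairs can be up to two cells apart, which multiplies the number of independent force-computation objects while shrinking each one.

```cpp
// Counts the neighbor patches a given patch must communicate with when
// interacting pairs may be up to k cells apart ("k-away").
#include <array>
#include <cstdio>
#include <vector>

std::vector<std::array<int, 3>> neighborOffsets(int k) {
  std::vector<std::array<int, 3>> offsets;
  for (int dx = -k; dx <= k; ++dx)
    for (int dy = -k; dy <= k; ++dy)
      for (int dz = -k; dz <= k; ++dz)
        if (dx != 0 || dy != 0 || dz != 0)  // exclude the patch itself
          offsets.push_back({dx, dy, dz});
  return offsets;
}

int main() {
  // 1-away: 3^3 - 1 = 26 neighbors; 2-away: 5^3 - 1 = 124.
  // Roughly 5x the pairwise work units, each over far fewer atoms:
  // much finer-grained parallelism for the load balancer to place.
  std::printf("1-away: %zu neighbors\n", neighborOffsets(1).size());
  std::printf("2-away: %zu neighbors\n", neighborOffsets(2).size());
}
```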

14
Efficient Parallelization for IMD
  • Characteristics
    • Limited parallelism on small systems
    • Real-time response needed
  • Fine-grained parallelization
    • Improve speedups on 4K-30K atom systems
  • Time/step goal
    • Currently 0.2 s/step for BrH on a single processor (1.7 GHz P4)
    • Target: 0.003 s/step on 64 processors of a faster machine, i.e., 20 picoseconds of simulation per minute (see the arithmetic below)
  • Flexible use of clusters
    • Migrating jobs (shrink/expand)
    • Better utilization when the machine is otherwise idle
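How the target converts to throughput, assuming a 1 fs integration timestep (the slide does not state the step size, so the timestep here is an assumption):

$$\frac{60\ \text{s/min}}{0.003\ \text{s/step}} = 2\times 10^{4}\ \text{steps/min}, \qquad 2\times 10^{4}\ \text{steps/min} \times 1\ \text{fs/step} = 20\ \text{ps/min}.$$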

15
NAMD 3 Initial New Features
  • Twin goals
    • Provide immediately useful functionality
    • Verify completeness of framework design
  • Scientific modules
    • Implicit solvent models (e.g., generalized Born)
    • Replica exchange (e.g., 10 replicas on 16 processors; the standard swap test is sketched below)
    • Polarizable force fields (e.g., Drude model)
    • Hybrid quantum/classical mechanics
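The swap decision in temperature replica exchange is a standard Metropolis test. This generic C++ sketch shows the textbook acceptance rule, not NAMD 3's implementation:

```cpp
// Metropolis test for swapping two replicas at inverse temperatures
// betaI, betaJ with instantaneous potential energies eI, eJ:
//   P(accept) = min(1, exp((betaI - betaJ) * (eI - eJ)))
#include <cmath>
#include <cstdio>
#include <random>

bool acceptSwap(double betaI, double betaJ, double eI, double eJ,
                std::mt19937& rng) {
  const double delta = (betaI - betaJ) * (eI - eJ);
  if (delta >= 0.0) return true;  // exp(delta) >= 1: always accept
  std::uniform_real_distribution<double> u(0.0, 1.0);
  return u(rng) < std::exp(delta);
}

int main() {
  std::mt19937 rng(42);
  // Example: replicas at 300 K and 320 K; beta = 1/(kB*T) with
  // kB ~ 0.001987 kcal/mol/K, energies in kcal/mol.
  const double betaI = 1.0 / (0.001987 * 300.0);
  const double betaJ = 1.0 / (0.001987 * 320.0);
  std::printf("swap accepted: %d\n",
              acceptSwap(betaI, betaJ, -1250.0, -1235.0, rng));
}
```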

16
More NAMD 3 Features
  • Self-consistent polarizability with a (sequential) CPU penalty of less than 100%.
  • Fast nonperiodic (and periodic) electrostatics using multiple grid methods.
  • A Langevin integrator that permits larger time steps (by being exact for constant forces; see the update formula below).
  • An integrator module that computes shadow energy.
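As background on the "exact for constant forces" claim: when the systematic force $f$ is held constant over a step of size $h$, the Langevin equation $dv = (f/m - \gamma v)\,dt + \sqrt{2\gamma k_B T/m}\,dW$ is an Ornstein-Uhlenbeck process and integrates in closed form, so the integrator adds no stability limit of its own. The standard exact velocity update (a textbook result, not necessarily the specific scheme planned for NAMD 3) is

$$v_{n+1} = e^{-\gamma h}\,v_n + \frac{f}{\gamma m}\left(1 - e^{-\gamma h}\right) + \sqrt{\frac{k_B T}{m}\left(1 - e^{-2\gamma h}\right)}\;\xi_n, \qquad \xi_n \sim \mathcal{N}(0,1),$$

with an analogous closed form (using noise correlated with $\xi_n$) for the position.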

17
Integration with CHARMM/Amber?
  • Goal: NAMD as the parallel simulation engine for CHARMM/Amber
  • Generate input files in CHARMM/Amber
    • NAMD must read native file formats
  • Run with NAMD on a parallel computer
    • Need to use equivalent algorithms
  • Analyze the simulation in CHARMM/Amber
    • NAMD must generate native file formats