Algorithms and Infrastructure for Molecular Dynamics Simulations

1
Algorithms and Infrastructure for Molecular Dynamics Simulations
Ananth Grama, Purdue University
http://www.cs.purdue.edu/people/ayg
ayg@cs.purdue.edu
Various parts of this work are in collaboration with Eric Jakobsson, Vipin Kumar, Sagar Pandit, Ahmed Sameh, Vivek Sarin, H. Larry Scott, Shankar Subramaniam, and Priya Vashishtha.
Supported by NSF, NIH, and DoE.
2
(No Transcript)
3
Multiscale Modeling Techniques
  • Ab initio methods (few approximations, but slow)
    • DFT, CPMD
    • Electrons and nuclei are treated explicitly.
  • Classical atomistic methods (more approximations)
    • Classical molecular dynamics
    • Monte Carlo
    • Brownian dynamics
    • No electronic degrees of freedom; electrons are approximated through fixed partial charges on atoms.
  • Continuum methods (no atomistic details)

4
Molecular Dynamics
  • Initialize positions, velocities, and potential parameters.
  • At each time step:
    • Compute energy and forces: non-bonded forces, bonded forces, external forces, and the pressure tensor.
    • Update the configuration: compute the new configuration, apply constraints and correct velocities, and scale the simulation cell (a minimal sketch of this loop follows below).
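In code, this loop is commonly realized with a symplectic integrator such as velocity Verlet. The following is a minimal sketch, assuming a user-supplied compute_forces(x) that returns (energy, forces); constraints, thermostats/barostats, and cell scaling are only indicated by a comment.

import numpy as np

def velocity_verlet(x, v, masses, compute_forces, dt, n_steps):
    """Advance positions x (N,3) and velocities v (N,3) by n_steps of size dt."""
    _, f = compute_forces(x)
    inv_m = 1.0 / masses[:, None]
    for _ in range(n_steps):
        v += 0.5 * dt * f * inv_m       # half kick with current forces
        x += dt * v                     # drift
        _, f = compute_forces(x)        # forces at the new configuration
        v += 0.5 * dt * f * inv_m       # second half kick
        # constraints (e.g., SHAKE), thermostat/barostat, and cell scaling go here
    return x, v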

5
(No Transcript)
6
(No Transcript)
7
Non Bonded Forces
  • Lennard-Jones (LJ) potential
    • short-range potential
    • can be truncated at a cutoff rc (introduces truncation errors and requires a dynamic neighbor list); see the sketch below
  • Electrostatics
    • long-range potential
    • cutoffs usually produce erroneous results
    • Particle Mesh Ewald (PME) technique with appropriate boundary corrections (poor scalability)
    • Fast Multipole Methods
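To make the short-range term concrete, here is a minimal sketch of a truncated LJ evaluation. The brute-force O(N^2) pair loop, the parameter values, and the absence of a neighbor list or periodic boundaries are simplifications for illustration, not the production approach.

import numpy as np

def lj_truncated(x, epsilon=1.0, sigma=1.0, rc=2.5):
    """Total LJ energy and per-particle forces, ignoring pairs beyond the cutoff rc."""
    n = len(x)
    energy = 0.0
    forces = np.zeros_like(x)
    for i in range(n - 1):
        d = x[i + 1:] - x[i]                    # vectors from particle i to later particles
        r2 = np.einsum('ij,ij->i', d, d)
        mask = r2 < rc * rc                     # truncate at the cutoff
        d, r2 = d[mask], r2[mask]
        sr6 = (sigma * sigma / r2) ** 3
        energy += np.sum(4.0 * epsilon * (sr6 * sr6 - sr6))
        fmag = 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r2
        fij = fmag[:, None] * d                 # force on the later particle of each pair
        forces[i] -= fij.sum(axis=0)
        forces[i + 1:][mask] += fij
    return energy, forces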

8
Handling Long-Range Potential
  • Multipole Methods (FMM/Barnes-Hut)
  • Error Control in Multipole Methods
  • Parallelism in Multipole Methods
  • Applications in Dense System Solvers
  • Applications in Sparse Solvers
  • Application Studies: Lipid Bilayers
  • Ongoing Work: Reactive Simulations

9
Multipole Methods
  • Represent the domain hierarchically using a spatial tree.
  • Direct, accurate estimation of potentials requires O(n²) force computations.
  • Hierarchical methods reduce this complexity to O(n) or O(n log n); a conceptual sketch follows below.
  • Since particle distributions can be very irregular, the tree can be highly unbalanced.
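As a conceptual sketch only (Barnes-Hut flavor, not the FMM machinery used in this work), a tree cell that is well separated from the evaluation point is replaced by its aggregate; tree construction and higher-order multipole expansions are omitted.

import numpy as np

class Cell:
    """One node of the spatial tree (octree cell)."""
    def __init__(self, size, charge, com, children=(), particles=()):
        self.size = size            # edge length of the (cubic) cell
        self.charge = charge        # total charge: the monopole term
        self.com = com              # center of charge of the cell
        self.children = children    # sub-cells; empty tuple for a leaf
        self.particles = particles  # (position, charge) pairs stored in a leaf

def potential_at(p, cell, theta=0.5):
    """Approximate 1/r potential at point p due to the particles in `cell`."""
    if not cell.children:           # leaf: sum its few particles directly
        return sum(q / np.linalg.norm(p - x)
                   for x, q in cell.particles if np.linalg.norm(p - x) > 0.0)
    d = np.linalg.norm(p - cell.com)
    if cell.size / d < theta:       # well separated: use the aggregate (monopole)
        return cell.charge / d
    return sum(potential_at(p, child, theta) for child in cell.children)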

10
Multipole Methods
11
Error Control in Multipole Methods
12
Error Control in Multipole Methods
Grama et al., SIAM SISC.
13
Error Control in Multipole Methods
Grama et al., SIAM SISC.
14
Error Control in Multipole Methods
Grama et al., SIAM SISC.
15
Error Control in Multipole Methods
Grama et al., SIAM SISC.
16
Error Control in Multipole Methods
Grama et al., SIAM SISC.
17
Error Control in Multipole Methods
Grama et al., SIAM SISC.
18
Error Control in Multipole Methods
Grama et al., SIAM SISC.
19
Parallelism in Multipole Methods
  • Each particle can be processed independently.
  • Nearby particles interact with nearly identical sets of sub-domains.
  • A spatially proximate ordering minimizes communication (see the space-filling-curve sketch below).
  • Load can be balanced by assigning each task an identical number of interactions.

Grama et al., Parallel Computing.
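A common way to obtain a spatially proximate ordering is a Morton (Z-order) space-filling curve. The sketch below is illustrative: it splits particles into equal-count contiguous blocks, whereas the load balancing described above would weight tasks by their interaction counts, and the 10-bits-per-axis resolution is an arbitrary choice.

import numpy as np

def morton_key(ix, iy, iz, bits=10):
    """Interleave the bits of three integer cell coordinates into one Morton key."""
    key = 0
    for b in range(bits):
        key |= ((int(ix) >> b) & 1) << (3 * b)
        key |= ((int(iy) >> b) & 1) << (3 * b + 1)
        key |= ((int(iz) >> b) & 1) << (3 * b + 2)
    return key

def z_order_partition(x, n_procs, bits=10):
    """Order particles along the Z-curve, then split into contiguous index blocks."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    cells = ((x - lo) / (hi - lo + 1e-12) * (2 ** bits - 1)).astype(int)
    keys = np.array([morton_key(ix, iy, iz, bits) for ix, iy, iz in cells])
    order = np.argsort(keys)                 # spatially proximate ordering
    return np.array_split(order, n_procs)    # one index block per process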
20
Parallelism in Multipole Methods
Grama et al., Parallel Computing.
21
Parallelism in Multipole Methods
Grama et al., Parallel Computing.
22
Solving Dense Linear Systems
23
Solving Dense Linear Systems
24
Solving Dense Linear Systems
25
Solving Dense Linear Systems
  • Over a discretized boundary, these lead to a dense linear system of the form Ax = b.
  • Iterative methods (GMRES) are typically used to solve the system.
  • The matrix-vector product can be evaluated as a single application of the multipole kernel: the near field is replaced using Gaussian quadratures and the far field is evaluated using multipoles (see the matrix-free sketch below).
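A minimal matrix-free sketch of this idea with SciPy's GMRES is shown below; multipole_matvec is a hypothetical stand-in for the actual multipole evaluation of A times a vector (here it is just the identity so the example runs end to end).

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def multipole_matvec(v):
    # hypothetical stand-in: a real kernel would apply the dense boundary-integral
    # operator in O(n) or O(n log n) via near-field quadrature + far-field multipoles
    return v

n = 1000
A = LinearOperator((n, n), matvec=multipole_matvec, dtype=float)
b = np.ones(n)
x, info = gmres(A, b)     # info == 0 signals convergence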

26
Solving Dense Linear Systems
27
Multipole Based Dense Solvers
All times in seconds on a 64-processor Cray T3E (24,192 unknowns)
28
Multipole Based Solver Accuracy
29
Preconditioning the Solver
30
Preconditioning the Solver
31
Solving Sparse Linear Systems
32
Generalized Stokes Problem
  • Navier-Stokes equation
  • Incompressibility Condition

33
Generalized Stokes Problem
  • Linear system
  • B^T is the discrete divergence operator and A collects all the velocity terms.
  • Navier-Stokes
  • Generalized Stokes Problem
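The equations on this slide are images; as a hedged reference, the discrete saddle-point system these bullets describe has the standard form (u the velocity unknowns, p the pressure, f the forcing):

\[
\begin{pmatrix} A & B \\ B^{T} & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ 0 \end{pmatrix},
\]

where, for the generalized Stokes problem arising from implicit time stepping, A typically combines a mass term with a viscous term.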

34
Solenoidal Basis Methods
  • Class of projection-based methods
  • Basis for divergence-free velocity
  • Solenoidal basis matrix P
  • Divergence-free velocity
  • Modified system

35
Solenoidal Basis Method
  • Reduced system
  • The reduced system is solved by suitable iterative methods such as CG or GMRES (a minimal sketch follows below).
  • Velocity obtained by
  • Pressure obtained by
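The equations here are images as well; the sketch below shows the overall flow under the assumptions that the solenoidal basis matrix P satisfies B^T P = 0 and that A is symmetric positive definite (so CG applies to the reduced operator). The pressure-recovery step shown is one common choice, not necessarily the one used in the talk.

import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def solve_solenoidal(A, B, P, f):
    """Solve A u + B p = f with B^T u = 0 via the reduced system P^T A P x = P^T f."""
    m = P.shape[1]
    reduced = LinearOperator((m, m), matvec=lambda y: P.T @ (A @ (P @ y)))
    x, info = cg(reduced, P.T @ f)        # reduced system (SPD when A is SPD)
    u = P @ x                             # divergence-free velocity
    p, *_ = np.linalg.lstsq(B, f - A @ u, rcond=None)   # recover pressure from momentum residual
    return u, p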


36
(No Transcript)
37
(No Transcript)
38
Preconditioning
  • Preconditioning can be effected by an approximate Laplace solve followed by a Helmholtz solve.
  • The Laplace solve can be performed effectively using a multipole method.

39
Preconditioning Laplace/Poisson Problems
  • Given a domain with known internal charges and boundary potential (Dirichlet conditions), estimate the potential across the domain (see the outline below):
  • Compute the boundary potential due to internal charges (single multipole application).
  • Compute the residual boundary potential due to charges on the boundary (vector operation).
  • Compute the boundary charges corresponding to the residual boundary potential (multipole-based GMRES solve).
  • Compute the potential throughout the entire domain due to boundary and internal charges (single multipole application).
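These four steps map directly onto code; in the outline below, multipole_potential and gmres_boundary_solve are hypothetical helpers standing in for the multipole evaluation and the multipole-based GMRES solve mentioned above.

import numpy as np

def laplace_dirichlet(interior_x, interior_q, boundary_x, boundary_phi, targets,
                      multipole_potential, gmres_boundary_solve):
    # 1. boundary potential due to the known internal charges (one multipole pass)
    phi_int = multipole_potential(interior_x, interior_q, boundary_x)
    # 2. residual boundary potential to be matched by boundary charges (vector op)
    residual = boundary_phi - phi_int
    # 3. boundary charges reproducing the residual potential (multipole-based GMRES)
    boundary_q = gmres_boundary_solve(boundary_x, residual)
    # 4. potential over the whole domain from boundary + internal charges (one pass)
    all_x = np.vstack([interior_x, boundary_x])
    all_q = np.concatenate([interior_q, boundary_q])
    return multipole_potential(all_x, all_q, targets)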

40
Numerical Experiments
  • 3D driven-cavity problem in a cubic domain.
  • Marker-and-cell scheme in which the domain is discretized uniformly.
  • Pressure unknowns are defined at node points and velocities at the midpoints of edges.
  • The x, y, and z components of velocity are defined on edges along the respective directions.

41
(No Transcript)
42
Preconditioning Performance
43
Preconditioning Performance: Poisson Problem
44
Preconditioning Performance: Poisson Problem
45
Preconditioning Performance: Poisson Problem
46
Parallel Performance
47
Comparison to ILU
48
Application Validation: Simulating Lipid Bilayers
49
(No Transcript)
50
Fluid Mosaic Model
51
(No Transcript)
52
Key Physical Properties of Lipid Bilayers
  • Thickness
    • increases with the length of the hydrocarbon chains
    • decreases with the number of unsaturated bonds in the chains
    • decreases with temperature
  • Area per molecule
    • increases with the length of the hydrocarbon chains
    • increases with the number of unsaturated bonds in the chains
    • increases with temperature
    • depends on the properties of the polar region

53
(No Transcript)
54
(No Transcript)
55
(No Transcript)
56
How real are the simulations?
Simulation of a 128-lipid DPPC bilayer in SPC water
Pandit, et al., Biophys. J.
57
(No Transcript)
58
(No Transcript)
59
Phase Diagram with Cholesterol
  • Lo (or β) phase
  • The liquid-ordered phase, in which chain ordering is like the gel phase but the lateral diffusion coefficients are like the fluid phase.

Reproduced from Vist and Davis (1990), Biochemistry, 29, 451.
60
Rafts
  • Rafts are liquid-ordered domains in cell plasma membranes rich in sphingomyelin (or other sphingolipids), cholesterol, and certain anchor proteins.
  • Rafts are important membrane structural components in
    • signal transduction
    • protein transport
    • sorting of membrane components
    • fusion of bacteria and viruses into host cells

How do rafts function and maintain their integrity? How does cholesterol control the fluidity of lipid membranes?
61
A Model for Rafts
62
Simulation Studies of Rafts
  • Study of the liquid-ordered phase
    • Mixture of cholesterol and a saturated lipid (DPPC and DLPC).
    • Mixture of cholesterol and sphingomyelin.
  • Study of ternary mixtures that form raft-like domains
    • Preformed domain: ternary mixture of DOPC, SM, and Chol with a preformed domain.
    • Spontaneous domain formation: ternary mixture of DOPC, SM, and Chol with a random initial distribution.
63
Study of Lo phase
Simulated systems: 120 PC molecules (DPPC/DLPC), 80 cholesterol molecules (40 mol%), 5000 water molecules
Duration: 35 ns / 18 ns
Conditions: NPT ensemble (P = 1 atm, T = 323 K / 279 K)
Pandit, et al., Biophys. J. (2004), 86, 1345-1356.
64
DPPC-Cholesterol and DLPC-Cholesterol
Pandit, et al., Biophys. J. (2004), 86, 1345-1356.
65
Conclusions from Simulations
  • Cholesterol increases the order parameter of the lipids. The order parameters are close to those of the liquid-ordered phase.
  • Cholesterol forms complexes with two of its neighboring lipids. The lifetime of such a complex is 10 ns.
  • The complex-formation process depends on the chain length of the lipids; shorter-chain lipids do not form large complexes.

Pandit, et al., Biophys. J.
66
(No Transcript)
67
(No Transcript)
68
(No Transcript)
69
Some Ongoing Work
70
Reactive systems
  • Chemical reactions involve the association and dissociation of chemical bonds.
  • Classical simulations cannot simulate reactions.
  • Ab-initio methods calculate the overlap of electron orbitals to investigate chemical reactions.
  • The ReaX force field postulates a classical bond-order interaction to mimic the association and dissociation of chemical bonds [1].

[1] van Duin et al., J. Phys. Chem. A, 105, 9396 (2001).
71
(No Transcript)
72
Bond Order Interaction
  • After correction, the bond order between a pair of atoms depends on the uncorrected bond orders of the neighbors of each atom.
  • Bond orders decay rapidly to zero with distance, so one can easily construct a neighbor list for efficient computation of bond orders.

73
Neighbor Lists for Bond Order
  • Efficient implementation is critical for performance.
  • The implementation is based on an oct-tree decomposition of the domain.
  • For each particle, we traverse down to neighboring octants and collect neighboring atoms (a simpler spatial-binning sketch follows below).
  • This has implications for parallelism (issues identical to parallelizing multipole methods).
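A minimal sketch of the idea follows, using uniform spatial binning rather than the oct-tree traversal described above; only atoms in the same or adjacent cells are tested against the bond-order cutoff.

import numpy as np
from collections import defaultdict
from itertools import product

def build_neighbor_list(x, cutoff):
    """Neighbor lists for all atoms: indices of atoms within `cutoff` of each atom."""
    cells = defaultdict(list)
    for i, xi in enumerate(x):
        cells[tuple((xi // cutoff).astype(int))].append(i)   # bin atoms into cubic cells
    neighbors = [[] for _ in range(len(x))]
    for c, members in cells.items():
        for dc in product((-1, 0, 1), repeat=3):             # same cell + 26 adjacent cells
            for j in cells.get(tuple(np.add(c, dc)), []):
                for i in members:
                    if i < j and np.linalg.norm(x[i] - x[j]) < cutoff:
                        neighbors[i].append(j)
                        neighbors[j].append(i)
    return neighbors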

74
Bond Order: Choline
75
Bond Order: Benzene
76
Other Local Energy Terms
  • Other interaction terms common to classical simulations, e.g., bond energy, valence angle, and torsion, are appropriately modified and contribute only for pairs of atoms with non-zero bond order.
  • These terms also become many-body interactions, since the bond order itself depends on the neighbors and the neighbors' neighbors.
  • Due to the variable bond structure, there are additional interaction terms, such as over-/under-coordination energy, lone-pair interaction, 3- and 4-body conjugation, and a three-body penalty energy.

77
Non-Bonded van der Waals Interaction
  • van der Waals interactions are modeled using a distance-corrected Morse potential,
  • where R(rij) is the shielded distance (given by the expression on the slide; an illustrative sketch follows below).
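As an illustration only (the exact ReaxFF functional form and parameters are those of van Duin et al. (2001) and are not reproduced here), a Morse potential evaluated at a shielded distance of the form (r^p + gamma^(-p))^(1/p) looks like this; all parameter values below are placeholders.

import numpy as np

def shielded_distance(r, gamma=1.0, p=6.0):
    """Shielded distance: behaves like r at long range but stays finite as r -> 0."""
    return (r ** p + gamma ** (-p)) ** (1.0 / p)

def morse_vdw(r, D=0.1, alpha=1.5, r0=2.0, gamma=1.0, p=6.0):
    """Morse-type van der Waals energy evaluated at the shielded distance."""
    R = shielded_distance(r, gamma, p)
    e = np.exp(-alpha * (R - r0))
    return D * (e * e - 2.0 * e)      # well of depth D at R = r0, decays to 0 at long range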

78
Electrostatics
  • A shielded electrostatic interaction is used to account for the orbital overlap of electrons at close distances.
  • Long-range electrostatic interactions are handled using the Fast Multipole Method (FMM).

79
Charge Equilibration (QEq) Method
  • The fixed partial-charge model used in classical simulations is inadequate for reacting systems.
  • In principle, one must compute the partial charges on atoms at each time step using an ab-initio method.
  • Instead, we compute the partial charges on atoms at each time step using a simplified approach called the QEq method.

80
Charge Equilibration (QEq) Method
  • Expand the electrostatic energy as a Taylor series in charge around the neutral charge.
  • Identify the term linear in charge as the electronegativity of the atom, and the quadratic term as the electrostatic potential and self-energy.
  • Using these, solve for the self-term of the partial derivative of the electrostatic energy.

81
Qeq Method
  • We need to minimize
  • subject to

where
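The equations on this and the following slides are images; for orientation, the standard QEq objective and constraint consistent with the preceding description can be written (hedged, since the slides' exact notation is not reproduced) as:

\[
\min_{q}\ E(q) = \sum_i \Big( \chi_i q_i + \tfrac{1}{2} J_i q_i^2 \Big) + \sum_{i<j} H_{ij}\, q_i q_j
\qquad \text{subject to} \qquad \sum_i q_i = 0,
\]

where chi_i is the electronegativity, J_i the self (hardness) term, and H_ij the shielded Coulomb kernel.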
82
Qeq Method
83
Qeq Method
From charge neutrality, we get
84
Qeq Method
Let
where
or
85
Qeq Method
  • Substituting back, we get

We need to solve 2n equations with the kernel H for s_i and t_i.
86
Qeq Method
  • Observations
    • H is dense.
    • The diagonal term is J_i.
    • The shielding term is short-range.
    • The long-range behavior of the kernel is 1/r.
  • Hierarchical methods to the rescue! Multipole-accelerated matrix-vector products combined with GMRES and a preconditioner (see the sketch below).
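A minimal sketch of this solve: two linear systems with the same kernel H (H s = -chi and H t = 1), followed by the charge-neutrality correction q = s - (sum(s)/sum(t)) t. The dense H built here is a small synthetic stand-in; in the actual method H is only accessed through multipole-accelerated matrix-vector products.

import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

def qeq_charges(matvec, chi):
    """Partial charges minimizing the QEq energy subject to sum(q) = 0."""
    n = len(chi)
    H = LinearOperator((n, n), matvec=matvec)     # matvec could be a multipole evaluation
    s, _ = gmres(H, -chi)                         # H s = -chi
    t, _ = gmres(H, np.ones(n))                   # H t = 1
    mu = s.sum() / t.sum()                        # Lagrange multiplier from charge neutrality
    return s - mu * t

# tiny synthetic example: diagonal hardness plus a 1/r-like off-diagonal tail
rng = np.random.default_rng(0)
pos = rng.random((20, 3)) * 10.0
r = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1) + np.eye(20)
H_dense = 1.0 / r + np.diag(np.full(20, 10.0))
q = qeq_charges(lambda v: H_dense @ v, chi=rng.random(20))
print(abs(q.sum()))    # ~0 up to the iterative-solver tolerance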

87
Qeq Parallel Implementation
  • The key element is the parallel matrix-vector (multipole) operation.
  • Spatial decomposition using space-filling curves.
  • The decomposition is generally regular, since the simulated systems are typically dense.
  • Data addressing is handled by function shipping.
  • Preconditioning via a truncated kernel.
  • GMRES never reached its restart length of 10 (it converged first).

88
(No Transcript)
89
(No Transcript)
90
Parallel ReaX Performance
  • ReaX potentials are near-field.
  • The primary parallel overhead is in the multipole operations.
  • Excellent performance is obtained over the rest of the code.
  • Comprehensive integration and the resulting (integrated) speedups are being evaluated.

91
... and yet to come...
  • Comprehensive validation of parallel ReaX.
  • System validation of the code, from simple systems (small hydrocarbons) to complex molecules (larger proteins).
  • Parametrization and tuning of force fields.

92
Parting Thoughts
  • There is a great deal computer scientists can contribute to engineering and the sciences.
  • In our group, we work on projects ranging from structural control (model reduction, control, sensing, macroprogramming -- NSF/ITR) and materials modeling (multiscale simulations -- NSF, DoE) to systems biology (interaction networks -- NIH) and finite element solvers (sparse solvers -- NSF).

93
Parting Thoughts
  • There is a great deal we can learn from
    scientific and engineering disciplines as well.
  • Our experiences in these applications motivate
    our work in core areas of computing, including
    wide-area storage systems (NSF/STI), speculative
    execution (Intel), relaxed synchronization (NSF),
    and scaling to the extreme (DARPA/HPCS).

94
Thank you very much!
References and papers at http://www.cs.purdue.edu/pdsl/