1
On the Use of Sparse Direct Solver in a
Projection Method for Generalized Eigenvalue
Problems Using Numerical Integration
  • Takamitsu Watanabe and Yusaku Yamamoto
  • Dept. of Computational Science and Engineering
  • Nagoya University

2
Outline
  • Background
  • Objective of our study
  • Projection method for generalized eigenvalue
    problems using numerical integration
  • Application of the sparse direct solver
  • Numerical results
  • Conclusion

3
Background
  • Generalized eigenvalue problems in quantum
    chemistry and structural engineering

Given $A, B \in \mathbb{R}^{n \times n}$, find $\lambda$ and $x \neq 0$
such that $Ax = \lambda Bx$.
  • Problem characteristics
  • A and B are large and sparse.
  • A is real symmetric and B is s.p.d.
  • Eigenvalues are real.
  • Eigenvalues in a specified interval are often
    needed.

[Figure: eigenvalues on the real axis; the specified interval contains the
eigenvalues of interest, e.g. the HOMO and LUMO levels.]
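As a purely illustrative instance of the problem stated above (the matrices are made up and are not from the presentation), the following sketch builds a tiny symmetric pair $A$, $B$ and checks that the eigenvalues are real.

```python
# Toy generalized eigenvalue problem A x = lambda B x (illustrative only).
import numpy as np
from scipy.linalg import eigh

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # real symmetric
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])      # symmetric positive definite (s.p.d.)
lam, X = eigh(A, B)             # dense solver for the symmetric-definite problem
print(lam)                      # the eigenvalues are real
```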
4
Background (cont'd)
  • A projection method using numerical integration
  • Sakurai and Sugiura, A projection method for
    generalized eigenvalue problems,
  • J. Comput. Appl. Math. (2003)
  • Reduce the original problem to a small
    generalized eigenvalue problem within a specified
    region in the complex plane.
  • By solving the small problem, the eigenvalues
    lying in the region can be obtained.
  • The main part of the computation is solving multiple
    sets of linear simultaneous equations.
  • Suited for parallel computation.

[Figure: the original problem is reduced to a small generalized eigenvalue
problem whose eigenvalues lie within the specified region.]
5
Objective of our study
  • Previous approach
  • Solve the linear simultaneous equations by an
    iterative method.
  • The number of iterations needed for convergence
    differs from one set of simultaneous equations to
    another.
  • This brings about load imbalance between
    processors, decreasing parallel efficiency.
  • Our study
  • Solve the linear simultaneous equations by a
    sparse direct solver without pivoting.
  • Load balance will be improved since the
    computational time is the same for every set of
    linear simultaneous equations.

6
Projection method for generalized eigenvalue
problems using numerical integration
Suppose that the matrix pencil $zB - A$ has $n$ distinct eigenvalues
$\lambda_1, \dots, \lambda_n$ and that we need the $m$ eigenvalues
$\lambda_1, \dots, \lambda_m$ that lie inside a closed curve $\Gamma$ in the
complex plane.
Using two arbitrary complex vectors $u, v \in \mathbb{C}^n$, define a complex
function
$$f(z) = u^{H} (zB - A)^{-1} v .$$
Then $f(z)$ can be expanded as follows:
$$f(z) = \sum_{k=1}^{n} \frac{\nu_k}{z - \lambda_k} + g(z),$$
where $\nu_k \in \mathbb{C}$ and $g(z)$ is a polynomial in $z$.
7
Projection method for generalized eigenvalue
problems using numerical integration (cont'd)
Further define the moments by
$$\mu_k = \frac{1}{2\pi i} \oint_{\Gamma} (z - \gamma)^{k} f(z)\, dz,
\qquad k = 0, 1, \dots, 2m - 1,$$
where $\gamma$ is a point inside $\Gamma$, and two Hankel matrices by
$$H_m = [\mu_{i+j-2}]_{i,j=1}^{m}, \qquad
H_m^{<} = [\mu_{i+j-1}]_{i,j=1}^{m}.$$
Theorem: $\lambda_1 - \gamma, \dots, \lambda_m - \gamma$ are the $m$ roots of
$\det(H_m^{<} - \lambda H_m) = 0$.
The original $n \times n$ problem has thus been reduced to a small
$m \times m$ problem through the contour integral.
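As a minimal sketch of this reduction (an illustration under the definitions above, not the authors' code; the function name and arguments are assumptions), the following routine forms the two Hankel matrices from the $2m$ moments and recovers the eigenvalues inside $\Gamma$ from the small pencil.

```python
# Sketch: eigenvalues inside Gamma from the moments mu_0, ..., mu_{2m-1}.
import numpy as np
from scipy.linalg import eig

def eigenvalues_from_moments(mu, m, gamma):
    H  = np.array([[mu[i + j]     for j in range(m)] for i in range(m)])  # H_m
    Hs = np.array([[mu[i + j + 1] for j in range(m)] for i in range(m)])  # H_m^<
    theta, _ = eig(Hs, H)        # roots of det(H_m^< - theta H_m) = 0
    return gamma + theta         # shift back: lambda_j = gamma + theta_j
```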
8
Projection method for generalized eigenvalue
problems using numerical integration (cont'd)
Computation of the moments
  • Set the path of integration $\Gamma$ to a circle with center $\gamma$ and
    radius $r$.
  • Approximate the integral using the trapezoidal rule with $N$ equally
    spaced points $z_j = \gamma + r\, e^{2\pi i j / N}$, $j = 0, \dots, N-1$:
    $$\mu_k \approx \frac{1}{N} \sum_{j=0}^{N-1} (z_j - \gamma)^{k+1} f(z_j).$$
  • The function values $f(z_j)$ have to be computed for each $z_j$.
  • Solution of $N$ independent sets of linear simultaneous equations is
    necessary ($N = 64$ to $128$).
[Figure: circular path of integration with center $\gamma$ and radius $r$.]
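The moment computation can be sketched as follows (an illustration, not the authors' implementation): SciPy's general sparse LU factorization is used here as a stand-in for the complex symmetric direct solver introduced on the following slides, but the structure, $N$ independent factorizations and solves, is the same.

```python
# Sketch: trapezoidal-rule moments mu_k, k = 0, ..., 2m-1 (see formula above).
import numpy as np
import scipy.sparse.linalg as spla

def moments_trapezoidal(A, B, u, v, gamma, r, N=64, m=4):
    mu = np.zeros(2 * m, dtype=complex)
    for j in range(N):                                   # N independent systems
        z = gamma + r * np.exp(2j * np.pi * j / N)       # quadrature point z_j
        y = spla.splu((z * B - A).tocsc()).solve(v)      # solve (z_j B - A) y = v
        fz = np.vdot(u, y)                               # f(z_j) = u^H y
        w = z - gamma
        for k in range(2 * m):
            mu[k] += w ** (k + 1) * fz                   # trapezoidal-rule term
    return mu / N
```

The resulting moments can then be passed to eigenvalues_from_moments above to obtain the approximate eigenvalues inside the circle.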
9
Application of the sparse direct solver
  • $A$ and $B$ are sparse symmetric matrices, and $z_j$ is a complex number.

The coefficient matrix $z_j B - A$ of each set of equations
$(z_j B - A)\, y = v$ is a sparse complex symmetric matrix.
  • Application of the sparse direct solver
  • For a sparse s.p.d. matrix, the sparse direct
    solver provides an efficient way of solving the
    linear simultaneous equations.
  • We adopt this approach by extending the sparse
    direct solver to deal with complex symmetric
    matrices.

10
The sparse direct solver
  • Characteristics
  • Reduce the computational work and memory
    requirements of the Cholesky factorization by
    exploiting the sparsity of the matrix.
  • Stability is guaranteed when the matrix is s.p.d.
  • Efficient parallelization techniques are
    available.
  • Phases of the solver
  • Ordering: find a permutation of rows/columns that reduces the
    computational work and memory requirements.
  • Symbolic factorization: estimate the computational work and memory
    requirements and prepare the data structures to store the Cholesky factor.
  • Cholesky factorization.
  • Triangular solution.
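To illustrate how these phases split in practice (using SciPy's SuperLU as an assumed stand-in for the Cholesky-based solver described here, with a made-up test matrix), the ordering and factorization phases are performed once, and the cheap triangular-solution phase is then repeated for each right-hand side:

```python
# Illustrative stand-in: phases of a sparse direct solve with SciPy's SuperLU.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
M = sp.random(n, n, density=0.01, format='csr', random_state=0)
K = (M + M.T + 20.0 * sp.identity(n, format='csr')).tocsc()  # sparse symmetric test matrix
lu = spla.splu(K, permc_spec='MMD_AT_PLUS_A')  # ordering + symbolic + numeric factorization
x1 = lu.solve(np.ones(n))                      # triangular solution, reuses the factor
x2 = lu.solve(np.arange(float(n)))             # another right-hand side, no refactorization
```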
11
Extension of the sparse direct solver to complex
symmetric matrices
  • Algorithm
  • Extension is straightforward by using the
    Cholesky factorization for complex symmetric
    matrices.
  • Advantages such as reduced computational work,
    reduced memory requirements and parallelizability
    are carried over.
  • Accuracy and stability
  • Theoretically, pivoting is necessary when
    factorizing complex symmetric matrices.
  • Since our algorithm does not incorporate
    pivoting, accuracy and stability are not
    guaranteed.
  • We examine the accuracy and stability
    experimentally by comparing the results with
    those obtained using GEPP.
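A minimal dense sketch (an assumption, not the authors' sparse implementation) of the factorization this slide refers to: $A = LL^{T}$ with a non-conjugated transpose, computed without pivoting, together with a simple growth-factor estimate of the kind monitored in the experiments.

```python
# Sketch: Cholesky-type factorization A = L L^T for a complex symmetric matrix,
# without pivoting, tracking the growth of the Schur-complement entries.
import numpy as np

def complex_symmetric_cholesky(A):
    S = np.array(A, dtype=complex)               # working copy / Schur complement
    n = S.shape[0]
    L = np.zeros_like(S)
    a_max = np.abs(S).max()
    g_max = a_max
    for k in range(n):
        L[k, k] = np.sqrt(S[k, k])               # complex square root, no pivoting
        L[k+1:, k] = S[k+1:, k] / L[k, k]
        S[k+1:, k+1:] -= np.outer(L[k+1:, k], L[k+1:, k])   # symmetric update
        if k + 1 < n:
            g_max = max(g_max, np.abs(S[k+1:, k+1:]).max())
    return L, g_max / a_max                      # factor and growth-factor estimate
```

If the estimate returned here became large, one would fall back to a pivoted or iterative solver, in the spirit of the hybrid method mentioned in the conclusion.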

12
Numerical results
  • Matrices used in the experiments

[Table: test matrices: BCSSTK12 and BCSSTK13 from the Harwell-Boeing library,
and a matrix denoted FMO.]
  • For each matrix, we solve the equations with the
    sparse direct solver (with minimum-degree (MD) and
    nested-dissection (ND) ordering) and with GEPP.
  • We compare the computational time and the accuracy
    of the eigenvalues.

13
Computational time
[Table: computational time (sec.) for one set of linear simultaneous
equations and speedup, on a PowerPC G5 (2.0 GHz), for BCSSTK12, BCSSTK13,
and FMO.]
  • The sparse direct solver is two to over one
    hundred times faster than GEPP, depending on the
    nonzero structure.

14
Accuracy of the eigenvalues (BCSSTK12)
[Figure: distribution of the eigenvalues and the specified interval; the
interval contains 4 eigenvalues.]
[Table: relative errors in the eigenvalues for each algorithm (N = 64).]
  • The errors were of the same order for all three
    solvers.
  • Also, the growth factor for the sparse solver
    was O(1).

15
Accuracy of the eigenvalues (BCSSTK13)
[Figure: distribution of the eigenvalues and the specified interval; the
interval contains 3 eigenvalues.]
[Table: relative errors in the eigenvalues for each algorithm (N = 64).]
The errors were of the same order for all three
solvers.
16
Accuracy of the eigenvalues (FMO)
[Figure: distribution of the eigenvalues and the specified interval; the
interval contains 4 eigenvalues.]
[Table: relative errors in the eigenvalues for each algorithm (N = 64).]
The errors were of the same order for all three
solvers.
17
Conclusion
  • Summary of this study
  • We applied a complex symmetric version of the
    sparse direct solver to a projection method for
    generalized eigenvalue problems using numerical
    integration.
  • The sparse solver succeeded in solving the linear
    simultaneous equations stably and accurately,
    producing eigenvalues that are as accurate as
    those obtained by GEPP.
  • Future work
  • Apply our algorithm to larger matrices arising
    from quantum chemistry applications.
  • Construct a hybrid method that uses an iterative
    solver when the growth factor becomes too large.
  • Parallelize the sparse solver to enable more than
    N processors to be used.