APC523/AST523 Scientific Computation in Astrophysics - PowerPoint PPT Presentation

1
APC523/AST523 Scientific Computation in
Astrophysics
  • Lecture 6 and 7
  • Numerical Linear Algebra

2
Why start with linear algebra?
Solving systems of linear equations is probably
the simplest task in computational science. It is
ubiquitous, and often part of solving a larger
problem. Symbolic manipulation packages (such as
SMP, Mathematica, and Maple) can perform most
tasks of linear algebra for small systems.
However, numerical solutions are needed for large
numbers of equations.
3
  • In general, a linear system can be written as
    a set of M equations in N unknowns.
  • If M = N, the system is closed.
  • If one or more of the equations is a linear
    combination of the others, the equations are
    degenerate. If det(A) = 0 the system is singular.
  • Numerical methods have additional problems when
  • the system is degenerate to within round-off error
  • accumulated round-off swamps the solution (the
    magnitudes of the xi vary widely); it may be
    necessary to normalize the equations before solving

a11 x1 + a12 x2 + ... + a1N xN = b1
a21 x1 + a22 x2 + ... + a2N xN = b2
...
aM1 x1 + aM2 x2 + ... + aMN xN = bM

Or in matrix form: Ax = b
4
Basic tasks of linear algebra
  • Solve Ax = b, given A and b
  • Solve Ax = b, given A and multiple right-hand sides b
  • Calculate A^(-1)
  • Calculate the determinant of A
  • Calculate the eigenvalues and eigenvectors of A
  • etc.
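Each of these tasks is one library call in practice. A minimal NumPy sketch (the matrix and right-hand side are illustrative values, not from the slides):

```python
import numpy as np

# Illustrative 3x3 symmetric system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

x = np.linalg.solve(A, b)            # solve Ax = b
B = np.column_stack([b, 2.0 * b])    # multiple right-hand sides as columns
X = np.linalg.solve(A, B)            # solves both at once
Ainv = np.linalg.inv(A)              # explicit inverse (rarely needed)
detA = np.linalg.det(A)              # determinant
evals, evecs = np.linalg.eigh(A)     # eigensystem (symmetric case)
```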

5
LINPACK and LAPACK
  • LINPACK is an (old) collection of routines for
    linear algebra.
  • It has been combined with a package for
    eigensystems to create LAPACK (Linear Algebra
    PACKage), a large collection of routines for
    linear algebra available from netlib.org. It
    contains everything from
  • vector-vector operations (add/multiply)
  • vector-matrix operations (multiply)
  • matrix-matrix operations
  • general and iterative methods for solving Ax = b
  • eigensystem routines
  • Details at http://www.netlib.org/lapack
  • The LINPACK benchmark is based on the two
    routines DGEFA and DGESL

6
Solving Ax = b
  • The general solution is given by Cramer's rule:
    xi = det(A(i)) / det(A)
  • where A(i) is the matrix formed by
    replacing the i-th column of A with b.
  • But Cramer's rule is not practical for n > 3, nor
    even very stable.
  • Via cofactor expansion of the determinants, it
    requires about e (n+1)! operations for large n.
  • That would be 2.56 x 10^160 operations for n = 100,
    or about 10^140 years on a teraflop computer.
    Clearly we need something better.
  • Methods can be classified into two types:
  • Direct methods get the exact answer (up to
    round-off) in a finite number of operations.
  • Iterative methods provide better and better
    approximate solutions with increasing work.
  • For small n, or sparse matrices, direct methods
    are best.
  • For large matrices (10^6 x 10^6) iterative methods
    may be the only choice.
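A sketch of Cramer's rule for a small system (the determinants here are delegated to np.linalg.det, which is LU-based and so hides the factorial cost of cofactor expansion; the point is only the structure of the rule, and the test system is illustrative):

```python
import numpy as np

def cramer_solve(A, b):
    """Cramer's rule: x_i = det(A(i)) / det(A), where A(i) is A with
    its i-th column replaced by b."""
    n = len(b)
    detA = np.linalg.det(A)
    x = np.empty(n)
    for i in range(n):
        Ai = np.array(A, dtype=float)  # copy of A
        Ai[:, i] = b                   # replace i-th column with b
        x[i] = np.linalg.det(Ai) / detA
    return x

# Illustrative 3x3 system
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = cramer_solve(A, b)
```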

7
Gaussian elimination
You probably learned this in freshman algebra.
Consider the system below. Add -ai1/a11 times the
first equation to the i-th equation. This cancels
x1 from all the equations below the first,
converting the leading column to zero below the
first equation. Then add -a'j2/a'22 times the
second row to the j-th equation below the second,
where the primes denote coefficients updated in
the first step (e.g. a'22 = a22 - a21 a12 / a11).
This cancels x2 from all equations below the
second, converting the second column to zero
below the second equation. Continue.
a11 x1 + a12 x2 + ... + a1N xN = b1
a21 x1 + a22 x2 + ... + a2N xN = b2
...
aM1 x1 + aM2 x2 + ... + aMN xN = bM
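The row operations above can be sketched as follows (no pivoting, so nonzero pivots are assumed; the test matrix is illustrative):

```python
import numpy as np

def forward_eliminate(A, b):
    """Reduce Ax = b to upper-triangular Ux = c by the row operations
    described above. No pivoting: nonzero pivots are assumed."""
    U = np.array(A, dtype=float)
    c = np.array(b, dtype=float)
    n = len(c)
    for k in range(n - 1):              # eliminate below pivot (k, k)
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]       # multiplier a_ik / a_kk
            U[i, k:] -= m * U[k, k:]    # row_i <- row_i - m * row_k
            c[i] -= m * c[k]
    return U, c

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 3.0])
U, c = forward_eliminate(A, b)
```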
8
Gaussian elimination with pivoting
During Gaussian elimination, if the leading
matrix element of the row being worked on is
zero, the method fails. Pivoting refers to
rearranging the rows (equations) so the leading
matrix element at the pivot point is never
zero. In fact, Gaussian elimination without
pivoting is unstable (large amplification factor
k). More formally, Gaussian elimination requires
each leading principal submatrix of A (but not A
itself) to be non-singular. If A does not satisfy
this requirement, pivoting transforms it into a
matrix A' that does.
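The instability without pivoting shows up already in a 2x2 system (the numbers below are illustrative): a tiny pivot produces an enormous multiplier, and round-off swamps the solution.

```python
import numpy as np

# With pivot a11 = eps near round-off, the multiplier 1/eps is enormous
# and elimination without pivoting destroys the answer (large
# amplification factor).
eps = 1e-20
A = np.array([[eps, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])

# Eliminate x1 from the second equation without pivoting:
m = A[1, 0] / A[0, 0]            # multiplier = 1e20
u22 = A[1, 1] - m * A[0, 1]      # 1 - 1e20: the "1" is lost to round-off
c2 = b[1] - m * b[0]             # 2 - 1e20: the "2" is lost too
x2 = c2 / u22
x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
x_no_pivot = np.array([x1, x2])  # badly wrong in the first component

x_pivot = np.linalg.solve(A, b)  # LAPACK pivots; close to [1, 1]
```

Swapping the two rows first makes the multiplier small and the elimination stable.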
9
Back substitution
Gaussian elimination factors A into an upper
triangular matrix U, e.g. for a 5x5 system A
becomes

    X X X X X
      X X X X
U =     X X X
          X X
            X

The solution to Ax = b is now given by back
substitution. The last equation gives the solution
for xN. Substitute it into the (N-1)-th equation
and solve for xN-1. Continue. This results in an
O(N^3) method for solving Ax = b.
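Back substitution as described above, in a short sketch (assumes U is upper triangular with nonzero diagonal; the test system is illustrative):

```python
import numpy as np

def back_substitute(U, c):
    """Solve Ux = c for upper-triangular U: the last equation gives
    x_N directly; then work upward, substituting known components."""
    n = len(c)
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        # subtract the already-known terms, divide by the diagonal
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Illustrative upper-triangular system
U = np.array([[2.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 2.0]])
c = np.array([1.0, 2.0, 3.0])
x = back_substitute(U, c)
```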
10
LU decomposition
We can factor a matrix A into a product of
lower and upper triangular matrices, A = LU:

    X                      X X X X
    X X                      X X X
L = X X X            U =       X X
    X X X X                      X

Then Ax = (LU)x = L(Ux) = b. If we have the
factorization A = LU, solving Ax = b requires two
steps: (1) solve Ly = b (forward
elimination); (2) solve Ux = y (back
substitution). LU decomposition is essentially
Gaussian elimination + back substitution.
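The two-step solve can be sketched with a Doolittle-style factorization (no pivoting, so nonzero pivots are assumed; library routines such as LAPACK's DGETRF pivot as well; the test matrix is illustrative):

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU without pivoting (nonzero pivots assumed).
    Returns unit-lower-triangular L and upper-triangular U, A = LU."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.array(A, dtype=float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # store the multiplier
            U[i, k:] -= L[i, k] * U[k, k:]   # eliminate below the pivot
    return L, U

def lu_solve(L, U, b):
    """Step (1): forward elimination Ly = b;
    step (2): back substitution Ux = y."""
    n = len(b)
    y = np.empty(n)
    for i in range(n):                       # L has unit diagonal
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[4.0, 3.0, 2.0],
              [2.0, 2.0, 3.0],
              [1.0, 1.0, 2.0]])
L, U = lu_decompose(A)
x = lu_solve(L, U, np.array([1.0, 2.0, 3.0]))
```

Once A = LU is computed, each additional right-hand side costs only the O(N^2) triangular solves.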
11
Astrophysical Application: Calculating level
populations in the ISM
  • ISM: the gas and dust that fills the space
    between the stars in the Milky Way galaxy.
  • In some places, gas in the ISM is glowing due to
  • photons from young, hot stars
  • photons from hot proto-white dwarfs
  • particle collisions in interstellar shock waves

12
HII region: ionized gas heated by a hot, young
star, e.g. the Orion nebula
13
(image-only slide)
14
Horsehead nebula
15
Planetary nebula: circumstellar matter heated by a
proto-white dwarf
Eskimo nebula
16
(image-only slide)
17
Veil nebula: supernova blast waves
18
The spectrum of light from a nebula reveals strong
emission lines.
From the ratios of the intensities of these lines,
astronomers can determine basic properties of the
gas (density and temperature).
How?
19
Line intensities are directly proportional to the
number of transitions per unit time between
electronic levels of the ion, atom, or
molecule. These transition rates, and hence the
level populations, are determined by rate
equations. For each species we must balance

rate out of level i = rate into level i

Let nj be the electron density in level j, and
(Rjk)_Y the transition rate coefficient for level
j -> k due to process Y. In general

nj Sum_k Sum_Y (Rjk)_Y = Sum_k nk Sum_Y (Rkj)_Y

What processes (Y) are there?
Energy level diagram for OIV ion
20
  • (1) Collisional excitation.
  • An inelastic collision with a free electron
    excites the internal state.
  • Rate is Rjk = ne γjk
  • ne = number density of free electrons
  • γjk = collisional rate coefficient
  • (2) Collisional de-excitation.
  • The reverse of the above; the rate is Rkj = ne γkj
  • Can show γjk = (gk/gj) exp(-Ejk/kT) γkj
  • where gk, gj = statistical weights
    (degeneracies) of the levels
  • Ejk = energy difference between the
    levels
  • k = Boltzmann constant
  • T = temperature of the gas
  • Collisions can also occur with other ions/atoms,
    but these are usually negligible compared to the
    electron collision rate. Ignore them here.

21
(3) Radiative de-excitation. An electron can
spontaneously emit a photon and make the
transition k -> j (k > j). The rate is Rkj = Akj,
where Akj is the Einstein A coefficient.
(4) Stimulated emission. Rarely important (except
in masers). Ignore it here. So, in the end, we
can write the level population equations as

nj Sum_k (ne γjk + Ajk) = Sum_k nk (ne γkj + Akj)

where the A coefficient vanishes for upward
transitions.
22
For example, for a 3-level atom this becomes
three equations of the form (rate out of level) =
(rate into level). These are M equations in M
unknowns, BUT the equations are degenerate: any
one can be written as a linear combination of the
other two. We need one more independent equation
to close the system. Use the normalization
n1 + n2 + n3 = n (the total density of the species).
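The closure trick above can be sketched numerically; the rate coefficients below are made-up illustrative numbers, not real atomic data:

```python
import numpy as np

# Hypothetical rate coefficients for a 3-level system: R[j, k] is the
# total transition rate for j -> k (illustrative values only).
R = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 3.0],
              [0.2, 0.4, 0.0]])

# Statistical balance: row j of M encodes
#   (rate into level j) - (rate out of level j) = 0
M = R.T - np.diag(R.sum(axis=1))

# The three balance equations are degenerate (the rows of M sum to the
# zero vector), so replace one of them with the normalization
# n1 + n2 + n3 = 1 (populations as fractions of the total density).
M[-1, :] = 1.0
rhs = np.array([0.0, 0.0, 1.0])

n = np.linalg.solve(M, rhs)   # fractional level populations
```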
23
Intensity ratios
We can solve the linear system for a variety of
atoms and ions. The intensity of any spectral line
associated with the transition k -> j is just

Ikj = (rate of spontaneous emission) x (energy of
emitted photon) = nk Akj Ekj

So by solving the linear system for the nk, we can
compute line intensities, compare them to
observations, and determine the (ne, T) pair that
best fits the data.
24
Tri-diagonal matrices
Quite often, numerical algorithms result in a
tri-diagonal system of equations (for example,
the centered finite difference formula for the
second derivative).
25
Forward elimination/back substitution is
extremely fast for tri-diagonal matrices.
Gaussian elimination is O(N^3) for general
matrices; for tri-diagonal systems it is only
O(N) - a huge saving. Moreover, it works without
pivoting, provided the matrix is diagonally
dominant, that is |b| > |a| + |c| (with a, b, c
the sub-, main, and super-diagonals). Most linear
algebra packages have special routines for
tri-diagonal matrices which exploit this saving.
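The O(N) forward-elimination/back-substitution for tri-diagonal systems (often called the Thomas algorithm) can be sketched as follows (the test system is illustrative and diagonally dominant):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(N): a = sub-diagonal (length N-1),
    b = main diagonal (length N), c = super-diagonal (length N-1),
    d = right-hand side. Stable without pivoting when diagonally
    dominant, |b| > |a| + |c|."""
    n = len(b)
    cp = np.empty(n - 1)
    dp = np.empty(n)
    # Forward elimination: reduce to an upper bidiagonal system
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    # Back substitution
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

a = -np.ones(4)        # sub-diagonal
b = 4.0 * np.ones(5)   # main diagonal (|4| > |-1| + |-1|)
c = -np.ones(4)        # super-diagonal
d = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x = thomas_solve(a, b, c, d)
```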
26
Sparse matrices
  • When most of the components of a matrix are zero,
    the matrix is said to be sparse.
  • Obviously, general methods are wasteful for
    sparse matrices: most operations would involve 0 x 0.
  • Often, the non-zero portions of the matrix form a
    pattern.
  • Special-purpose methods have been developed for a
    wide variety of sparse matrices:
  • tridiagonal
  • band diagonal with bandwidth M
  • block diagonal
  • banded
  • Also important to know whether the matrix is
    symmetric (aij = aji), positive definite, and/or
    diagonally dominant: |aii| > Sum_{j != i} |aij|.
  • More complex algorithms often require solving
    large, sparse linear systems as an intermediate
    step, so we'll consider them now.
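As an illustration of the savings, a tridiagonal matrix can be stored and solved sparsely, keeping only ~3N nonzeros instead of N^2 entries (this sketch assumes SciPy is available; the values are illustrative):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# Tridiagonal (band-diagonal, bandwidth 1) matrix in sparse storage
N = 1000
main = 4.0 * np.ones(N)
off = -1.0 * np.ones(N - 1)
A = sparse.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

b = np.ones(N)
x = spsolve(A, b)   # direct sparse solve
```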

27
Direct methods for sparse matrices
Almost all are based on Gaussian elimination,
but substantial memory and computational savings
can be made depending on the specific structure
of the matrix. For examples, see LAPACK.
28
Iterative methods for sparse matrices
  • Iterative methods are by far the most important
    class of techniques for large systems.
  • Often the matrix is decomposed into A = P - Q,
    where P is easily invertible; the basic iteration
    is then P x_{i+1} = Q x_i + b.
  • P is the preconditioner.
  • Techniques:
  • Conjugate gradient: for positive definite
    symmetric matrices. Minimize f(x) = (1/2) x^T A x -
    b^T x. The minimum occurs when grad f = Ax - b = 0.
    This is nonlinear root finding in multi-dimensions
    (see lecture 9).
  • Bi-conjugate gradient: for nonsymmetric systems.
  • GMRES: for general nonsymmetric systems.
  • Incomplete Cholesky decomposition: a popular
    preconditioner for positive definite symmetric
    matrices, about twice as fast as other methods.
    Chooses U so that U = L^T (i.e. A is approximately L L^T).
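The basic iteration P x_{i+1} = Q x_i + b can be sketched with the simplest splitting P = diag(A) (Jacobi iteration; the test matrix is illustrative and diagonally dominant, so the iteration converges):

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Basic iteration P x_{i+1} = Q x_i + b with the simplest splitting:
    P = diag(A), Q = P - A. Converges when A is diagonally dominant."""
    P = np.diag(A)            # diagonal of A, kept as a vector
    Q = np.diag(P) - A        # Q = P - A
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (Q @ x + b) / P   # "inverting" P is an elementwise division
    return x

# Illustrative diagonally dominant system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b)
```

The whole point of choosing P well is that solving P x_{i+1} = (stuff) must be cheap; a diagonal P makes each iteration O(N) plus one matrix-vector multiply.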

29
Singular matrices
Faster methods are possible if the matrix is
positive definite, that is, x^T A x > 0 for any
x != 0. In this case A has all positive
eigenvalues. If det(A) = 0, or is zero to within
round-off, most standard techniques fail. Instead,
use Singular Value Decomposition (SVD).
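A sketch of an SVD-based solve for a singular system (the matrix is illustrative: its third row is the sum of the first two, so det(A) = 0 and standard elimination breaks down):

```python
import numpy as np

# Singular matrix: row 3 = row 1 + row 2
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])
b = np.array([1.0, 1.0, 2.0])   # consistent: b3 = b1 + b2

U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)   # drop near-zero singular values

# Minimum-norm solution via the pseudo-inverse: x = V S^+ U^T b
x = Vt.T @ (s_inv * (U.T @ b))
```

Zeroing the tiny singular values is what rescues the solve: the reciprocal of a singular value that is zero to within round-off would otherwise amplify noise without bound.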