Title: Numerical Linear Algebra
1. Numerical Linear Algebra
- Chris Rambicure
- Guojin Chen
- Christopher Cprek
2. WHY USE LINEAR ALGEBRA?
- 1) Because it is applicable to many problems.
- 2) And it's usually easier than calculus.
3. TRUE
"Linear algebra has become as basic and as applicable as calculus, and fortunately it is easier."
- Gilbert Strang
4. HERE COME THE BASICS
5. SCALARS
- What you're used to dealing with
- Have magnitude, but no direction
6. VECTORS
- Represent both a magnitude and a direction
- Can add or subtract, multiply by scalars, or do
dot or cross products
7. THE MATRIX
- It's an m×n array
- Holds a set of numerical values
- Especially useful in solving certain types of equations
- Operations: Transpose, Scalar Multiply, Matrix Add, Matrix Multiply (see the sketch below)
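A minimal NumPy sketch of these four operations (an illustration added for this transcript, not from the original slides):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    B = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    print(A.T)        # transpose
    print(2.0 * A)    # scalar multiply
    print(A + B)      # matrix add
    print(A @ B)      # matrix multiply (row-by-column, not element-wise)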
8. EIGENVALUES
- You can choose a matrix A, a vector x, and a scalar s so that Ax = sx, meaning the matrix just scales the vector
- x in this case is called an eigenvector, and s is its eigenvalue
9. CHARACTERISTIC EQUATION
- det(M - tI) = 0
- M = the matrix
- I = the identity
- t = the eigenvalues
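As a quick check (a NumPy sketch, not part of the original slides), the roots of det(M - tI) = 0 match the eigenvalues NumPy computes directly:

    import numpy as np

    M = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # np.poly(M) returns the coefficients of the characteristic polynomial
    print(np.roots(np.poly(M)))   # roots of det(tI - M) = 0 -> 3 and 1

    # Same eigenvalues directly, plus a check of the Ax = sx property
    vals, vecs = np.linalg.eig(M)
    print(vals)                                               # 3 and 1 (order may differ)
    print(np.allclose(M @ vecs[:, 0], vals[0] * vecs[:, 0]))  # True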
10. CAYLEY-HAMILTON THEOREM
- IF A is a square matrix
- AND p(t) = det(tI - A) is its characteristic polynomial
- THEN p(A) = 0, meaning A satisfies its characteristic equation
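A numerical check of the theorem (a sketch assuming NumPy; the matrix is an arbitrary example):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])

    # Characteristic polynomial p(t) = det(tI - A) = t^2 - 5t + 6
    c = np.poly(A)                               # -> [1., -5., 6.]

    # Evaluate p at the matrix itself: p(A) = A^2 - 5A + 6I
    pA = c[0] * (A @ A) + c[1] * A + c[2] * np.eye(2)
    print(np.allclose(pA, 0))                    # True: A satisfies p(A) = 0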
11. A Couple Names, A Couple Algorithms
12. IN THE BEGINNING (Grassmann's Linear Algebra)
- Grassmann is considered to be the father of linear algebra
- Developed the idea of a linear algebra in which the symbols representing geometric objects can be manipulated
- Several of his operations: the interior product, the exterior product, and the multiproduct
13. What's a Multiproduct Equation Look Like?
- d1⊗d2 + d2⊗d1 = 0
- The multiproduct has many uses: scientific, mathematical, and industrial
- Later updated by William Clifford
14. CLIFFORD'S MODIFICATION TO GRASSMANN'S EQUATION
- di⊗dj + dj⊗di = 2δij
- The 2δij is what's referred to as Kronecker's symbol
- Both of these equations are used in quantum theory math
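The symbols above are partly garbled in this transcript, so as an illustrative sketch (an assumption, not taken from the slides): the 2×2 Pauli matrices of quantum theory satisfy exactly Clifford's relation, which is easy to verify numerically:

    import numpy as np

    # Pauli matrices: a standard set of generators satisfying Clifford's relation
    s = [np.array([[0, 1], [1, 0]], dtype=complex),    # sigma_x
         np.array([[0, -1j], [1j, 0]]),                # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]   # sigma_z

    I2 = np.eye(2)
    for i in range(3):
        for j in range(3):
            anti = s[i] @ s[j] + s[j] @ s[i]           # d_i d_j + d_j d_i
            assert np.allclose(anti, 2 * (i == j) * I2)
    print("d_i d_j + d_j d_i = 2 delta_ij I holds for the Pauli matrices")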
15. VECTOR SPACE
- Another idea closely tied to Grassmann
- A vector space is a set of vectors closed under addition and scalar multiplication, so it always contains the origin
- It is usually infinite
- A subspace is a subset of a vector space which is, of course, also a vector space
16. Cholesky Decomposition
- Algorithm developed by André-Louis Cholesky
- Takes a symmetric positive definite matrix and factors it into a triangular matrix times its transpose
- A = RR^T
- Useful for matrix applications
- Becomes even more worthwhile in parallel
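A minimal demonstration (a sketch assuming NumPy; np.linalg.cholesky returns the lower-triangular factor):

    import numpy as np

    # A symmetric positive definite matrix (Cholesky requires this)
    A = np.array([[4.0, 2.0],
                  [2.0, 3.0]])

    R = np.linalg.cholesky(A)          # lower-triangular R
    print(R)
    print(np.allclose(R @ R.T, A))     # A = R R^T -> True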
17. HOW TO USE LINEAR ALGEBRA FOR PDES
- You can use matrices and vectors to solve partial differential equations
- For equations with lots of variables, you'll wind up with really sparse matrices
- Hence, the project we've been working on all year
18. BIBLIOGRAPHY
- Hermann Grassmann. Online. http://members.fortunecity.com/johnhays/grassmann.htm
- Abstract Linear Spaces. Online. http://www-groups.dcs.st-and.ac.uk/history/HistTopics/Abstract_linear_spaces.html
- Liberman, M. Linear Algebra Review. Online. http://www.ling.upenn.edu/courses/ling525/linear_algebra_review.html
- Cholesky Factorization. Online. http://www.netlib.org/utk/papers/factor/node9.html
19. Numerical Linear Algebra
- Guojin Chen
- Christopher Cprek
- Chris Rambicure
20. Johann Carl Friedrich Gauss
Born April 30, 1777 (Germany); died Feb 23, 1855 (Germany)
21. Gaussian Elimination
- LU Factorization
- Operation Count
- Instability of Gaussian Elimination without Pivoting
- Gaussian Elimination with Partial Pivoting
22. Linear Systems
A linear system of equations (n equations with n unknowns) can be written

    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2
    ...
    an1 x1 + an2 x2 + ... + ann xn = bn

Using matrices, the above system of linear equations can be written Ax = b.
23. Gauss Elimination and Back Substitution
Convert the system to triangular form, then solve it by back substitution (sketched below).
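A sketch of back substitution (added for this transcript, assuming NumPy), solving an already-triangular system from the bottom row up:

    import numpy as np

    def back_substitution(U, b):
        """Solve Ux = b for upper-triangular U, starting from the last row."""
        n = len(b)
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            # Subtract the already-known unknowns, then divide by the diagonal
            x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
        return x

    U = np.array([[2.0, 1.0, 1.0],
                  [0.0, 3.0, 1.0],
                  [0.0, 0.0, 4.0]])
    b = np.array([9.0, 7.0, 8.0])
    print(back_substitution(U, b))               # matches np.linalg.solve(U, b)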
25. LU Factorization
- Gaussian elimination transforms a full linear system into an upper-triangular one by applying simple linear transformations on the left.
- Let A be a square matrix. The idea is to transform A into an upper-triangular matrix U by introducing zeros below the diagonal.
26. LU Factorization
- This elimination process is equivalent to multiplying by a sequence of lower-triangular matrices Lk on the left:
- L(m-1) ... L2 L1 A = U
28. LU Factorization
- Setting L = (L1)^-1 (L2)^-1 ... (L(m-1))^-1
- We obtain an LU factorization of A:
- A = LU
29. In order to find a general solution of a system of equations, it is helpful to simplify the system as much as possible. Gauss elimination is a standard method (which has the advantage of being easy to implement on a computer) for doing this. Gauss elimination uses elementary operations. We can:
- interchange any two equations;
- multiply an equation by a (nonzero) constant;
- add a multiple of one equation to any other one;
and aim to reduce the system to triangular form. The system obtained after each operation is equivalent to the original one, meaning that they have the same solutions.
30. Algorithm of Gaussian Elimination without Pivoting

    U = A, L = I
    for k = 1 to m-1
        for j = k+1 to m
            l_jk = u_jk / u_kk
            u_j,k:m = u_j,k:m - l_jk * u_k,k:m
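A direct NumPy transcription of this algorithm (a sketch; 0-based indices instead of the slide's 1-based ones):

    import numpy as np

    def lu_no_pivot(A):
        """LU factorization without pivoting, as in the algorithm above."""
        m = A.shape[0]
        U = A.astype(float).copy()
        L = np.eye(m)
        for k in range(m - 1):                   # k = 1 .. m-1 on the slide
            for j in range(k + 1, m):            # rows below the pivot
                L[j, k] = U[j, k] / U[k, k]
                U[j, k:] = U[j, k:] - L[j, k] * U[k, k:]
        return L, U

    A = np.array([[2.0, 1.0, 1.0],
                  [4.0, 3.0, 3.0],
                  [8.0, 7.0, 9.0]])
    L, U = lu_no_pivot(A)
    print(np.allclose(L @ U, A))                 # True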
31. Operation Count
- There are 3 loops in the previous algorithm
- There are 2 flops per entry
- For each value of k, the inner loop is repeated for rows k+1, ..., m.
- Work for Gaussian elimination is ~ (2/3) m^3 flops
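Counting the work of the loops directly (a small sketch, assuming one division plus a multiply-subtract pair per updated entry) shows the ratio to (2/3) m^3 approaching 1:

    def flop_count(m):
        flops = 0
        for k in range(1, m):                    # outer elimination loop
            for j in range(k + 1, m + 1):        # rows below the pivot
                flops += 1 + 2 * (m - k + 1)     # division + 2 flops per entry
        return flops

    for m in (10, 100, 400):
        print(m, flop_count(m) / ((2 / 3) * m ** 3))   # tends to 1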
33. Instability of Gaussian Elimination without Pivoting
- Consider the following matrices:
- A1
- A2
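The slides' matrices A1 and A2 are not reproduced in this transcript. A classic pair illustrating the same point (an assumption, following Trefethen and Bau's standard example) is a matrix with a tiny pivot: eliminating without a row swap destroys all accuracy.

    import numpy as np

    A1 = np.array([[1e-20, 1.0],
                   [1.0,   1.0]])
    b = np.array([1.0, 2.0])

    # Eliminate without pivoting: the multiplier 1e20 swamps the arithmetic
    l21 = A1[1, 0] / A1[0, 0]                    # 1e20
    u22 = A1[1, 1] - l21 * A1[0, 1]              # rounds to -1e20
    x2 = (b[1] - l21 * b[0]) / u22
    x1 = (b[0] - A1[0, 1] * x2) / A1[0, 0]
    print(x1, x2)                                # x1 = 0.0: badly wrong
    print(np.linalg.solve(A1, b))                # with pivoting: ~[1. 1.]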
35. Pivoting
- Pivots
- Partial Pivoting
- Example
- Complete Pivoting
36. Pivot
37. Partial Pivoting
38. Example: P1
39. L1
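A sketch of elimination with partial pivoting (assuming NumPy; P1 and L1 above are the first permutation and elimination matrices of such a run):

    import numpy as np

    def lu_partial_pivot(A):
        """Factor PA = LU, swapping the largest remaining |entry| in
        column k into the pivot position at each step."""
        m = A.shape[0]
        U = A.astype(float).copy()
        L, P = np.eye(m), np.eye(m)
        for k in range(m - 1):
            i = k + np.argmax(np.abs(U[k:, k]))  # row of the largest pivot
            U[[k, i], k:] = U[[i, k], k:]        # swap rows of U
            L[[k, i], :k] = L[[i, k], :k]        # and of the stored multipliers
            P[[k, i], :] = P[[i, k], :]          # record the permutation
            for j in range(k + 1, m):
                L[j, k] = U[j, k] / U[k, k]
                U[j, k:] -= L[j, k] * U[k, k:]
        return P, L, U

    A = np.array([[2.0, 1.0, 1.0],
                  [4.0, 3.0, 3.0],
                  [8.0, 7.0, 9.0]])
    P, L, U = lu_partial_pivot(A)
    print(np.allclose(P @ A, L @ U))             # True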
43. References
- http://www.maths.soton.ac.uk/teaching/units/ma273/node8.html
- http://www.maths.soton.ac.uk/teaching/units/ma273/node9.html
- Numerical Linear Algebra by Lloyd Trefethen and David Bau, III
- http://www.sosmath.com/matrix/system1/system1.html
44. Numerical Linear Algebra: The Computer Age
- Christopher Cprek
- Chris Rambicure
- Guojin Chen
45. What I'll Be Covering
- How computers made Numerical Linear Algebra relevant
- LAPACK
- Solving dense matrices on parallel computers
46. Why All the Sudden Interest?
- Gregory Moore regards the axiomatization of abstract vector spaces to have been completed in the 1920s.
- Linear Algebra wasn't offered as a separate mathematics course at major universities until the 1950s and 60s.
- Interest in linear algebra skyrocketed.
47. Computers Made It Practical
- Before computers, solving a system of 100 equations with 100 unknowns was unheard of.
- The brute mathematical force of computers made solving linear algebra systems practical for all kinds of applications.
48. Computers and Linear Algebra
- The computer software Matlab provides a good example: it is among the most popular in engineering applications, and at its core it treats every problem as a linear algebra problem.
- A need for more advanced large-matrix operations resulted in LAPACK.
49. What is LAPACK?
- Linear Algebra PACKage
- Software package designed specifically for linear algebra applications.
- The original goal of the LAPACK project was to make the widely used EISPACK and LINPACK libraries run efficiently on shared-memory vector and parallel processors.
50. LAPACK continued
- LAPACK is written in Fortran 77 and provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems.
- Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double precision.
51. Parallel Dense Matrix Partitioning
- Parallel computers are well suited for processing large matrices.
- In order to process a matrix in parallel, it is necessary to partition the matrix so that the different partitions can be mapped to different processors.
52. Partitioning Dense Matrices
- Striped Partitioning
- Block-Striped
- Cyclic-Striped
- Block-Cyclic-Striped
- Checkerboard Partitioning
- Block-Checkerboard
- Cyclic-Checkerboard
- Block-Cyclic-Checkerboard
53. Striped Partitioning
- Matrix is divided into groups of complete rows or columns, and each processor is assigned one such group.
54. Striped Partitioning cont.
- Block-striped partitioning is when contiguous rows or columns are assigned to each processor together.
- Cyclic-striped partitioning is when rows or columns are sequentially assigned to processors in a wraparound manner.
- Block-cyclic-striped is a combination of the two.
55. Striped Partitioning cont.
- In a column-wise block striping of an n×n matrix on p processors (labeled P(0), P(1), ..., P(p-1)):
- P(i) contains columns with indices (n/p)i, (n/p)i + 1, ..., (n/p)(i+1) - 1.
- In row-wise cyclic striping:
- P(i) contains rows with indices i, i+p, i+2p, ..., i+n-p.
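These index formulas are easy to tabulate (a small sketch, assuming p divides n; block striping uses the first formula, cyclic the second):

    def block_striped(i, n, p):
        # P(i) owns columns (n/p)i, (n/p)i + 1, ..., (n/p)(i+1) - 1
        return list(range((n // p) * i, (n // p) * (i + 1)))

    def cyclic_striped(i, n, p):
        # P(i) owns rows i, i+p, i+2p, ..., i+n-p
        return list(range(i, n, p))

    n, p = 8, 4
    for i in range(p):
        print(i, block_striped(i, n, p), cyclic_striped(i, n, p))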
56. Checkerboard Partitioning
- The matrix is divided into smaller square or rectangular blocks or submatrices that are distributed among processors.
57. Checkerboard Partitioning cont.
- Much like striped partitioning, checkerboard partitioning may use block, cyclic, or a combination.
- A checkerboard-partitioned square matrix maps naturally onto a two-dimensional square mesh of processors. Mapping an n×n matrix onto a p-processor mesh divides it into blocks of size (n/√p) × (n/√p).
58. Matrix Transposition on a Mesh
- Assume that an n×n matrix is stored in an n×n mesh of processors, so each processor holds a single element.
- A diagonal runs down the mesh.
- An element above the diagonal moves down to the diagonal and then to the left to its destination processor.
- An element below the diagonal moves up to the diagonal and then to the right to its destination processor.
59. Matrix Transposition cont.
60. Matrix Transposition cont.
- An element at initial p8 moves to p4, p0, p1, and finally to p2.
- If p < n×n, then the transpose can be computed in two phases (sketched below):
- Square matrix blocks are treated as indivisible units, and whole blocks are communicated instead of individual elements.
- Then do a local rearrangement within the blocks.
61. Matrix Transposition cont.
- Communication and the Local Rearrangement
62. Matrix Transposition cont.
- The total parallel run-time of the procedure for transposition of a matrix on a parallel computer
63. Parallelization of Linear Algebra
- Transposition is just an example of how numerical linear algebra can be easily and effectively parallelized.
- The same techniques and principles can be applied to operations like multiplication, addition, solving, etc.
- This explains the current popularity of parallel methods in linear algebra.
64. Conclusion
- Linear algebra is flourishing in an age of computers, where there are limitless applications.
- LAPACK exists as an efficient code library for processing large systems of equations on parallel-processing computers.
- Parallel computers are very well suited to these kinds of problems.
65. Useful Links
- http://www.crpc.rice.edu/CRPC/brochure/res_la.html
- http://citeseer.nj.nec.com/26050.html
- http://www.maa.org/features/cowen.html
- http://www.nersc.gov/dhbailey/cs267/Lectures/Lect_10_2000.pdf
- http://www.cacr.caltech.edu/ASAP/news/specialevents/tutorialnla.htm
- http://www.netlib.org/scalapack/
- http://citeseer.nj.nec.com/125513.html
- http://discolab.rutgers.edu/classes/cs528/lectures/lecture7/
- http://www.cse.uiuc.edu/cse302/lec20/lec-matrix/lec-matrix.html