CS 267 Applications of Parallel Computers: Dense Linear Algebra

Transcript and Presenter's Notes

Title: CS 267 Applications of Parallel Computers: Dense Linear Algebra
1
CS 267 Applications of Parallel Computers: Dense Linear Algebra
  • James Demmel
  • http://www.cs.berkeley.edu/~demmel/cs267_171002.ppt

2
Outline
  • Motivation for Dense Linear Algebra
  • Benchmarks
  • Review Gaussian Elimination (GE) for solving Ax = b
  • Optimizing GE for caches on sequential machines
  • using matrix-matrix multiplication (BLAS)
  • LAPACK library overview and performance
  • Data layouts on parallel machines
  • Parallel Gaussian Elimination
  • ScaLAPACK library overview
  • Eigenvalue problems
  • Open Problems

3
Success Stories (with NERSC, LBNL)
  • Scattering in a quantum system of three charged
    particles (Rescigno, Baertschy, Isaacs and
    McCurdy, Dec. 24, 1999).

Cosmic Microwave Background Analysis, BOOMERanG
collaboration, MADCAP code (Apr. 27, 2000).
SuperLU
ScaLAPACK
4
Motivation (1)
  • 3 Basic Linear Algebra Problems
  • Linear Equations: Solve Ax = b for x
  • Least Squares: Find x that minimizes Σ rᵢ² where r = Ax - b
  • Eigenvalues: Find λ and x where Ax = λx
  • Singular Value Decomposition: AᵀAx = σ²x
  • Lots of variations depending on structure of A
  • A symmetric, positive definite, banded, ...

5
Motivation (2)
  • Why dense A, as opposed to sparse A?
  • Many large matrices are sparse, but ...
  • Dense algorithms are easier to understand
  • Some applications yield large dense matrices
  • Benchmarking
  • How fast is your computer?
    How fast can you solve dense Ax = b?
  • Large sparse matrix algorithms often yield
    smaller (but still large) dense problems

6
Winner of TOP500 (LINPACK Benchmark), by year

Year  Machine                         Tflops  Factor faster  Peak Tflops  Num Procs  N
2002  Earth System Computer, NEC      35.6    4.9            40.8         5104       1.04M
2001  ASCI White, IBM SP Power 3      7.2     1.5            11.1         7424       0.52M
2000  ASCI White, IBM SP Power 3      4.9     2.1            11.1         7424       0.43M
1999  ASCI Red, Intel PII Xeon        2.4     1.1            3.2          9632       0.36M
1998  ASCI Blue, IBM SP 604E          2.1     1.6            3.9          5808       0.43M
1997  ASCI Red, Intel Ppro, 200 MHz   1.3     3.6            1.8          9152       0.24M
1996  Hitachi CP-PACS                 0.37    1.3            0.6          2048       0.10M
1995  Intel Paragon XP/S MP           0.28    1              0.3          6768       0.13M

Source: Jack Dongarra (UTK)
7
Current Records for Solving Small Dense Systems
www.netlib.org, click on Performance DataBase Server

Megaflops
Machine                              n=100   n=1000   Peak
Intel Pentium 4 (1 proc, 2.53 GHz)   1190    2355     5060
NEC SX 5 (16 proc, 250 MHz)          --      45030    64000
NEC SX 5 (1 proc, 250 MHz)           856     7280     8000
8
Review of Gaussian Elimination (GE) for solving Ax = b
  • Add multiples of each row to later rows to make A
    upper triangular
  • Solve the resulting triangular system Ux = c by
    substitution

... for each column i, zero it out below the diagonal by
... adding multiples of row i to later rows
for i = 1 to n-1
    ... for each row j below row i
    for j = i+1 to n
        ... add a multiple of row i to row j
        for k = i to n
            A(j,k) = A(j,k) - (A(j,i)/A(i,i)) * A(i,k)
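
A minimal NumPy transcription of this loop nest may help make it concrete. This is an illustrative sketch only: there is no pivoting (all pivots A(i,i) are assumed nonzero), and the multiplier is read into m before the k loop so that overwriting A(j,i) does not corrupt it, which is exactly what the refinements on the next slides make explicit.

import numpy as np

def ge_basic(A):
    """Unblocked Gaussian elimination following the loop above (sketch)."""
    A = np.array(A, dtype=float)          # work on a copy
    n = A.shape[0]
    for i in range(n - 1):                # for each column i, zero it out below the diagonal
        for j in range(i + 1, n):         # for each row j below row i
            m = A[j, i] / A[i, i]         # multiplier, read before row j is modified
            for k in range(i, n):         # add -m times row i to row j
                A[j, k] = A[j, k] - m * A[i, k]
    return A                              # upper triangle of the result is U
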
9
Refine GE Algorithm (1)
  • Initial Version
  • Remove computation of constant A(j,i)/A(i,i) from
    inner loop

... for each column i, zero it out below the diagonal by
... adding multiples of row i to later rows
for i = 1 to n-1
    ... for each row j below row i
    for j = i+1 to n
        ... add a multiple of row i to row j
        for k = i to n
            A(j,k) = A(j,k) - (A(j,i)/A(i,i)) * A(i,k)

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i to n
            A(j,k) = A(j,k) - m * A(i,k)
10
Refine GE Algorithm (2)
  • Last version
  • Don't compute what we already know:
    zeros below the diagonal in column i

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i to n
            A(j,k) = A(j,k) - m * A(i,k)

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - m * A(i,k)
11
Refine GE Algorithm (3)
  • Last version
  • Store multipliers m below diagonal in zeroed
    entries for later use

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - m * A(i,k)

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)
12
Refine GE Algorithm (4)
  • Last version

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)

  • Split Loop

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
    for j = i+1 to n
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)
13
Refine GE Algorithm (5)
for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
    for j = i+1 to n
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)

  • Last version
  • Express using matrix operations (BLAS)

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) * ( 1 / A(i,i) )
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n)
                      - A(i+1:n, i) * A(i, i+1:n)
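
In NumPy the same BLAS formulation is a two-line loop body. This is an illustrative sketch (still no pivoting): the scaled column is the BLAS1 step and np.outer supplies the BLAS2 rank-1 update.

import numpy as np

def ge_blas2(A):
    """GE written with vector/matrix operations, mirroring the loop above (sketch)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for i in range(n - 1):
        A[i+1:, i] *= 1.0 / A[i, i]                           # BLAS1: scale a vector
        A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:])     # BLAS2: rank-1 update
    return A   # multipliers (L below the diagonal) and U packed in one array
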
14
What GE really computes
  • Call the strictly lower triangular matrix of
    multipliers M, and let L = I + M
  • Call the upper triangle of the final matrix U
  • Lemma (LU Factorization): If the above algorithm
    terminates (does not divide by zero) then A = L*U
  • Solving Ax = b using GE
  • Factorize A = L*U using GE
    (cost = 2/3 n³ flops)
  • Solve L*y = b for y, using substitution (cost = n² flops)
  • Solve U*x = y for x, using substitution (cost = n² flops)
  • Thus Ax = (L*U)x = L(Ux) = Ly = b as desired

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n)
                      - A(i+1:n, i) * A(i, i+1:n)
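
A sketch of the full solve described above, under the same no-pivoting assumption: factor A = L*U with the loop, then do forward and back substitution. The helper name is illustrative, not LAPACK's.

import numpy as np

def lu_solve_sketch(A, b):
    """Factor A = L*U (no pivoting), then solve Ly = b and Ux = y by substitution."""
    n = len(b)
    F = np.array(A, dtype=float)
    for i in range(n - 1):                                  # factorization, ~2/3 n^3 flops
        F[i+1:, i] /= F[i, i]
        F[i+1:, i+1:] -= np.outer(F[i+1:, i], F[i, i+1:])
    L = np.tril(F, -1) + np.eye(n)                          # L = I + M
    U = np.triu(F)
    y = np.zeros(n)
    for i in range(n):                                      # forward substitution, ~n^2 flops
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                          # back substitution, ~n^2 flops
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x
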
15
Problems with basic GE algorithm
  • What if some A(i,i) is zero? Or very small?
  • Result may not exist, or be unstable, so we need
    to pivot
  • Current computation is all BLAS 1 or BLAS 2, but we
    know that BLAS 3 (matrix multiply) is fastest
    (earlier lectures)

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)            ... BLAS 1 (scale a vector)
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n)             ... BLAS 2 (rank-1 update)
                      - A(i+1:n, i) * A(i, i+1:n)

[Figure: measured speed of BLAS 1, BLAS 2, and BLAS 3 operations vs. machine peak]
16
Pivoting in Gaussian Elimination
  • A = [ 0 1 ; 1 0 ] fails completely because we can't
    divide by A(1,1) = 0
  • But solving Ax = b should be easy!
  • When the diagonal A(i,i) is tiny (not just zero), the
    algorithm may complete but give a completely wrong answer
  • Numerical instability
  • Roundoff error is the cause
  • Cure: Pivot (swap rows of A) so that A(i,i) is large

17
Gaussian Elimination with Partial Pivoting (GEPP)
  • Partial Pivoting: swap rows so that A(i,i) is
    largest in its column

for i = 1 to n-1
    find and record k where |A(k,i)| = max over i <= j <= n of |A(j,i)|
        ... i.e. largest entry in rest of column i
    if |A(k,i)| = 0
        exit with a warning that A is singular, or nearly so
    elseif k != i
        swap rows i and k of A
    end if
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)            ... each quotient lies in [-1, 1]
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)

  • Lemma: This algorithm computes A = P*L*U, where P is a
    permutation matrix
  • This algorithm is numerically stable in practice
  • For details see the LAPACK code at
    www.netlib.org/lapack/single/sgetf2.f
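
A compact NumPy sketch of the GEPP loop above (illustrative only; the real routine is LAPACK's sgetf2/sgetrf, and the function name here is hypothetical):

import numpy as np

def gepp(A):
    """LU with partial pivoting; returns piv and the packed L and U factors (sketch)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    piv = np.arange(n)                      # piv[r] = original index of the row now in position r
    for i in range(n - 1):
        k = i + np.argmax(np.abs(A[i:, i])) # largest entry in the rest of column i
        if A[k, i] == 0.0:
            raise ValueError("A is singular, or nearly so")
        if k != i:
            A[[i, k], :] = A[[k, i], :]     # swap rows i and k of A
            piv[[i, k]] = piv[[k, i]]
        A[i+1:, i] /= A[i, i]               # each quotient lies in [-1, 1]
        A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:])
    return piv, A                           # original_A[piv, :] equals L @ U up to roundoff
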

18
Problems with basic GE algorithm
  • What if some A(i,i) is zero? Or very small?
  • Result may not exist, or be unstable, so we need
    to pivot
  • Current computation is all BLAS 1 or BLAS 2, but we
    know that BLAS 3 (matrix multiply) is fastest
    (earlier lectures)

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)            ... BLAS 1 (scale a vector)
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n)             ... BLAS 2 (rank-1 update)
                      - A(i+1:n, i) * A(i, i+1:n)

[Figure: measured speed of BLAS 1, BLAS 2, and BLAS 3 operations vs. machine peak]
19
Converting BLAS2 to BLAS3 in GEPP
  • Blocking
  • Used to optimize matrix-multiplication
  • Harder here because of data dependencies in GEPP
  • Delayed Updates
  • Save updates to trailing matrix from several
    consecutive BLAS2 updates
  • Apply many saved updates simultaneously in one
    BLAS3 operation
  • Same idea works for much of dense linear algebra
  • Open questions remain
  • First Approach: Need to choose a block size b
  • Algorithm will save and apply b updates
  • b must be small enough so that active submatrix
    consisting of b columns of A fits in cache
  • b must be large enough to make BLAS3 fast

20
Blocked GEPP (www.netlib.org/lapack/single/sgetrf.f)

for ib = 1 to n-1 step b         ... Process matrix b columns at a time
    end = ib + b - 1             ... Point to end of block of b columns
    apply BLAS2 version of GEPP to get A(ib:n, ib:end) = P * L * U
    let LL denote the strict lower triangular part of A(ib:end, ib:end) + I
    A(ib:end, end+1:n) = LL^-1 * A(ib:end, end+1:n)    ... update next b rows of U
    A(end+1:n, end+1:n) = A(end+1:n, end+1:n)
                          - A(end+1:n, ib:end) * A(ib:end, end+1:n)
                         ... apply delayed updates with a single matrix multiply
                         ... with inner dimension b

(For a correctness proof, see the on-line notes.)
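
A serial sketch of the same blocked structure, with pivoting omitted so that the delayed-update idea stands out (LAPACK's sgetrf additionally pivots within each panel and applies the row swaps to the whole matrix); the block size b and function name are illustrative.

import numpy as np

def blocked_lu(A, b=32):
    """Right-looking blocked LU without pivoting: panel factorization (BLAS2),
       block row of U (triangular solve), then one big delayed update (BLAS3)."""
    F = np.array(A, dtype=float)
    n = F.shape[0]
    for ib in range(0, n, b):
        end = min(ib + b, n)
        for i in range(ib, end):                              # BLAS2 factorization of the panel
            F[i+1:, i] /= F[i, i]
            F[i+1:, i+1:end] -= np.outer(F[i+1:, i], F[i, i+1:end])
        if end < n:
            LL = np.tril(F[ib:end, ib:end], -1) + np.eye(end - ib)
            F[ib:end, end:] = np.linalg.solve(LL, F[ib:end, end:])   # next b rows of U
            F[end:, end:] -= F[end:, ib:end] @ F[ib:end, end:]       # delayed updates, inner dim b
    return F                                                   # packed L (unit diagonal) and U
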
21
Efficiency of Blocked GEPP
22
Overview of LAPACK
  • Standard library for dense/banded linear algebra
  • Linear systems: Ax = b
  • Least squares problems: min_x ||Ax - b||₂
  • Eigenvalue problems: Ax = λx, Ax = λBx
  • Singular value decomposition (SVD): A = UΣVᵀ
  • Algorithms reorganized to use BLAS3 as much as
    possible
  • Basis of math libraries on many computers, Matlab 6
  • Many algorithmic innovations remain
  • Projects available
  • Automatic optimization
  • Quadtree matrix data structures for locality
  • New eigenvalue algorithms

23
Performance of LAPACK (n=1000)
24
Performance of LAPACK (n=100)
25
Recursive Algorithms
  • Still uses delayed updates, but organized
    differently
  • (formulas on board)
  • Can exploit recursive data layouts
  • 3x speedups on least squares for tall, thin
    matrices
  • Theoretically optimal memory hierarchy
    performance
  • See references at
  • http://lawra.uni-c.dk/lawra/index.html
  • http://www.cs.berkeley.edu/~yelick/cs267f01/lectures/Lect14.html
  • http://www.cs.umu.se/research/parallel/recursion/

26
Recursive Algorithms Limits
  • Two kinds of dense matrix decompositions
  • One Sided
  • Sequence of simple operations applied on left of
    matrix
  • Gaussian Elimination: A = L*U or A = P*L*U
  • Symmetric Gaussian Elimination: A = L*D*Lᵀ
  • Cholesky: A = L*Lᵀ
  • QR Decomposition for Least Squares: A = Q*R
  • Can be nearly 100% BLAS 3
  • Susceptible to recursive algorithms
  • Two Sided
  • Sequence of simple operations applied on both
    sides, alternating
  • Eigenvalue algorithms, SVD
  • At least 25% BLAS 2
  • Seem impervious to recursive approach?
  • Some recent progress on SVD (25% vs 50% BLAS2)

27
Parallelizing Gaussian Elimination
  • Recall the parallelization steps from an earlier lecture
  • Decomposition: identify enough parallel work, but
    not too much
  • Assignment: load balance work among threads
  • Orchestrate: communication and synchronization
  • Mapping: which processors execute which threads
  • Decomposition
  • In the BLAS 2 algorithm nearly each flop in the inner
    loop can be done in parallel, so with n² processors
    we need 3n parallel steps
  • This is too fine-grained; prefer calls to local
    matmuls instead
  • Need to discuss parallel matrix multiplication
  • Assignment
  • Which processors are responsible for which
    submatrices?

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)            ... BLAS 1 (scale a vector)
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n)             ... BLAS 2 (rank-1 update)
                      - A(i+1:n, i) * A(i, i+1:n)
28
Different Data Layouts for Parallel GE (on 4 procs)

Load balanced, but can't easily use BLAS2 or BLAS3
Bad load balance: P0 idle after first n/4 steps
Can trade off load balance and BLAS2/3 performance
by choosing b, but factorization of the block column
is a bottleneck
The winner!
Complicated addressing
29
PDGEMM = PBLAS routine for matrix multiply

Observations:
  • For fixed N, as P increases, Mflops increases, but with
    less than 100% efficiency
  • For fixed P, as N increases, Mflops (efficiency) rises

DGEMM = BLAS routine for matrix multiply
Maximum speed for PDGEMM = # Procs * speed of DGEMM

Observations (same as above):
  • Efficiency always at least 48%
  • For fixed N, as P increases, efficiency drops
  • For fixed P, as N increases, efficiency increases
30
Review BLAS 3 (Blocked) GEPP
for ib = 1 to n-1 step b         ... Process matrix b columns at a time
    end = ib + b - 1             ... Point to end of block of b columns
    apply BLAS2 version of GEPP to get A(ib:n, ib:end) = P * L * U
    let LL denote the strict lower triangular part of A(ib:end, ib:end) + I
    A(ib:end, end+1:n) = LL^-1 * A(ib:end, end+1:n)    ... update next b rows of U
    A(end+1:n, end+1:n) = A(end+1:n, end+1:n)
                          - A(end+1:n, ib:end) * A(ib:end, end+1:n)
                         ... apply delayed updates with a single matrix multiply
                         ... with inner dimension b  (BLAS 3)
31
Review Row and Column Block Cyclic Layout
  • processors and matrix blocks are distributed in a
    2D array
  • pcol-fold parallelism in any column, and calls to
    BLAS2 and BLAS3 on matrices of size brow-by-bcol
  • serial bottleneck is eased
  • need not be symmetric in rows and columns
32
Distributed GE with a 2D Block Cyclic Layout
  • block size b in the algorithm and the block sizes
    brow and bcol in the layout satisfy b = brow = bcol
  • shaded regions indicate busy processors or
    communication performed
  • unnecessary to have a barrier between each step of
    the algorithm, e.g. steps 9, 10, and 11 can be pipelined
33
Distributed GE with a 2D Block Cyclic Layout
34
Matrix multiply of: green = green - blue * pink
35
ScaLAPACK Performance Models (1):
ScaLAPACK Operation Counts
36
ScaLAPACK Performance Models (2):
Compare Predictions and Measurements
(LU)
(Cholesky)
37
PDGESV = ScaLAPACK parallel LU routine

Since it can run no faster than its inner loop (PDGEMM), we measure
Efficiency = Speed(PDGESV) / Speed(PDGEMM)

Observations:
  • Efficiency well above 50% for large enough problems
  • For fixed N, as P increases, efficiency decreases
    (just as for PDGEMM)
  • For fixed P, as N increases, efficiency increases
    (just as for PDGEMM)

From the bottom table, the cost of solving Ax = b is about half that
of matrix multiply for large enough matrices. From the flop counts we
would expect it to be (2n³) / ((2/3)n³) = 3 times faster, but
communication makes it a little slower.
38
(No Transcript)
39
Parallelism in ScaLAPACK
  • Task parallelism
  • Sign function eigenvalue computations
  • Divide and Conquer
  • Tridiagonal and band solvers, symmetric
    eigenvalue problem and Sign function
  • Cyclic reduction
  • Reduced system in the band solver
  • Level 3 BLAS block operations
  • All the reduction routines
  • Pipelining
  • QR Iteration, Triangular Solvers, classic
    factorizations
  • Redundant computations
  • Condition estimators
  • Static work assignment
  • Bisection

40
Scales well, nearly full machine speed
41
Old version, pre-1998 Gordon Bell Prize. Still
have ideas to accelerate it. Project available!
Old algorithm, plan to abandon.
42
The Holy Grail (Parlett, Dhillon, Marques):
Perfect output complexity (O(n · #vectors)),
embarrassingly parallel, accurate
Making the symmetric eigenproblem and SVD scalable
To be propagated throughout LAPACK and ScaLAPACK
43
Have good ideas for speedup. Project available!
Hardest of all to parallelize.
44
Making the nonsymmetric eigenproblem scalable
  • Ax_i = λ_i x_i, Schur form A = Q T Qᵀ
  • Parallel HQR
  • Henry, Watkins, Dongarra, Van de Geijn
  • Now in ScaLAPACK
  • Not as scalable as LU: N times as many
    messages
  • Block-Hankel data layout better in theory, but
    not in ScaLAPACK
  • Sign Function (see the sketch after this list)
  • Beavers, Denman, Lin, Zmijewski, Bai, Demmel, Gu,
    Godunov, Bulgakov, Malyshev
  • A_{i+1} = (A_i + A_i^-1)/2  →  shifted projector onto Re λ > 0
  • Repeat on transformed A to divide-and-conquer
    the spectrum
  • Only uses inversion, so scalable
  • Inverse-free version exists (uses QRD)
  • Very high flop count compared to HQR, less stable
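
For concreteness, a minimal sketch of the sign-function iteration named above: the plain Newton iteration A_{i+1} = (A_i + A_i^-1)/2. It assumes A has no eigenvalues on the imaginary axis, and the function name, iteration cap, and tolerance are illustrative.

import numpy as np

def matrix_sign(A, maxiter=100, tol=1e-12):
    """Newton iteration X <- (X + inv(X))/2 for the matrix sign function (sketch)."""
    X = np.array(A, dtype=complex)
    for _ in range(maxiter):
        Xnew = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(Xnew - X, 1) <= tol * np.linalg.norm(Xnew, 1):
            return Xnew
        X = Xnew
    return X

# (I + matrix_sign(A)) / 2 then approximates the spectral projector onto the
# invariant subspace for eigenvalues with Re(lambda) > 0, which is what the
# divide-and-conquer step above uses to split the spectrum.
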

45
Out-of-core means the matrix lives on disk: too
big for main memory. Much harder to hide the latency
of disk. QR is much easier than LU because no
pivoting is needed for QR.
46
ScaLAPACK Summary and Conclusions
  • One-sided problems are scalable
  • LU (Linpack Benchmark)
  • Cholesky, QR
  • Two-sided problems are harder
  • Half BLAS2, not all BLAS3
  • Eigenproblems, SVD (Holy Grail coming)
  • 684 Gflops on 4600-PE ASCI Red (149 Mflops/proc)
  • Henry, Stanley, Sears
  • Hermitian generalized eigenproblem Ax = λBx
  • 2nd Place, Gordon Bell Peak Performance Award,
    SC98
  • Narrow band problems hardest
  • Solving and eigenproblems
  • Galois theory of parallel prefix
  • www.netlib.org/scalapack

47
A small software project ...
48
Extra Slides
49
Parallelizing Gaussian Elimination
  • Recall the parallelization steps from Lecture 3
  • Decomposition: identify enough parallel work, but
    not too much
  • Assignment: load balance work among threads
  • Orchestrate: communication and synchronization
  • Mapping: which processors execute which threads
  • Decomposition
  • In the BLAS 2 algorithm nearly each flop in the inner
    loop can be done in parallel, so with n² processors
    we need 3n parallel steps
  • This is too fine-grained; prefer calls to local
    matmuls instead

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)            ... BLAS 1 (scale a vector)
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n)             ... BLAS 2 (rank-1 update)
                      - A(i+1:n, i) * A(i, i+1:n)
50
Assignment of parallel work in GE
  • Think of assigning submatrices to threads, where
    each thread is responsible for updating the submatrix it
    owns
  • The "owner computes" rule is natural because of locality
  • What should submatrices look like to achieve load
    balance?

51
Different Data Layouts for Parallel GE (on 4 procs)

Load balanced, but can't easily use BLAS2 or BLAS3
Bad load balance: P0 idle after first n/4 steps
Can trade off load balance and BLAS2/3 performance
by choosing b, but factorization of the block column
is a bottleneck
The winner!
Complicated addressing
52
Computational Electromagnetics (MOM)
The main steps in the solution process are:
  • Fill: computing the matrix elements of A
  • Factor: factoring the dense matrix A
  • Solve: solving for one or more excitations b
  • Field Calc: computing the fields scattered from the object
53
Analysis of MOM for Parallel Implementation
Task         Work    Parallelism        Parallel Speed
Fill         O(n²)   embarrassing       low
Factor       O(n³)   moderately diff.   very high
Solve        O(n²)   moderately diff.   high
Field Calc.  O(n)    embarrassing       high
54
BLAS 3 (Blocked) GEPP, using Delayed Updates
for ib = 1 to n-1 step b         ... Process matrix b columns at a time
    end = ib + b - 1             ... Point to end of block of b columns
    apply BLAS2 version of GEPP to get A(ib:n, ib:end) = P * L * U
    let LL denote the strict lower triangular part of A(ib:end, ib:end) + I
    A(ib:end, end+1:n) = LL^-1 * A(ib:end, end+1:n)    ... update next b rows of U
    A(end+1:n, end+1:n) = A(end+1:n, end+1:n)
                          - A(end+1:n, ib:end) * A(ib:end, end+1:n)
                         ... apply delayed updates with a single matrix multiply
                         ... with inner dimension b  (BLAS 3)
55
BLAS2 version of Gaussian Elimination with
Partial Pivoting (GEPP)
for i = 1 to n-1
    find and record k where |A(k,i)| = max over i <= j <= n of |A(j,i)|
        ... i.e. largest entry in rest of column i
    if |A(k,i)| = 0
        exit with a warning that A is singular, or nearly so
    elseif k != i
        swap rows i and k of A
    end if
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)
        ... each quotient lies in [-1, 1]; BLAS 1
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)
        ... BLAS 2, most work in this line
56
How to proceed
  • Consider basic parallel matrix multiplication
    algorithms on simple layouts
  • Performance modeling to choose the best one
  • Time(message) = latency + #words * time-per-word
                  = α + n*β
    (see the short sketch after this list)
  • Briefly discuss block-cyclic layout
  • PBLAS = Parallel BLAS
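
The alpha-beta model in the list above, written out as a tiny helper (names are illustrative); it makes explicit why fewer, larger messages are preferred when latency α dominates.

def message_time(alpha, beta, nwords):
    """alpha-beta model: one message costs latency plus per-word transfer time."""
    return alpha + beta * nwords

# Example: sending n words as p separate messages costs p*alpha + beta*n,
# versus alpha + beta*n for a single message; the difference is (p-1)*alpha.
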

57
Parallel Matrix Multiply
  • Computing C = C + A*B
  • Using basic algorithm: 2n³ flops
  • Variables are:
  • Data layout
  • Topology of machine
  • Scheduling communication
  • Use of performance models for algorithm design
58
1D Layout
  • Assume matrices are n x n and n is divisible by p
  • A(i) refers to the n by n/p block column that
    processor i owns (similarly for B(i) and C(i))
  • B(i,j) is the n/p by n/p sublock of B(i)
  • in rows j*n/p through (j+1)*n/p
  • Algorithm uses the formula
  • C(i) = C(i) + A*B(i) = C(i) + Σ_j A(j)*B(j,i)

59
Matrix Multiply 1D Layout on Bus or Ring
  • Algorithm uses the formula
  • C(i) = C(i) + A*B(i) = C(i) + Σ_j A(j)*B(j,i)
  • First consider a bus-connected machine without
    broadcast: only one pair of processors can
    communicate at a time (ethernet)
  • Second consider a machine with processors on a
    ring: all processors may communicate with nearest
    neighbors simultaneously

60
Naïve MatMul for 1D layout on Bus without Broadcast
  • Naïve algorithm:

    C(myproc) = C(myproc) + A(myproc)*B(myproc, myproc)
    for i = 0 to p-1
        for j = 0 to p-1 except i
            if (myproc == i) send A(i) to processor j
            if (myproc == j)
                receive A(i) from processor i
                C(myproc) = C(myproc) + A(i)*B(i, myproc)
            barrier

  • Cost of inner loop:
  • computation: 2*n*(n/p)² = 2n³/p²
  • communication: α + β*n²/p
61
Naïve MatMul (continued)
  • Cost of inner loop:
  • computation: 2*n*(n/p)² = 2n³/p²
  • communication: α + β*n²/p ... approximately
  • Only 1 pair of processors (i and j) is active on
    any iteration, and of those, only one is doing computation
  • => the algorithm is almost entirely serial
  • Running time = (p*(p-1) + 1)*computation + p*(p-1)*communication
                 ≈ 2n³ + p²*α + p*n²*β
  • This is worse than the serial time, and it grows
    with p

62
Better Matmul for 1D layout on a Processor Ring
  • Proc i can communicate with Proc(i-1) and
    Proc(i+1) simultaneously, for all i

Copy A(myproc) into Tmp
C(myproc) = C(myproc) + Tmp*B(myproc, myproc)
for j = 1 to p-1
    Send Tmp to processor myproc+1 mod p
    Receive Tmp from processor myproc-1 mod p
    C(myproc) = C(myproc) + Tmp*B((myproc-j) mod p, myproc)

(A serial simulation of this loop is sketched at the end of this slide.)

  • Same idea as for gravity in the simple sharks and
    fish algorithm
  • Time of inner loop = 2*(α + β*n²/p) + 2*n*(n/p)²
  • Total Time = 2*n*(n/p)² + (p-1) * Time of inner loop
               ≈ 2n³/p + 2*p*α + 2*β*n²
  • Optimal for 1D layout on Ring or Bus, even with
    Broadcast:
  • Perfect speedup for arithmetic
  • A(myproc) must move to each other processor;
    this costs at least (p-1)*(cost of sending n*(n/p) words)
  • Parallel Efficiency = 2n³ / (p * Total Time)
                        = 1/(1 + α*p²/(2n³) + β*p/(2n))
                        = 1/(1 + O(p/n))
  • Grows to 1 as n/p increases (or α and β shrink)
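
A serial NumPy simulation of the ring algorithm above (illustrative only: a Python list rotation stands in for the send/receive, the function name is hypothetical, and p is assumed to divide n):

import numpy as np

def matmul_1d_ring_sim(A, B, p):
    """Simulate the 1D-ring algorithm: 'processor' q owns block columns A(q), B(q), C(q)."""
    n = A.shape[0]
    nb = n // p                                             # assumes p divides n
    C = np.zeros((n, n))
    Tmp = [A[:, q*nb:(q+1)*nb].copy() for q in range(p)]    # each proc copies its A(q) into Tmp
    for j in range(p):
        if j > 0:
            Tmp = Tmp[-1:] + Tmp[:-1]                       # pass Tmp one step around the ring
        for q in range(p):                                  # all 'processors' in parallel
            src = (q - j) % p                               # which block of A proc q now holds
            C[:, q*nb:(q+1)*nb] += Tmp[q] @ B[src*nb:(src+1)*nb, q*nb:(q+1)*nb]
    return C                                                # equals A @ B
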

63
MatMul with 2D Layout
  • Consider processors in a 2D grid (physical or
    logical)
  • Processors can communicate with 4 nearest
    neighbors
  • Broadcast along rows and columns

    p(0,0)  p(0,1)  p(0,2)
    p(1,0)  p(1,1)  p(1,2)
    p(2,0)  p(2,1)  p(2,2)
64
Cannon's Algorithm
  • C(i,j) = C(i,j) + Σ_k A(i,k)*B(k,j)
  • assume s = sqrt(p) is an integer

forall i = 0 to s-1              ... "skew" A
    left-circular-shift row i of A by i
    so that A(i,j) is overwritten by A(i, (j+i) mod s)
forall i = 0 to s-1              ... "skew" B
    up-circular-shift column i of B by i
    so that B(i,j) is overwritten by B((i+j) mod s, j)
for k = 0 to s-1                 ... sequential
    forall i = 0 to s-1 and j = 0 to s-1    ... all processors in parallel
        C(i,j) = C(i,j) + A(i,j)*B(i,j)
        left-circular-shift each row of A by 1
        up-circular-shift each column of B by 1
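
A serial NumPy simulation of Cannon's algorithm (illustrative: list shifts stand in for the circular shifts, the function name is hypothetical, and s is assumed to divide n):

import numpy as np

def cannon_sim(A, B, s):
    """Simulate Cannon's algorithm on an s-by-s 'processor' grid of n/s-by-n/s blocks."""
    n = A.shape[0]
    nb = n // s                                             # assumes s divides n
    blk = lambda M, i, j: M[i*nb:(i+1)*nb, j*nb:(j+1)*nb].copy()
    Ab = [[blk(A, i, (j + i) % s) for j in range(s)] for i in range(s)]   # skew A
    Bb = [[blk(B, (i + j) % s, j) for j in range(s)] for i in range(s)]   # skew B
    Cb = [[np.zeros((nb, nb)) for _ in range(s)] for _ in range(s)]
    for _ in range(s):
        for i in range(s):
            for j in range(s):                              # all processors in parallel
                Cb[i][j] += Ab[i][j] @ Bb[i][j]
        Ab = [row[1:] + row[:1] for row in Ab]              # left-circular-shift each row of A by 1
        Bb = [[Bb[(i + 1) % s][j] for j in range(s)]        # up-circular-shift each column of B by 1
              for i in range(s)]
    return np.block(Cb)                                     # equals A @ B
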
65
Communication in Cannon
C(1,2) = A(1,0)*B(0,2) + A(1,1)*B(1,2) + A(1,2)*B(2,2)
66
Cost of Cannon's Algorithm

forall i = 0 to s-1                              ... recall s = sqrt(p)
    left-circular-shift row i of A by i          ... cost ≤ s*(α + β*n²/p)
forall i = 0 to s-1
    up-circular-shift column i of B by i         ... cost ≤ s*(α + β*n²/p)
for k = 0 to s-1
    forall i = 0 to s-1 and j = 0 to s-1
        C(i,j) = C(i,j) + A(i,j)*B(i,j)          ... cost = 2*(n/s)³ = 2n³/p^(3/2)
        left-circular-shift each row of A by 1   ... cost = α + β*n²/p
        up-circular-shift each column of B by 1  ... cost = α + β*n²/p

  • Total Time = 2n³/p + 4*s*α + 4*β*n²/s
  • Parallel Efficiency = 2n³ / (p * Total Time)
                        = 1/(1 + α*2*(s/n)³ + β*2*(s/n))
                        = 1/(1 + O(sqrt(p)/n))
  • Grows to 1 as n/s = n/sqrt(p) = sqrt(data per
    processor) grows
  • Better than 1D layout, which had Efficiency = 1/(1 + O(p/n))
67
Drawbacks to Cannon
  • Hard to generalize for:
  • p not a perfect square
  • A and B not square
  • Dimensions of A, B not perfectly divisible by
    s = sqrt(p)
  • A and B not aligned in the way they are stored
    on processors
  • block-cyclic layouts
  • Memory hog (extra copies of local matrices)

68
SUMMA = Scalable Universal Matrix Multiply Algorithm
  • Slightly less efficient than Cannon, but simpler and easier
    to generalize
  • Presentation from van de Geijn and Watts
  • www.netlib.org/lapack/lawns/lawn96.ps
  • Similar ideas appeared many times
  • Used in practice in PBLAS = Parallel BLAS
  • www.netlib.org/lapack/lawns/lawn100.ps

69
SUMMA
[Figure: block C(I,J) is computed from block row A(I,k) of A and block column B(k,J) of B]

  • I, J represent all rows, columns owned by a
    processor
  • k is a single row or column (or a block of b
    rows or columns)
  • C(I,J) = C(I,J) + Σ_k A(I,k)*B(k,J)
  • Assume a pr by pc processor grid (pr = pc = 4
    above)

for k = 0 to n-1 ... or n/b-1, where b is the block size
                 ...   = # cols in A(I,k) and # rows in B(k,J)
    for all I = 1 to pr in parallel
        owner of A(I,k) broadcasts it to whole processor row
    for all J = 1 to pc in parallel
        owner of B(k,J) broadcasts it to whole processor column
    Receive A(I,k) into Acol
    Receive B(k,J) into Brow
    C(myproc, myproc) = C(myproc, myproc) + Acol * Brow
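
What SUMMA computes can be written serially as a sum of rank-b updates, one per outer step; each step's term is exactly what the two broadcasts distribute across the processor grid. This is a sketch of the arithmetic, not the distributed code.

import numpy as np

def summa_reference(A, B, b):
    """Serial reference for SUMMA: accumulate one rank-b update per step k."""
    n = A.shape[1]                                   # inner dimension
    C = np.zeros((A.shape[0], B.shape[1]))
    for k in range(0, n, b):
        kk = slice(k, min(k + b, n))
        # in SUMMA, A[:, kk] is broadcast along processor rows and B[kk, :] along
        # processor columns; each processor then forms its local piece of this product
        C += A[:, kk] @ B[kk, :]
    return C
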
70
SUMMA performance
for k = 0 to n/b-1
    for all I = 1 to s             ... s = sqrt(p)
        owner of A(I,k) broadcasts it to whole processor row
            ... time = log s * (α + β*b*n/s), using a tree
    for all J = 1 to s
        owner of B(k,J) broadcasts it to whole processor column
            ... time = log s * (α + β*b*n/s), using a tree
    Receive A(I,k) into Acol
    Receive B(k,J) into Brow
    C(myproc, myproc) = C(myproc, myproc) + Acol * Brow
            ... time = 2*(n/s)²*b

  • Total time = 2n³/p + α*log p*n/b + β*log p*n²/s
  • Parallel Efficiency = 1/(1 + α*log p*p/(2*b*n²) + β*log p*s/(2n))
  • Same β term as Cannon, except for the log p factor
  • log p grows slowly, so this is ok
  • Latency (α) term can be larger, depending on b
  • When b = 1, get α*log p*n
  • As b grows to n/s, the term shrinks to α*log p*s (log p times Cannon)
  • Temporary storage grows like 2*b*n/s
  • Can change b to trade off latency cost against memory
71
Summary of Parallel Matrix Multiply Algorithms
  • 1D Layout
  • Bus without broadcast: slower than serial
  • Nearest neighbor communication on a ring (or bus
    with broadcast): Efficiency = 1/(1 + O(p/n))
  • 2D Layout
  • Cannon
  • Efficiency = 1/(1 + O(p^(1/2)/n))
  • Hard to generalize for general p, n, block
    cyclic layouts, alignment
  • SUMMA
  • Efficiency = 1/(1 + O(log p * p/(b*n²) + log p * p^(1/2)/n))
  • Very general
  • b small => less memory, lower efficiency
  • b large => more memory, higher efficiency
  • Gustavson et al.
  • Efficiency = 1/(1 + O(p^(1/3)/n)) ??
  • (A small sketch evaluating these models follows.)
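
A small sketch that evaluates the efficiency models summarized above, with α and β measured in units of one flop time; the algorithm labels, function name, and that normalization are assumptions for illustration.

import math

def matmul_efficiency(n, p, alpha, beta, algo="cannon", b=1):
    """Parallel efficiency 2n^3 / (p * Total Time) for the models on these slides."""
    s = math.sqrt(p)
    t_comp = 2.0 * n**3 / p                          # flops per processor
    if algo == "1d_ring":
        t_comm = 2 * p * alpha + 2 * beta * n**2
    elif algo == "cannon":
        t_comm = 4 * s * alpha + 4 * beta * n**2 / s
    elif algo == "summa":
        lg = math.log2(p)
        t_comm = alpha * lg * n / b + beta * lg * n**2 / s
    else:
        raise ValueError("unknown algorithm")
    return t_comp / (t_comp + t_comm)

# e.g. matmul_efficiency(4096, 64, alpha=1e4, beta=1e2, algo="summa", b=64)
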

72
  • Extra Slides

73
Current Records for Solving Dense Systems
www.netlib.org, click on Performance DataBase Server

Year    System Size  Machine                            Procs  Gflops  (Peak)
1950's  O(100)
1995    128,600      Intel Paragon                      6768   281     (338)
1996    215,000      Intel ASCI Red                     7264   1068    (1453)
1998    148,000      Cray T3E                           1488   1127    (1786)
1998    235,000      Intel ASCI Red (200 MHz Ppro)      9152   1338    (1830)
1999    374,000      SGI ASCI Blue                      5040   1608    (2520)
2000    362,000      Intel ASCI Red (333 MHz Xeon)      9632   2380    (3207)
2001    518,000      IBM ASCI White (375 MHz Power 3)   8000   7226    (12000)
2002    1,075,200    Earth Simulator                    5120   35860   (40960)
74
Computational Electromagnetics Solve Axb
  • Developed during the 1980s, driven by defense
    applications
  • Determine the RCS (radar cross section) of an
    airplane
  • Reduce signature of plane (stealth technology)
  • Other applications are antenna design, medical
    equipment
  • Two fundamental numerical approaches:
  • MOM: methods of moments (frequency domain)
  • Large dense matrices
  • Finite differences (time domain)
  • Even larger sparse matrices

75
Computational Electromagnetics
- Discretize surface into triangular facets using
  standard modeling tools
- Amplitudes of currents on the surface are the unknowns
- Integral equation is discretized into a set of
  linear equations

image: NW Univ. Comp. Electromagnetics Laboratory
http://nueml.ece.nwu.edu/
76
Computational Electromagnetics (MOM)
After discretization the integral equation has the form

    A x = b

where A is the (dense) impedance matrix, x is the unknown
vector of amplitudes, and b is the excitation vector.
(See Cwik, Patterson, and Scott, "Electromagnetic Scattering
on the Intel Touchstone Delta", IEEE Supercomputing '92,
pp. 538-542.)
77
Results for Parallel Implementation on Intel Delta
Task                                   Time (hours)
Fill (compute n² matrix entries)       9.20   (embarrassingly parallel but slow)
Factor (Gaussian Elimination, O(n³))   8.25   (good parallelism with the right algorithm)
Solve (O(n²))                          2.17   (reasonable parallelism with the right algorithm)
Field Calc. (O(n))                     0.12   (embarrassingly parallel and fast)

The problem solved was for a matrix of size 48,672.
2.6 Gflops for Factor - the world record in 1991.
78
Computational Chemistry: Ax = λx
  • Seek energy levels of a molecule, crystal, etc.
  • Solve Schroedinger's equation for energy levels =
    eigenvalues
  • Discretize to get Ax = λBx, solve for eigenvalues
    λ and eigenvectors x
  • A and B large Hermitian matrices (B positive
    definite)
  • MP-Quest (Sandia NL)
  • Si and sapphire crystals of up to 3072 atoms
  • A and B up to n = 40000, complex Hermitian
  • Need all eigenvalues and eigenvectors
  • Need to iterate up to 20 times (for
    self-consistency)
  • Implemented on Intel ASCI Red
  • 9200 Pentium Pro 200 processors (4600 Duals, a
    CLUMP)
  • Overall application ran at 605 Gflops (out of
    1800 Gflops peak)
  • Eigensolver ran at 684 Gflops
  • www.cs.berkeley.edu/~stanley/gbell/index.html
  • Runner-up for Gordon Bell Prize at Supercomputing '98

79
(No Transcript)