Transcript and Presenter's Notes

Title: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers


1
Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers
  • James Demmel
  • www.cs.berkeley.edu/demmel/cs267_Spr09

2
Outline
  • Motivation for Automatic Performance Tuning
  • Results for sparse matrix kernels
  • Sparse Matrix Vector Multiplication (SpMV)
  • Sequential and Multicore results
  • OSKI: Optimized Sparse Kernel Interface
  • Tuning Entire Sparse Solvers

3
Berkeley Benchmarking and OPtimization (BeBOP)
  • Prof. Katherine Yelick
  • Current members
  • Kaushik Datta, Mark Hoemmen, Marghoob Mohiyuddin,
    Shoaib Kamil, Rajesh Nishtala, Vasily Volkov,
    Sam Williams,
  • Previous members
  • Hormozd Gahvari, Eun-Jin Im, Ankit Jain, Rich Vuduc, many undergrads, …
  • Many results here are from current and previous students
  • bebop.cs.berkeley.edu

4
Automatic Performance Tuning
  • Goal: Let the machine do the hard work of writing fast code
  • What does tuning of dense BLAS, FFTs, signal processing, etc. have in common?
  • Can do the tuning off-line: once per architecture and algorithm
  • Can take as much time as necessary (hours, a week)
  • At run-time, algorithm choice may depend only on a few parameters (matrix dimensions, size of FFT, etc.)
  • Can't always do tuning off-line
  • Algorithm and implementation may strongly depend on data known only at run-time
  • Ex: Sparse matrix nonzero pattern determines both the best data structure and the best implementation of sparse-matrix-vector multiplication (SpMV)
  • Part of the search for the best algorithm must then be done (very quickly!) at run-time

5
Source: Accelerator Cavity Design Problem (Ko via Husbands)
6
Linear Programming Matrix

7
A Sparse Matrix You Encounter Every Day
8
SpMV with Compressed Sparse Row (CSR) Storage
Matrix-vector multiply kernel: y(i) ← y(i) + A(i,j)·x(j)

for each row i
  for k = ptr[i] to ptr[i+1] - 1 do
    y[i] = y[i] + val[k] * x[ind[k]]
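For concreteness, here is a complete CSR SpMV routine in C corresponding to the kernel above; the array names (ptr, ind, val) follow the slide, while the function name and types are mine:

#include <stddef.h>

/* y = y + A*x, with A in CSR: row pointers ptr[0..n], column indices
   ind[0..nnz-1], values val[0..nnz-1]. */
void spmv_csr(size_t n, const size_t *ptr, const size_t *ind,
              const double *val, const double *x, double *y)
{
  for (size_t i = 0; i < n; i++) {              /* for each row i */
    double yi = y[i];
    for (size_t k = ptr[i]; k < ptr[i+1]; k++)  /* k = ptr[i] .. ptr[i+1]-1 */
      yi += val[k] * x[ind[k]];                 /* y[i] += A(i,j) * x(j) */
    y[i] = yi;
  }
}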
9
Example The Difficulty of Tuning
  • n = 21,200
  • nnz = 1.5 M
  • kernel: SpMV
  • Source: NASA structural analysis problem

10
Example The Difficulty of Tuning
  • n = 21,200
  • nnz = 1.5 M
  • kernel: SpMV
  • Source: NASA structural analysis problem
  • 8x8 dense substructure: exploit this to limit #mem_refs

11
Speedups on Itanium 2: The Need for Search
12
Register Profile: Itanium 2
[Register profile plot: performance ranges from 190 Mflop/s (reference) to 1190 Mflop/s (best)]
13
Register Profiles: IBM and Intel IA-64
[Register profile plots; approximate ranges: Power3 122-252 Mflop/s (best is 17% of peak); Power4 459-820 Mflop/s (16%); Itanium 1 107-247 Mflop/s (8%); Itanium 2 190 Mflop/s-1.2 Gflop/s (33%)]
14
Register Profiles: Sun and Intel x86
[Register profile plots; approximate ranges: Ultra 2i 35-72 Mflop/s (best is 11% of peak); Ultra 3 50-90 Mflop/s (5%); Pentium III 42-108 Mflop/s (21%); Pentium III-M 58-122 Mflop/s (15%)]
15
Another example of tuning challenges
  • More complicated non-zero structure in general
  • N = 16,614
  • NNZ = 1.1 M

16
Zoom in to top corner
  • More complicated non-zero structure in general
  • N = 16,614
  • NNZ = 1.1 M

17
3x3 blocks look natural, but
  • More complicated non-zero structure in general
  • Example: 3x3 blocking
  • Logical grid of 3x3 cells
  • But would lead to lots of fill-in

18
Extra Work Can Improve Efficiency!
  • More complicated non-zero structure in general
  • Example: 3x3 blocking
  • Logical grid of 3x3 cells
  • Fill in explicit zeros
  • Unroll 3x3 block multiplies
  • Fill ratio = 1.5
  • On Pentium III: 1.5x speedup!
  • Actual Mflop rate is 1.5² = 2.25x higher

19
Automatic Register Block Size Selection
  • Selecting the r x c block size
  • Off-line benchmark
  • Precompute Mflops(r,c) using dense A for each r x c
  • Once per machine/architecture
  • Run-time search
  • Sample A to estimate Fill(r,c) for each r x c
  • Run-time heuristic model
  • Choose r, c to minimize estimated time ∝ Fill(r,c) / Mflops(r,c) (see the sketch below)
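A minimal sketch of that selection step, assuming the Mflops(r,c) table came from the off-line dense benchmark and that estimate_fill() stands in for the run-time sampling of A; the names, bounds, and 8x8 search limit are illustrative, not OSKI's actual API:

#define MAX_R 8
#define MAX_C 8

/* Pick the r x c block size minimizing estimated time, i.e.,
   Fill(r,c) / Mflops(r,c). */
void choose_block_size(double mflops[MAX_R+1][MAX_C+1],
                       double (*estimate_fill)(int r, int c),
                       int *best_r, int *best_c)
{
  double best_time = 1e300;
  for (int r = 1; r <= MAX_R; r++)
    for (int c = 1; c <= MAX_C; c++) {
      double fill = estimate_fill(r, c);   /* >= 1.0: explicit zeros added */
      double t = fill / mflops[r][c];      /* proportional to estimated time */
      if (t < best_time) { best_time = t; *best_r = r; *best_c = c; }
    }
}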

20
Accurate and Efficient Adaptive Fill Estimation
  • Idea: Sample the matrix
  • Fraction of matrix to sample: s ∈ [0,1]
  • Control cost O(s · nnz) by controlling s
  • Search at run-time: the constant matters!
  • Control s automatically by computing statistical confidence intervals, i.e., by monitoring variance
  • Cost of tuning
  • Lower bound: converting the matrix costs 5 to 40 unblocked SpMVs
  • Heuristic: 1 to 11 SpMVs
  • Tuning is only useful when we do many SpMVs
  • Common case, e.g., in sparse solvers

21
Accuracy of the Tuning Heuristics (1/4)
See p. 375 of Vuduc's thesis for the matrices.
NOTE: "Fair" flops used (ops on explicit zeros not counted as work)
22
Accuracy of the Tuning Heuristics (2/4)
23
Accuracy of the Tuning Heuristics (3/4)
DGEMV
24
Upper Bounds on Performance for blocked SpMV
  • P = (flops) / (time)
  • Flops = 2 · nnz(A)
  • Upper bound on speed: two main assumptions
  • 1. Count memory ops only (streaming)
  • 2. Count only compulsory and capacity misses; ignore conflicts
  • Account for line sizes
  • Account for matrix size and nnz
  • Charge the minimum access latency αi at the Li cache, αmem at memory
  • e.g., Saavedra-Barrera and PMaC MAPS benchmarks
  • Upper bound on time: assume all references to x() miss (see the model below)
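One way to write the resulting bound, using my notation for the hit/miss counts (the H_i and M below come from the compulsory/capacity model just described):

T_{\mathrm{lower}} = \sum_{i} H_i\,\alpha_i + M\,\alpha_{\mathrm{mem}}, \qquad
P_{\mathrm{upper}} = \frac{2\,\mathrm{nnz}(A)}{T_{\mathrm{lower}}}

where H_i is the number of hits charged at cache level i, M is the number of misses at the last cache level, and α_i, α_mem are the minimum access latencies above.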

25
Example Bounds on Itanium 2
26
Example Bounds on Itanium 2
27
Example Bounds on Itanium 2
28
Summary of Other Performance Optimizations
  • Optimizations for SpMV
  • Register blocking (RB): up to 4x over CSR
  • Variable block splitting: 2.1x over CSR, 1.8x over RB
  • Diagonals: 2x over CSR
  • Reordering to create dense structure + splitting: 2x over CSR
  • Symmetry: 2.8x over CSR, 2.6x over RB
  • Cache blocking: 2.8x over CSR
  • Multiple vectors (SpMM): 7x over CSR
  • … and combinations
  • Sparse triangular solve
  • Hybrid sparse/dense data structure: 1.8x over CSR
  • Higher-level kernels
  • AAᵀx, AᵀAx: 4x over CSR, 1.8x over RB
  • A²x: 2x over CSR, 1.5x over RB
  • [Ax, A²x, A³x, …, Aᵏx]: more to say later

29
Potential Impact on Applications Omega3P
  • Application: accelerator cavity design (Ko)
  • Relevant optimization techniques
  • Symmetric storage
  • Register blocking
  • Reordering, to create more dense blocks
  • Reverse Cuthill-McKee ordering to reduce bandwidth
  • Do Breadth-First Search, number nodes in reverse order visited
  • Traveling Salesman Problem-based ordering to create blocks
  • Nodes = columns of A
  • Weights(u, v) = no. of nonzeros u and v have in common
  • Tour = ordering of columns
  • Choose the maximum-weight tour
  • See [Pinar & Heath '97]
  • 2.1x speedup on IBM Power 4

30
Source: Accelerator Cavity Design Problem (Ko via Husbands)
31
Post-RCM Reordering
32
100x100 Submatrix Along Diagonal
33
Microscopic Effect of RCM Reordering
Before: Green + Red;  After: Green + Blue
34
Microscopic Effect of Combined RCM+TSP Reordering
Before: Green + Red;  After: Green + Blue
35
(Omega3P)
36
Optimized Sparse Kernel Interface - OSKI
  • Provides sparse kernels automatically tuned for the user's matrix & machine
  • BLAS-style functionality: SpMV, Ax & Aᵀy, TrSV
  • Hides complexity of run-time tuning
  • Includes new, faster locality-aware kernels: AᵀAx, Aᵏx
  • Faster than standard implementations
  • Up to 4x faster matvec, 1.8x trisolve, 4x AᵀAx
  • For advanced users & solver library writers
  • Available as a stand-alone library (OSKI 1.0.1h, 6/07)
  • Available as a PETSc extension (OSKI-PETSc .1d, 3/06)
  • bebop.cs.berkeley.edu/oski

37
How the OSKI Tunes (Overview)
Library Install-Time (offline):
  1. Build for target arch.
  2. Benchmark
  Produces: benchmark data, generated code variants, heuristic models
Application Run-Time:
  Inputs: the user's matrix, plus a workload from program monitoring and history
  1. Evaluate models
  2. Select data structure & code
  To user: matrix handle for kernel calls
Extensibility: Advanced users may write and dynamically add code variants and heuristic models to the system.
38
How to Call OSKI Basic Usage
  • May gradually migrate existing apps
  • Step 1 Wrap existing data structures
  • Step 2 Make BLAS-like kernel calls

int ptr , ind double val /
Matrix, in CSR format / double x , y
/ Let x and y be two dense vectors / /
Compute y ?y ?Ax, 500 times / for( i 0
i lt 500 i ) my_matmult( ptr, ind, val, ?, x,
b, y )
39
How to Call OSKI Basic Usage
  • May gradually migrate existing apps
  • Step 1: Wrap existing data structures
  • Step 2: Make BLAS-like kernel calls

int *ptr, *ind;  double *val;   /* Matrix, in CSR format */
double *x, *y;                  /* Let x and y be two dense vectors */

/* Step 1: Create OSKI wrappers around this data */
oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows, num_cols, SHARE_INPUTMAT, …);
oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE);
oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE);

/* Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  my_matmult( ptr, ind, val, α, x, β, y );
40
How to Call OSKI Basic Usage
  • May gradually migrate existing apps
  • Step 1: Wrap existing data structures
  • Step 2: Make BLAS-like kernel calls

int *ptr, *ind;  double *val;   /* Matrix, in CSR format */
double *x, *y;                  /* Let x and y be two dense vectors */

/* Step 1: Create OSKI wrappers around this data */
oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows, num_cols, SHARE_INPUTMAT, …);
oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE);
oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE);

/* Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);   /* Step 2 */
41
How to Call OSKI Tune with Explicit Hints
  • User calls the tune routine
  • May provide explicit tuning hints (OPTIONAL)

oski_matrix_t A_tunable = oski_CreateMatCSR( … );
/* … */

/* Tell OSKI we will call SpMV 500 times (workload hint) */
oski_SetHintMatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view, 500);
/* Tell OSKI we think the matrix has 8x8 blocks (structural hint) */
oski_SetHint(A_tunable, HINT_SINGLE_BLOCKSIZE, 8, 8);

oski_TuneMat(A_tunable);   /* Ask OSKI to tune */

for( i = 0; i < 500; i++ )
  oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);
42
How the User Calls OSKI Implicit Tuning
  • Ask the library to infer the workload
  • Library profiles all kernel calls
  • May periodically re-tune

oski_matrix_t A_tunable = oski_CreateMatCSR( … );
/* … */

for( i = 0; i < 500; i++ ) {
  oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);
  oski_TuneMat(A_tunable);   /* Ask OSKI to tune */
}
43
Multicore SMPs Used for Tuning SpMV
AMD Opteron 2356 (Barcelona)
Intel Xeon E5345 (Clovertown)
IBM QS20 Cell Blade
Sun T2 T5140 (Victoria Falls)
44
Multicore SMPs with Conventional cache-based
memory hierarchy
AMD Opteron 2356 (Barcelona)
Intel Xeon E5345 (Clovertown)
Sun T2 T5140 (Victoria Falls)
IBM QS20 Cell Blade
45
Multicore SMPs with local store-based memory
hierarchy
AMD Opteron 2356 (Barcelona)
Intel Xeon E5345 (Clovertown)
Sun T2 T5140 (Victoria Falls)
IBM QS20 Cell Blade
46
Multicore SMPs with CMT Chip-MultiThreading
AMD Opteron 2356 (Barcelona)
Intel Xeon E5345 (Clovertown)
IBM QS20 Cell Blade
Sun T2 T5140 (Victoria Falls)
47
Multicore SMPs Number of threads
AMD Opteron 2356 (Barcelona)
Intel Xeon E5345 (Clovertown)
8 threads
8 threads
IBM QS20 Cell Blade
Sun T2 T5140 (Victoria Falls)
16 threads
128 threads
SPEs only
48
Multicore SMPs peak double precision flops
AMD Opteron 2356 (Barcelona)
Intel Xeon E5345 (Clovertown)
75 GFlop/s
74 Gflop/s
IBM QS20 Cell Blade
Sun T2 T5140 (Victoria Falls)
29 GFlop/s
19 GFlop/s
SPEs only
49
Multicore SMPs total DRAM bandwidth
AMD Opteron 2356 (Barcelona)
Intel Xeon E5345 (Clovertown)
21 GB/s (read) 10 GB/s (write)
21 GB/s
IBM QS20 Cell Blade
Sun T2 T5140 (Victoria Falls)
51 GB/s
42 GB/s (read) 21 GB/s (write)
SPEs only
50
Multicore SMPs with Non-Uniform Memory Access -
NUMA
AMD Opteron 2356 (Barcelona)
Intel Xeon E5345 (Clovertown)
IBM QS20 Cell Blade
Sun T2 T5140 (Victoria Falls)
SPEs only
51
Set of 14 test matrices
  • All bigger than the caches of our SMPs

2K x 2K Dense matrix stored in sparse format
Dense
Well Structured (sorted by nonzeros/row)
Protein
FEM / Spheres
FEM / Cantilever
Wind Tunnel
FEM / Harbor
QCD
FEM / Ship
Economics
Epidemiology
Poorly Structured hodgepodge
FEM / Accelerator
Circuit
webbase
Extreme Aspect Ratio (linear programming)
LP
52
SpMV Performance Naive parallelization
  • Out-of-the-box SpMV performance on a suite of 14 matrices
  • Scalability isn't great
  • Compare to # threads per platform: 8, 8, 128, 16

Naïve Pthreads
Naïve
53
SpMV Performance NUMA and Software Prefetching
  • NUMA-aware allocation is essential on NUMA SMPs.
  • Explicit software prefetching can boost bandwidth
    and change cache replacement policies
  • used exhaustive search

54
SpMV Performance Matrix Compression
  • Compression includes
  • register blocking
  • other formats
  • smaller indices
  • Use heuristic rather than search

55
SpMV Performance cache and TLB blocking
Cache/LS/TLB Blocking
Matrix Compression
SW Prefetching
NUMA/Affinity
Naïve Pthreads
Naïve
56
SpMV Performance Architecture specific
optimizations
Cache/LS/TLB Blocking
Matrix Compression
SW Prefetching
NUMA/Affinity
Naïve Pthreads
Naïve
57
SpMV Performance max speedup
  • Fully auto-tuned SpMV performance across the
    suite of matrices
  • Included SPE/local store optimized version
  • Why do some optimizations work better on some
    architectures?

2.7x
4.0x
2.9x
35x
Cache/LS/TLB Blocking
Matrix Compression
SW Prefetching
NUMA/Affinity
Naïve Pthreads
Naïve
58
Avoiding Communication in Sparse Linear Algebra
  • k steps of a typical iterative solver for Ax=b or Ax=λx
  • Does k SpMVs with the starting vector (e.g., with b, if solving Ax=b)
  • Finds the best solution among all linear combinations of these k+1 vectors
  • Many such Krylov Subspace Methods
  • Conjugate Gradients, GMRES, Lanczos, Arnoldi, …
  • Goal: minimize communication in Krylov Subspace Methods
  • Assume the matrix is well-partitioned, with modest surface-to-volume ratio
  • Parallel implementation
  • Conventional: O(k log p) messages, because of k calls to SpMV
  • New: O(log p) messages - optimal
  • Serial implementation
  • Conventional: O(k) moves of data from slow to fast memory
  • New: O(1) moves of data - optimal
  • Lots of speedup possible (modeled and measured)
  • Price: some redundant computation

59
Locally Dependent Entries for [x, Ax], A tridiagonal, 2 processors
Proc 1
Proc 2
Can be computed without communication
60
Locally Dependent Entries for [x, Ax, A²x], A tridiagonal, 2 processors
Proc 1
Proc 2
Can be computed without communication
61
Locally Dependent Entries for [x, Ax, …, A³x], A tridiagonal, 2 processors
Proc 1
Proc 2
Can be computed without communication
62
Locally Dependent Entries for [x, Ax, …, A⁴x], A tridiagonal, 2 processors
Proc 1
Proc 2
Can be computed without communication
63
Locally Dependent Entries for [x, Ax, …, A⁸x], A tridiagonal, 2 processors
Proc 1
Proc 2
Can be computed without communication; k=8-fold reuse of A
64
Remotely Dependent Entries for [x, Ax, …, A⁸x], A tridiagonal, 2 processors
Proc 1
Proc 2
One message to get the data needed to compute the remotely dependent entries, not k=8.
Minimizes the number of messages = latency cost.
Price: redundant work ∝ surface/volume ratio (see the sketch below)
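A minimal per-process sketch of this idea for the tridiagonal case, assuming the k ghost values on each side have already arrived in that one message and that the stencil has constant coefficients (lo, mid, hi); the array layout and names are mine:

/* Compute [x, Ax, ..., A^k x] on a block of n owned entries of a
   tridiagonal A.  ext[] holds the owned entries plus k ghost entries on
   each side (length n + 2k), fetched in a single message.  V[s] (same
   length) receives A^s x; entries [s, n+2k-s) of V[s] are valid, which
   always covers the owned range [k, k+n). */
void local_matrix_powers(int n, int k, double lo, double mid, double hi,
                         const double *ext, double **V)
{
  int m = n + 2*k;
  for (int i = 0; i < m; i++) V[0][i] = ext[i];
  for (int s = 1; s <= k; s++)
    for (int i = s; i < m - s; i++)          /* valid region shrinks by 1 per step */
      V[s][i] = lo*V[s-1][i-1] + mid*V[s-1][i] + hi*V[s-1][i+1];
}

The redundant work is exactly the recomputation inside the shrinking ghost regions, which is what the surface/volume remark above refers to.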
65
Remotely Dependent Entries for [x, Ax, A²x, A³x], A irregular, multiple processors
66
Sequential [x, Ax, …, A⁴x], with memory hierarchy
One read of the matrix from slow memory, not k=4.
Minimizes words moved = bandwidth cost.
No redundant work.
67
Performance results on 8-Core Clovertown
68
Optimizing Communication Complexity of Sparse
Solvers
  • Example: GMRES for Ax=b on a 2D mesh
  • x lives on an n-by-n mesh
  • Partitioned on a p^(1/2)-by-p^(1/2) grid of p processors
  • A has a 5-point stencil (Laplacian)
  • (Ax)(i,j) = linear_combination(x(i,j), x(i,j±1), x(i±1,j)) - see the stencil sketch below
  • Ex: 18-by-18 mesh on a 3-by-3 grid of 9 processors
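For reference, a small sketch of applying that 5-point stencil on the n-by-n mesh (this is the SpMV each GMRES step performs); treating off-mesh neighbors as zero is my simplification of the boundary handling:

/* y = A*x for the 2D 5-point Laplacian on an n x n mesh, row-major. */
void apply_laplacian_5pt(int n, const double *x, double *y)
{
  for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++) {
      double v = 4.0 * x[i*n + j];
      if (i > 0)     v -= x[(i-1)*n + j];
      if (i < n - 1) v -= x[(i+1)*n + j];
      if (j > 0)     v -= x[i*n + (j-1)];
      if (j < n - 1) v -= x[i*n + (j+1)];
      y[i*n + j] = v;
    }
}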

69
Minimizing Communication of GMRES
  • What is the cost (flops, words, messages) of k steps of standard GMRES?

GMRES, ver. 1:
  for i = 1 to k
    w = A · v(i-1)
    MGS(w, v(0), …, v(i-1))   … Modified Gram-Schmidt
    update v(i), H
  endfor
  solve LSQ problem with H

[Figure: each processor owns an (n/p^(1/2)) x (n/p^(1/2)) subgrid of the mesh]
  • Cost(A·v) = k · (9n²/p flops, 4n/p^(1/2) words, 4 messages)
  • Cost(MGS) ≈ (k²/2) · (4n²/p flops, log p words, log p messages)
  • Total cost ≈ Cost(A·v) + Cost(MGS)
  • Can we reduce the latency?

70
Minimizing Communication of GMRES
  • Cost(GMRES, ver. 1) = Cost(A·v) + Cost(MGS)

= (9kn²/p, 4kn/p^(1/2), 4k) + (2k²n²/p, k²·log p / 2, k²·log p / 2)
  • How much latency cost from A·v can you avoid? Almost all.

71
Minimizing Communication of GMRES
  • Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)

= (9kn²/p, 4kn/p^(1/2), 8) + (2k²n²/p, k²·log p / 2, k²·log p / 2)
  • How much latency cost from MGS can you avoid? Almost all.

72
Minimizing Communication of GMRES
  • Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)

= (9kn²/p, 4kn/p^(1/2), 8) + (2k²n²/p, k²·log p / 2, k²·log p / 2)
  • How much latency cost from MGS can you avoid? Almost all.

73
Minimizing Communication of GMRES
  • Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)

= (9kn²/p, 4kn/p^(1/2), 8) + (2k²n²/p, k²·log p / 2, k²·log p / 2)
  • How much latency cost from MGS can you avoid? Almost all.

74
(No Transcript)
75
Minimizing Communication of GMRES
  • Cost(GMRES, ver. 3) = Cost(W) + Cost(TSQR)

= (9kn²/p, 4kn/p^(1/2), 8) + (2k²n²/p, k²·log p / 2, log p)
  • Latency cost independent of k, just log p - optimal
  • Oops: W is from the power method, so precision is lost. What to do?
  • Use a different polynomial basis
  • Not the monomial basis W = [v, Av, A²v, …]; instead:
  • Newton basis: W_N = [v, (A - θ₁I)v, (A - θ₂I)(A - θ₁I)v, …], or
  • Chebyshev basis: W_C = [v, T₁(v), T₂(v), …]
76
(No Transcript)
77
Speedups on 8-core Clovertown
78
Conclusions
  • Fast code must minimize communication
  • Especially for sparse matrix computations, because communication dominates
  • Generating fast code for a single SpMV
  • The design space of possible algorithms must be searched at run-time, when the sparse matrix is available
  • The design space should be searched automatically
  • Biggest speedups come from minimizing communication in an entire sparse solver
  • Many more opportunities to minimize communication in multiple SpMVs than in one
  • Requires transforming the entire algorithm
  • Lots of open problems

79
Extra slides
80
Quick-and-dirty Parallelism: OSKI-PETSc
  • Extend PETSc's distributed-memory SpMV (MATMPIAIJ)
  • PETSc
  • Each process stores diag (all-local) and off-diag submatrices
  • OSKI-PETSc
  • Add OSKI wrappers
  • Each submatrix tuned independently

p0
p1
p2
p3
81
OSKI-PETSc Proof-of-Concept Results
  • Matrix 1: Accelerator cavity design (R. Lee @ SLAC)
  • N = 1 M, 40 M non-zeros
  • 2x2 dense block substructure
  • Symmetric
  • Matrix 2: Linear programming (Italian Railways)
  • Short-and-fat: 4k x 1M, 11M non-zeros
  • Highly unstructured
  • Big speedup from cache blocking; no native PETSc format
  • Evaluation machine: Xeon cluster
  • Peak: 4.8 Gflop/s per node

82
Accelerator Cavity Matrix
83
OSKI-PETSc Performance Accel. Cavity
84
Linear Programming Matrix

85
OSKI-PETSc Performance LP Matrix
86
Performance Results
  • Measured Multicore (Clovertown) speedups up to
    6.4x
  • Measured/Modeled sequential OOC speedup up to 3x
  • Modeled parallel Petascale speedup up to 6.9x
  • Modeled parallel Grid speedup up to 22x
  • Sequential speedup due to bandwidth, works for
    many problem sizes
  • Parallel speedup due to latency, works for
    smaller problems on many processors

87
Speedups on Intel Clovertown (8 core)
88
Extensions
  • Other Krylov methods
  • Arnoldi, CG, Lanczos, …
  • Preconditioning
  • Solve MAx=Mb, where the preconditioning matrix M is chosen to make the system easier
  • M approximates A⁻¹ somehow, but cheaply, to accelerate convergence
  • Cheap as long as contributions from distant parts of the system can be compressed
  • Sparsity
  • Low rank

89
Design Space for [x, Ax, …, Aᵏx] (1/3)
  • Mathematical operation
  • Keep all vectors
  • Krylov Subspace Methods
  • Improving conditioning of the basis
  • W = [x, p₁(A)x, p₂(A)x, …, p_k(A)x]
  • p_i(A) = degree-i polynomial chosen to reduce cond(W)
  • Preconditioning (Ay=b → MAy=Mb)
  • [x, Ax, MAx, AMAx, MAMAx, …, (MA)ᵏx]
  • Keep last vector Aᵏx only
  • Jacobi, Gauss-Seidel

90
Design Space for [x, Ax, …, Aᵏx] (2/3)
  • Representation of sparse A
  • Zero pattern may be explicit or implicit
  • Nonzero entries may be explicit or implicit
  • Implicit → saves memory and communication
  • Representation of dense preconditioners M
  • Low-rank off-diagonal blocks (semiseparable)

91
Design Space for [x, Ax, …, Aᵏx] (3/3)
  • Parallel implementation
  • From simple indexing, with redundant flops ∝ surface/volume ratio
  • To complicated indexing, with fewer redundant flops
  • Sequential implementation
  • Depends on whether vectors fit in fast memory
  • Reordering rows and columns of A
  • Important in parallel and sequential cases
  • Can be reduced to a pair of Traveling Salesman Problems
  • Plus all the optimizations for one SpMV!

92
Summary
  • Communication-Avoiding Linear Algebra (CALA)
  • Lots of related work
  • Some going back to 1960s
  • Reports discuss this comprehensively, not here
  • Our contributions
  • Several new algorithms, improvements on old ones
  • Unifying parallel and sequential approaches to
    avoiding communication
  • Time for these algorithms has come, because of
    growing communication costs
  • Why avoid communication just for linear algebra
    motifs?

93
Possible Class Projects
  • Come to BeBOP meetings (Tuesdays 9-10:30, 606 Soda)
  • Incorporate multicore optimizations into OSKI
  • Experiment with SpMV on GPUs
  • Which optimizations are most effective?
  • Try to speed up particular matrices of interest
  • Data mining
  • Experiment with the new [x, Ax, …, Aᵏx] kernel
  • GPU, multicore, distributed memory
  • On matrices of interest
  • Experiment with new solvers using this kernel

94
Extra Slides
95
Tuning Higher Level Algorithms
  • So far we have tuned a single sparse matrix kernel
  • y = AᵀAx, motivated by a higher-level algorithm (SVD)
  • What can we do by extending tuning to a higher level?
  • Consider Krylov subspace methods for Ax=b, Ax=λx
  • Conjugate Gradients (CG), GMRES, Lanczos, …
  • The inner loop does y=Ax, dot products, saxpys, scalar ops
  • The inner loop costs at least O(1) messages
  • k iterations cost at least O(k) messages
  • Our goal: show how to do k iterations with O(1) messages
  • Possible payoff: make Krylov subspace methods much faster on machines with slow networks
  • Memory bandwidth improvements too (not discussed)
  • Obstacles: numerical stability, preconditioning, …
96
Krylov Subspace Methods for Solving Axb
  • Compute a basis for a subspace V by doing y = Ax k times
  • Find the best solution in that Krylov subspace V
  • Given starting vector x₁, V is spanned by x₂ = Ax₁, x₃ = Ax₂, …, x_k = Ax_{k-1}
  • GMRES finds an orthogonal basis of V by Gram-Schmidt, so it actually does a different set of SpMVs than in the last bullet

97
Example Standard GMRES
  • r = b - A·x₁,  β = length(r),  v₁ = r / β   … length(r) = sqrt(Σ rᵢ²)
  • for m = 1 to k do
  •   w = A·v_m   … at least O(1) messages
  •   for i = 1 to m do   … Gram-Schmidt
  •     h_{i,m} = dotproduct(v_i, w)   … at least O(1) messages, or log(p)
  •     w = w - h_{i,m}·v_i
  •   end for
  •   h_{m+1,m} = length(w)   … at least O(1) messages, or log(p)
  •   v_{m+1} = w / h_{m+1,m}
  • end for
  • find y minimizing length(H_k·y - β·e₁)   … small, local problem
  • new x = x₁ + V_k·y,  where V_k = [v₁, v₂, …, v_k]

O(k²), or O(k² log p), messages altogether
98
Latency-Avoiding GMRES (1)
  • r = b - A·x₁,  β = length(r),  v₁ = r / β   … O(log p) messages
  • W_{k+1} = [v₁, A·v₁, A²·v₁, …, Aᵏ·v₁]   … O(1) messages
  • [Q, R] = qr(W_{k+1})   … QR decomposition, O(log p) messages
  • H_k = R(:, 2:k+1) · (R(1:k, 1:k))⁻¹   … small, local problem
  • find y minimizing length(H_k·y - β·e₁)   … small, local problem
  • new x = x₁ + Q_k·y   … local problem

O(log p) messages altogether - independent of k
99
Latency-Avoiding GMRES (2)
  • [Q, R] = qr(W_{k+1})   … QR decomposition, O(log p) messages
  • An easy, but not so stable, way to do it:
  • X(myproc) = W_{k+1}(myproc)ᵀ · W_{k+1}(myproc)   … local computation
  • Y = sum_reduction(X(myproc))   … O(log p) messages
  •   … now Y = W_{k+1}ᵀ · W_{k+1}
  • R = (cholesky(Y))ᵀ   … small, local computation
  • Q(myproc) = W_{k+1}(myproc) · R⁻¹   … local computation (see the identity below)
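Why this yields an orthonormal Q (a one-line check, in the same notation): since Y = WᵀW and R is chosen so that Y = RᵀR,

Q^T Q = R^{-T}\,W^T W\,R^{-1} = R^{-T}\,(R^T R)\,R^{-1} = I.

The instability comes from forming WᵀW explicitly, which squares the condition number of W; that is what the "not so stable" caveat above refers to.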

100
Numerical example (1)
Diagonal matrix with n=1000, A_ii from 1 down to 10⁻⁵. Instability as k grows, after many iterations.
101
Numerical Example (2)
Partial remedy: restarting periodically (every 120 iterations). Other remedies: higher precision, a different basis than [v, A·v, …, Aᵏ·v].
102
Operation Counts for [Ax, A²x, A³x, …, Aᵏx] on p procs
103
Summary and Future Work
  • Dense
  • LAPACK
  • ScaLAPACK
  • Communication primitives
  • Sparse
  • Kernels, Stencils
  • Higher level algorithms
  • All of the above on new architectures
  • Vector, SMPs, multicore, Cell,
  • High level support for tuning
  • Specification language
  • Integration into compilers

104
Extra Slides
105
A Sparse Matrix You Encounter Every Day
Who am I?
I am a Big Repository Of useful And useless Facts
alike. Who am I? (Hint: Not your e-mail inbox.)
106
What about the Google Matrix?
  • Google's approach
  • Approx. once a month: rank all pages using the connectivity structure
  • Find the dominant eigenvector of a matrix
  • At query time: return the list of pages ordered by rank
  • Matrix A = α·G + (1-α)·(1/n)·u·uᵀ
  • Markov model: surfer follows a link with probability α, jumps to a random page with probability 1-α
  • G is the n x n connectivity matrix [n ≈ billions]
  • g_ij is non-zero if page i links to page j
  • Normalized so each column sums to 1
  • Very sparse: about 7-8 non-zeros per row (power-law dist.)
  • u is a vector of all 1 values
  • Steady-state probability xᵢ of landing on page i is the solution to x = Ax
  • Approximate x by the power method: x ≈ Aᵏx₀ (see the sketch below)
  • In practice, k ≈ 25
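A minimal power-method sweep matching the structure above, handling the rank-one term analytically (since u is all ones, uᵀx = Σᵢ xᵢ); it reuses the spmv_csr sketch from earlier, G is assumed column-normalized as stated, and dangling pages are ignored for simplicity:

/* One step of x <- A*x with A = alpha*G + (1-alpha)*(1/n)*u*u^T.
   G is in CSR (ptr, ind, val). */
void pagerank_step(size_t n, const size_t *ptr, const size_t *ind,
                   const double *val, double alpha,
                   const double *x, double *x_new)
{
  double s = 0.0;
  for (size_t i = 0; i < n; i++) s += x[i];      /* u^T x */

  for (size_t i = 0; i < n; i++) x_new[i] = 0.0;
  spmv_csr(n, ptr, ind, val, x, x_new);          /* x_new = G*x */

  double teleport = (1.0 - alpha) * s / (double)n;
  for (size_t i = 0; i < n; i++)
    x_new[i] = alpha * x_new[i] + teleport;
}
/* Repeat roughly k ~ 25 times, swapping x and x_new each sweep. */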

107
Current Work
  • Public software release
  • Impact on library designs: Sparse BLAS, Trilinos, PETSc, …
  • Integration in large-scale applications
  • DOE: accelerator design; plasma physics
  • Geophysical simulation based on Block Lanczos (AᵀAX; LBL)
  • Systematic heuristics for data structure selection?
  • Evaluation of emerging architectures
  • Revisiting vector micros
  • Other sparse kernels
  • Matrix triple products, Aᵏx
  • Parallelism
  • Sparse benchmarks (with UTK) [Gahvari, Hoemmen]
  • Automatic tuning of MPI collective ops [Nishtala, et al.]

108
Summary of High-Level Themes
  • Kernel-centric optimization
  • Vs. basic block, trace, path optimization, for
    instance
  • Aggressive use of domain-specific knowledge
  • Performance bounds modeling
  • Evaluating software quality
  • Architectural characterizations and consequences
  • Empirical search
  • Hybrid on-line/run-time models
  • Statistical performance models
  • Exploit information from sampling, measuring

109
Related Work
  • My bibliography 337 entries so far
  • Sample area 1 Code generation
  • Generative generic programming
  • Sparse compilers
  • Domain-specific generators
  • Sample area 2 Empirical search-based tuning
  • Kernel-centric
  • linear algebra, signal processing, sorting, MPI,
  • Compiler-centric
  • profiling FDO, iterative compilation,
    superoptimizers, self-tuning compilers,
    continuous program optimization

110
Future Directions (A Bag of Flaky Ideas)
  • Composable code generators and search spaces
  • New application domains
  • PageRank multilevel block algorithms for
    topic-sensitive search?
  • New kernels cryptokernels
  • rich mathematical structure germane to
    performance lots of hardware
  • New tuning environments
  • Parallel, Grid, whole systems
  • Statistical models of application performance
  • Statistical learning of concise parametric models
    from traces for architectural evaluation
  • Compiler/automatic derivation of parametric models

111
Possible Future Work
  • Different Architectures
  • New FP instruction sets (SSE2)
  • SMP / multicore platforms
  • Vector architectures
  • Different Kernels
  • Higher Level Algorithms
  • Parallel versions of kernels, with optimized communication
  • Block algorithms (e.g., Lanczos)
  • XBLAS (extra precision)
  • Produce benchmarks
  • Augment the HPCC Benchmark
  • Make it possible to combine optimizations easily for any kernel
  • Related tuning activities (LAPACK & ScaLAPACK)

112
Review of Tuning by Illustration
  • (Extra Slides)

113
Splitting for Variable Blocks and Diagonals
  • Decompose A = A₁ + A₂ + … + A_t
  • Detect canonical structures (sampling)
  • Split
  • Tune each Aᵢ
  • Improve performance and save storage
  • New data structures
  • Unaligned block CSR
  • Relax alignment in rows & columns
  • Row-segmented diagonals

114
Example Variable Block Row (Matrix 12)
2.1x over CSR, 1.8x over RB
115
Example Row-Segmented Diagonals
2x over CSR
116
Mixed Diagonal and Block Structure
117
Summary
  • Automated block size selection
  • Empirical modeling and search
  • Register blocking for SpMV, triangular solve, AᵀAx
  • Not fully automated
  • Given a matrix, select splittings and transformations
  • Lots of combinatorial problems
  • TSP reordering to create dense blocks [Pinar '97; Moon, et al. '04]

118
Extra Slides
119
A Sparse Matrix You Encounter Every Day
Who am I?
I am a Big Repository Of useful And useless Facts
alike. Who am I? (Hint: Not your e-mail inbox.)
120
Problem Context
  • Sparse kernels abound
  • Models of buildings, cars, bridges, economies, …
  • Google's PageRank algorithm
  • Historical trends
  • Sparse matrix-vector multiply (SpMV): 10% of peak
  • 2x faster with hand-tuning
  • Tuning is becoming more difficult over time
  • Promise of automatic tuning: PHiPAC/ATLAS, FFTW, …
  • Challenges to high performance
  • Not dense linear algebra!
  • Complex data structures: indirect, irregular memory access
  • Performance depends strongly on run-time inputs

121
Key Questions, Ideas, Conclusions
  • How to tune basic sparse kernels automatically?
  • Empirical modeling and search
  • Up to 4x speedups for SpMV
  • 1.8x for triangular solve
  • 4x for AᵀAx; 2x for A²x
  • 7x for multiple vectors
  • What are the fundamental limits on performance?
  • Kernel-, machine-, and matrix-specific upper bounds
  • Achieve 75% or more of the bound for SpMV, limiting the benefit of low-level tuning
  • Consequences for architecture?
  • General techniques for empirical search-based tuning?
  • Statistical models of performance

122
Road Map
  • Sparse matrix-vector multiply (SpMV) in a
    nutshell
  • Historical trends and the need for search
  • Automatic tuning techniques
  • Upper bounds on performance
  • Statistical models of performance

123
Compressed Sparse Row (CSR) Storage
Matrix-vector multiply kernel y(i) ? y(i)
A(i,j)x(j)
Matrix-vector multiply kernel y(i) ? y(i)
A(i,j)x(j) for each row i for kptri to
ptri1 do yi yi valkxindk
Matrix-vector multiply kernel y(i) ? y(i)
A(i,j)x(j) for each row i for kptri to
ptri1 do yi yi valkxindk
124
Road Map
  • Sparse matrix-vector multiply (SpMV) in a
    nutshell
  • Historical trends and the need for search
  • Automatic tuning techniques
  • Upper bounds on performance
  • Statistical models of performance

125
Historical Trends in SpMV Performance
  • The data
  • Uniprocessor SpMV performance since 1987
  • Untuned and tuned implementations
  • Cache-based superscalar micros; some vector machines
  • LINPACK

126
SpMV Historical Trends Mflop/s
127
Example The Difficulty of Tuning
  • n = 21,216
  • nnz = 1.5 M
  • kernel: SpMV
  • Source: NASA structural analysis problem

128
Still More Surprises
  • More complicated non-zero structure in general

129
Still More Surprises
  • More complicated non-zero structure in general
  • Example 3x3 blocking
  • Logical grid of 3x3 cells

130
Historical Trends Mixed News
  • Observations
  • Good news: Moore's-law-like behavior
  • Bad news: untuned is 10% of peak or less, and worsening
  • Good news: tuned is roughly 2x better today, and improving
  • Bad news: tuning is complex
  • (Not really news: SpMV is not LINPACK)
  • Questions
  • Application: automatic tuning?
  • Architect: what machines are good for SpMV?

131
Road Map
  • Sparse matrix-vector multiply (SpMV) in a
    nutshell
  • Historical trends and the need for search
  • Automatic tuning techniques
  • SpMV [SC'02; IJHPCA '04b]
  • Sparse triangular solve (SpTS) [ICS/POHLL '02]
  • AᵀAx [ICCS/WoPLA '03]
  • Upper bounds on performance
  • Statistical models of performance

132
SPARSITY Framework for Tuning SpMV
  • SPARSITY: automatic tuning for SpMV [Im & Yelick '99]
  • General approach
  • Identify and generate the implementation space
  • Search the space using empirical models & experiments
  • Prototype library and heuristic for choosing the register block size
  • Also cache-level blocking, multiple vectors
  • What's new?
  • New block size selection heuristic
  • Within 10% of optimal; replaces the previous version
  • Expanded implementation space
  • Variable block splitting, diagonals, combinations
  • New kernels: sparse triangular solve, AᵀAx, Aʳx

133
Automatic Register Block Size Selection
  • Selecting the r x c block size
  • Off-line benchmark: characterize the machine
  • Precompute Mflops(r,c) using a dense matrix for each r x c
  • Once per machine/architecture
  • Run-time search: characterize the matrix
  • Sample A to estimate Fill(r,c) for each r x c
  • Run-time heuristic model
  • Choose r, c to maximize Mflops(r,c) / Fill(r,c)
  • Run-time costs
  • Up to 40 SpMVs (empirical worst case)

134
Accuracy of the Tuning Heuristics (1/4)
DGEMV
NOTE Fair flops used (ops on explicit zeros
not counted as work)
135
Accuracy of the Tuning Heuristics (2/4)
DGEMV
136
Accuracy of the Tuning Heuristics (3/4)
DGEMV
137
Accuracy of the Tuning Heuristics (4/4)
DGEMV
138
Road Map
  • Sparse matrix-vector multiply (SpMV) in a
    nutshell
  • Historical trends and the need for search
  • Automatic tuning techniques
  • Upper bounds on performance
  • SC02
  • Statistical models of performance

139
Motivation for Upper Bounds Model
  • Questions
  • Speedups are good, but what is the speed limit?
  • Independent of instruction scheduling, selection
  • What machines are good for SpMV?

140
Upper Bounds on Performance Blocked SpMV
  • P = (flops) / (time)
  • Flops = 2 · nnz(A)
  • Lower bound on time: two main assumptions
  • 1. Count memory ops only (streaming)
  • 2. Count only compulsory and capacity misses; ignore conflicts
  • Account for line sizes
  • Account for matrix size and nnz
  • Charge the minimum access latency αi at the Li cache, αmem at memory
  • e.g., Saavedra-Barrera and PMaC MAPS benchmarks

141
Example Bounds on Itanium 2
142
Example Bounds on Itanium 2
143
Example Bounds on Itanium 2
144
Fraction of Upper Bound Across Platforms
145
Achieved Performance and Machine Balance
  • Machine balance [Callahan '88; McCalpin '95]
  • Balance = Peak Flop Rate / Bandwidth (flops per double)
  • Ideal balance for mat-vec: 2 flops per double
  • For SpMV, even less
  • SpMV ≈ streaming
  • 1 / (avg load time to stream 1 array) ≈ (bandwidth)
  • Sustained balance = peak flops / model bandwidth (see the note below)
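In symbols (my notation), the balance argument above reads:

B = \frac{\text{peak flop rate}}{\text{bandwidth (doubles/s)}}, \qquad
P_{\mathrm{SpMV}} \lesssim 2 \times \text{bandwidth (doubles/s)} = \frac{2}{B} \times \text{peak flop rate},

since SpMV performs at most 2 flops per matrix double streamed from memory (index and vector traffic only push the bound lower).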

146
(No Transcript)
147
Where Does the Time Go?
  • Most time assigned to memory
  • Caches disappear when line sizes are equal
  • Strictly increasing line sizes

148
Execution Time Breakdown Matrix 40
149
Speedups with Increasing Line Size
150
Summary Performance Upper Bounds
  • What is the best we can do for SpMV?
  • Limits to low-level tuning of blocked
    implementations
  • Refinements?
  • What machines are good for SpMV?
  • Partial answer balance characterization
  • Architectural consequences?
  • Example Strictly increasing line sizes

151
Road Map
  • Sparse matrix-vector multiply (SpMV) in a
    nutshell
  • Historical trends and the need for search
  • Automatic tuning techniques
  • Upper bounds on performance
  • Tuning other sparse kernels
  • Statistical models of performance
  • FDO 00 IJHPCA 04a

152
Statistical Models for Automatic Tuning
  • Idea 1: A statistical criterion for stopping a search
  • A general search model
  • Generate an implementation
  • Measure performance
  • Repeat
  • Stop when the probability of being within ε of optimal falls below a threshold
  • Can estimate the distribution on-line
  • Idea 2: Statistical performance models
  • Problem: choose 1 among m implementations at run-time
  • Sample performance off-line, build a statistical model

153
Example Select a Matmul Implementation
154
Example Support Vector Classification
155
Road Map
  • Sparse matrix-vector multiply (SpMV) in a
    nutshell
  • Historical trends and the need for search
  • Automatic tuning techniques
  • Upper bounds on performance
  • Tuning other sparse kernels
  • Statistical models of performance
  • Summary and Future Work

156
Summary of High-Level Themes
  • Kernel-centric optimization
  • Vs. basic block, trace, path optimization, for
    instance
  • Aggressive use of domain-specific knowledge
  • Performance bounds modeling
  • Evaluating software quality
  • Architectural characterizations and consequences
  • Empirical search
  • Hybrid on-line/run-time models
  • Statistical performance models
  • Exploit information from sampling, measuring

157
Related Work
  • My bibliography 337 entries so far
  • Sample area 1 Code generation
  • Generative generic programming
  • Sparse compilers
  • Domain-specific generators
  • Sample area 2 Empirical search-based tuning
  • Kernel-centric
  • linear algebra, signal processing, sorting, MPI,
  • Compiler-centric
  • profiling FDO, iterative compilation,
    superoptimizers, self-tuning compilers,
    continuous program optimization

158
Future Directions (A Bag of Flaky Ideas)
  • Composable code generators and search spaces
  • New application domains
  • PageRank multilevel block algorithms for
    topic-sensitive search?
  • New kernels cryptokernels
  • rich mathematical structure germane to
    performance lots of hardware
  • New tuning environments
  • Parallel, Grid, whole systems
  • Statistical models of application performance
  • Statistical learning of concise parametric models
    from traces for architectural evaluation
  • Compiler/automatic derivation of parametric models

159
Acknowledgements
  • Super-advisors: Jim and Kathy
  • Undergraduate R.A.s: Attila, Ben, Jen, Jin, Michael, Rajesh, Shoaib, Sriram, Tuyet-Linh
  • See pages xvi-xvii of the dissertation.

160
TSP-based Reordering Before
(Pinar '97; Moon, et al. '04)
161
TSP-based Reordering After
(Pinar '97; Moon, et al. '04). Up to 2x speedups over CSR.
162
Example L2 Misses on Itanium 2
Misses measured using PAPI [Browne '00]
163
Example Distribution of Blocked Non-Zeros
164
Sparse/Dense Partitioning for SpTS
  • Partition L into sparse (L₁, L₂) and dense L_D
  • Perform SpTS in three steps
  • SPARSITY optimizations for (1)-(2); DTRSV for (3)
  • Tuning parameters: block size, size of the dense triangle

165
SpTS Performance Power3
166
(No Transcript)
167
Summary of SpTS and AATx Results
  • SpTS: similar to SpMV
  • 1.8x speedups; limited benefit from low-level tuning
  • AAᵀx, AᵀAx
  • Cache interleaving only: up to 1.6x speedups
  • Register + cache: up to 4x speedups
  • 1.8x speedup over register only
  • Similar heuristic, same accuracy (within 10% of optimal)
  • Further from upper bounds: 60-80%
  • Opportunity for better low-level tuning a la PHiPAC/ATLAS
  • Matrix triple products? Aᵏx?
  • Preliminary work

168
Register Blocking Speedup
169
Register Blocking Performance
170
Register Blocking Fraction of Peak
171
Example Confidence Interval Estimation
172
Costs of Tuning
173
Splitting UBCSR Pentium III
174
Splitting UBCSR Power4
175
Splitting + UBCSR Storage: Power4
176
(No Transcript)
177
Example Variable Block Row (Matrix 13)
178
(No Transcript)
179
Preliminary Results (Matrix Set 2) Itanium 2
Dense
FEM
FEM (var)
Bio
LP
Econ
Stat
180
Multiple Vector Performance
181
(No Transcript)
182
What about the Google Matrix?
  • Google's approach
  • Approx. once a month: rank all pages using the connectivity structure
  • Find the dominant eigenvector of a matrix
  • At query time: return the list of pages ordered by rank
  • Matrix A = α·G + (1-α)·(1/n)·u·uᵀ
  • Markov model: surfer follows a link with probability α, jumps to a random page with probability 1-α
  • G is the n x n connectivity matrix [n ≈ 3 billion]
  • g_ij is non-zero if page i links to page j
  • Normalized so each column sums to 1
  • Very sparse: about 7-8 non-zeros per row (power-law dist.)
  • u is a vector of all 1 values
  • Steady-state probability xᵢ of landing on page i is the solution to x = Ax
  • Approximate x by the power method: x ≈ Aᵏx₀
  • In practice, k ≈ 25

183
(No Transcript)
184
MAPS Benchmark Example Power4
185
MAPS Benchmark Example Itanium 2
186
Saavedra-Barrera Example Ultra 2i
187
Execution Time Breakdown (PAPI) Matrix 40
188
Preliminary Results (Matrix Set 1) Itanium 2
LP
FEM
FEM (var)
Assorted
Dense
189
Tuning Sparse Triangular Solve (SpTS)
  • Compute x = L⁻¹b, where L is sparse lower triangular and x, b are dense
  • L from sparse LU has rich dense substructure
  • The dense trailing triangle can account for 20-90% of the matrix non-zeros
  • SpTS optimizations
  • Split into a sparse trapezoid and a dense trailing triangle (see the sketch below)
  • Use tuned dense BLAS (DTRSV) on the dense triangle
  • Use SPARSITY register blocking on the sparse part
  • Tuning parameters
  • Size of the dense trailing triangle
  • Register block size
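A minimal sketch of the split solve, assuming the trailing d x d triangle has been copied into a dense column-major array LD, the sparse trapezoid stays in CSR with the diagonal stored last in each row, and cblas_dtrsv is the tuned dense BLAS call mentioned above; all other names and layout choices are illustrative:

#include <cblas.h>

/* Solve L*x = b in place (x overwrites b).
   L1: sparse (n-d) x (n-d) lower triangle in CSR (ptr1, ind1, val1).
   L2: sparse d x (n-d) block below it in CSR (ptr2, ind2, val2).
   LD: dense d x d trailing triangle, column-major, lda = d. */
void spts_split(int n, int d,
                const int *ptr1, const int *ind1, const double *val1,
                const int *ptr2, const int *ind2, const double *val2,
                const double *LD, double *b)
{
  int n1 = n - d;

  /* (1) Sparse forward substitution: L1 * x1 = b1 */
  for (int i = 0; i < n1; i++) {
    double s = b[i];
    int last = ptr1[i+1] - 1;                 /* diagonal entry of row i */
    for (int k = ptr1[i]; k < last; k++)
      s -= val1[k] * b[ind1[k]];
    b[i] = s / val1[last];
  }

  /* (2) Update: b2 = b2 - L2 * x1 */
  for (int i = 0; i < d; i++)
    for (int k = ptr2[i]; k < ptr2[i+1]; k++)
      b[n1 + i] -= val2[k] * b[ind2[k]];

  /* (3) Dense solve on the trailing triangle with tuned BLAS */
  cblas_dtrsv(CblasColMajor, CblasLower, CblasNoTrans, CblasNonUnit,
              d, LD, d, b + n1, 1);
}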

190
Sparse Kernels and Optimizations
  • Kernels
  • Sparse matrix-vector multiply (SpMV): y = Ax
  • Sparse triangular solve (SpTS): x = T⁻¹b
  • y = AAᵀx, y = AᵀAx (see the sketch below)
  • Powers (y = Aᵏx), sparse triple product (RARᵀ), …
  • Optimization techniques (implementation space)
  • Register blocking
  • Cache blocking
  • Multiple dense vectors (x)
  • A has special structure (e.g., symmetric, banded, …)
  • Hybrid data structures (e.g., splitting, switch-to-dense, …)
  • Matrix reordering
  • How and when do we search?
  • Off-line: benchmark implementations
  • Run-time: estimate matrix properties, evaluate performance models based on benchmark data
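For example, the AᵀAx kernel above can be computed in one pass over the rows of A, reusing each sparse row a_i for both the dot product and the update (this row reuse is the cache-interleaving idea); the CSR layout and names below are mine:

/* y = A^T * (A*x), A is m x n in CSR (ptr, ind, val). */
void ata_times_x(size_t m, size_t n, const size_t *ptr, const size_t *ind,
                 const double *val, const double *x, double *y)
{
  for (size_t j = 0; j < n; j++) y[j] = 0.0;
  for (size_t i = 0; i < m; i++) {
    double t = 0.0;
    for (size_t k = ptr[i]; k < ptr[i+1]; k++)   /* t = a_i . x */
      t += val[k] * x[ind[k]];
    for (size_t k = ptr[i]; k < ptr[i+1]; k++)   /* y += t * a_i */
      y[ind[k]] += t * val[k];
  }
}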

191
Cache Blocked SpMV on LSI Matrix Ultra 2i
A is 10k x 255k with 3.7M non-zeros. Baseline: 16 Mflop/s. Best block size & performance: 16k x 64k, 28 Mflop/s.
192
Cache Blocking on LSI Matrix Pentium 4
A is 10k x 255k with 3.7M non-zeros. Baseline: 44 Mflop/s. Best block size & performance: 16k x 16k, 210 Mflop/s.
193
Cache Blocked SpMV on LSI Matrix Itanium
A is 10k x 255k with 3.7M non-zeros. Baseline: 25 Mflop/s. Best block size & performance: 16k x 32k, 72 Mflop/s.
194
Cache Blocked SpMV on LSI Matrix Itanium 2
A is 10k x 255k with 3.7M non-zeros. Baseline: 170 Mflop/s. Best block size & performance: 16k x 65k, 275 Mflop/s.
195
Inter-Iteration Sparse Tiling (1/3)
  • [Strout, et al., '01]
  • Let A be 6x6 tridiagonal
  • Consider y = A²x
  • t = Ax, y = At
  • Nodes = vector elements
  • Edges = matrix elements a_ij

196
Inter-Iteration Sparse Tiling (2/3)
  • [Strout, et al., '01]
  • Let A be 6x6 tridiagonal
  • Consider y = A²x
  • t = Ax, y = At
  • Nodes = vector elements
  • Edges = matrix elements a_ij
  • Orange = everything needed to compute y₁
  • Reuse a₁₁, a₁₂

197
Inter-Iteration Sparse Tiling (3/3)
  • [Strout, et al., '01]
  • Let A be 6x6 tridiagonal
  • Consider y = A²x
  • t = Ax, y = At
  • Nodes = vector elements
  • Edges = matrix elements a_ij
  • Orange = everything needed to compute y₁
  • Reuse a₁₁, a₁₂
  • Grey = y₂, y₃
  • Reuse a₂₃, a₃₃, a₄₃

198
Inter-Iteration Sparse Tiling Issues
  • Tile sizes (colored regions) grow with the number of iterations and increasing out-degree
  • G is likely to have a few nodes with high out-degree (e.g., Yahoo)
  • Mathematical tricks to limit tile size?
  • Judicious dropping of edges [Ng '01]

199
Summary and Questions
  • Need to understand matrix structure and machine
  • BeBOP: a suite of techniques to deal with different sparse structures and architectures
  • Google matrix problem
  • Established techniques within an iteration
  • Ideas for inter-iteration optimizations
  • Mathematical structure of the problem may help
  • Questions
  • Structure of G?
  • What are the computational bottlenecks?
  • Enabling future computations?
  • E.g., topic-sensitive PageRank → multiple-vector version [Haveliwala '02]
  • See www.cs.berkeley.edu/richie/bebop/intel/google for more info, including more complete Itanium 2 results.

200
Exploiting Matrix Structure
  • Symmetry (numerical or structural)
  • Reuse matrix entries
  • Can combine with register blocking, multiple
    vectors,
  • Matrix splitting
  • Split the matrix, e.g., into r x c and 1 x 1
  • No fill overhead
  • Large matrices with random structure
  • E.g., Latent Semantic Indexing (LSI) matrices
  • Technique: cache blocking
  • Store the matrix as 2ⁱ x 2ʲ sparse submatrices
  • Effective when the x vector is large
  • Currently, search to find the fastest size

201
Symmetric SpMV Performance Pentium 4
202
SpMV with Split Matrices Ultra 2i
203
Cache Blocking on Random Matrices Itanium
Speedup on four banded random matrices.
204
Sparse Kernels and Optimizations
  • Kernels
  • Sparse matrix-vector multiply (SpMV): y = Ax
  • Sparse triangular solve (SpTS): x = T⁻¹b
  • y = AAᵀx, y = AᵀAx
  • Powers (y = Aᵏx), sparse triple product (RARᵀ), …
  • Optimization techniques (implementation space)
  • Register blocking
  • Cache blocking
  • Multiple dense vectors (x)
  • A has special structure (e.g., symmetric, banded, …)
  • Hybrid data structures (e.g., splitting, switch-to-dense, …)
  • Matrix reordering
  • How and when do we search?
  • Off-line: benchmark implementations
  • Run-time: estimate matrix properties, evaluate performance models based on benchmark data

205
Register Blocked SpMV Pentium III
206
Register Blocked SpMV Ultra 2i
207
Register Blocked SpMV Power3
208
Register Blocked SpMV Itanium
209
Possible Optimization Techniques
  • Within an iteration, i.e., computing (G + uuᵀ)x once
  • Cache block G·x
  • On linear programming matrices and matrices with random structure (e.g., LSI): 1.5-4x speedups
  • Best block size is matrix- and machine-dependent
  • Reordering and/or splitting of G to separate dense structure (rows, columns, blocks)
  • Between iterations, e.g., (G + uuᵀ)²x
  • (G + uuᵀ)²x = G²x + (Gu)uᵀx + u(uᵀG)x + u(uᵀu)uᵀx
  • Compute Gu, uᵀG, uᵀu once for all iterations
  • G²x: inter-iteration tiling to read G only once

210
Multiple Vector Performance