Automatic Performance Tuning: Sparse Matrix Kernels (Transcript and Presenter's Notes)
1
Automatic Performance Tuning: Sparse Matrix Kernels
  • James Demmel
  • www.cs.berkeley.edu/~demmel/cs267_Spr05

2
Berkeley Benchmarking and OPtimization (BeBOP)
  • Prof. Katherine Yelick
  • Rich Vuduc
  • Many results in this talk are from Vuduc's PhD
    thesis, www.cs.berkeley.edu/~richie
  • Rajesh Nishtala, Mark Hoemmen, Hormozd Gahvari
  • Eun-Jin Im, many other earlier contributors
  • bebop.cs.berkeley.edu

3
Outline
  • Motivation for Automatic Performance Tuning
  • Recent results for sparse matrix kernels
  • OSKI: Optimized Sparse Kernel Interface
  • Future Work

4
Motivation for Automatic Performance Tuning
  • Writing high-performance software is hard
  • Make programming easier while getting high speed
  • Ideal: program in your favorite high-level
    language (Matlab, Python, PETSc, ...) and get a high
    fraction of peak performance
  • Reality: the best algorithm (and its implementation)
    can depend strongly on the problem, computer
    architecture, compiler, ...
  • Best choice can depend on knowing a lot of
    applied mathematics and computer science
  • How much of this can we teach?
  • How much of this can we automate?

5
Examples of Automatic Performance Tuning (1)
  • Dense BLAS
  • Sequential
  • PHiPAC (UCB), then ATLAS (UTK)
  • Now in Matlab, many other releases
  • math-atlas.sourceforge.net/
  • Fast Fourier Transform (FFT) variations
  • Sequential and Parallel
  • FFTW (MIT)
  • Widely used
  • www.fftw.org
  • Digital Signal Processing
  • SPIRAL: www.spiral.net (CMU)
  • MPI Collectives (UCB, UTK)
  • More projects, conferences, government reports, ...

6
Examples of Automatic Performance Tuning (2)
  • What do dense BLAS, FFTs, signal processing, MPI
    reductions have in common?
  • Can do the tuning off-line once per
    architecture, algorithm
  • Can take as much time as necessary (hours, a
    week)
  • At run-time, algorithm choice may depend only on a
    few parameters
  • Matrix dimension, size of FFT, etc.
  • Can't always do off-line tuning
  • Algorithm and implementation may strongly depend
    on data known only at run-time
  • Example: a sparse matrix's nonzero pattern determines
    both the best data structure and the implementation of
    sparse matrix-vector multiplication (SpMV)
  • The BeBOP project addresses this

7
Tuning Dense BLAS: PHiPAC
8
Tuning Dense BLAS: ATLAS
Extends the applicability of PHiPAC. Incorporated in
Matlab (with the rest of LAPACK).
9
Tuning Register Tile Sizes (Dense Matrix Multiply)
333 MHz Sun Ultra 2i: a 2-D slice of the 3-D space of
implementations, color-coded by performance in
Mflop/s. 16 registers, but the 2-by-3 tile size is
fastest.
A needle in a haystack.
10
(No Transcript)
11
A Sparse Matrix You Encounter Every Day
12
SpMV with Compressed Sparse Row (CSR) Storage
Matrix-vector multiply kernel: y(i) ← y(i) + A(i,j)·x(j)
for each row i
  for k = ptr[i] to ptr[i+1] - 1 do
    y[i] = y[i] + val[k] · x[ind[k]]
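For concreteness, the kernel above can be written as the following self-contained C routine; the array names (ptr, ind, val) follow the slide, while the tiny 3x3 test matrix in main() is purely an illustrative assumption.

#include <stdio.h>

/* y <- y + A*x with A in compressed sparse row (CSR) format:
   positions ptr[i] .. ptr[i+1]-1 of val[] and ind[] hold row i. */
void spmv_csr(int n, const int *ptr, const int *ind,
              const double *val, const double *x, double *y)
{
    for (int i = 0; i < n; i++) {
        double yi = y[i];
        for (int k = ptr[i]; k < ptr[i + 1]; k++)
            yi += val[k] * x[ind[k]];
        y[i] = yi;
    }
}

int main(void)
{
    /* Hypothetical 3x3 example: A = [1 0 2; 0 3 0; 4 0 5], x = ones. */
    int    ptr[] = {0, 2, 3, 5};
    int    ind[] = {0, 2, 1, 0, 2};
    double val[] = {1, 2, 3, 4, 5};
    double x[]   = {1, 1, 1}, y[] = {0, 0, 0};

    spmv_csr(3, ptr, ind, val, x, y);
    printf("y = %g %g %g\n", y[0], y[1], y[2]);   /* prints 3 3 9 */
    return 0;
}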
13
Motivation for Automatic Performance Tuning of
SpMV
  • Historical trends
  • Sparse matrix-vector multiply (SpMV): 10% of peak
    or less
  • 2x faster than CSR with hand-tuning
  • Tuning becoming more difficult over time
  • Performance depends on machine, kernel, matrix
  • Matrix known only at run-time
  • Best data structure and implementation can be
    surprising
  • Our approach: empirical modeling and search
  • Up to 4x speedups and 31% of peak for SpMV
  • Many optimization techniques for SpMV
  • Several other kernels: triangular solve, A^TAx,
    A^kx
  • Release OSKI library, integrate into PETSc

14
SpMV Historical Trends Fraction of Peak
15
Example: The Difficulty of Tuning
  • n = 21216
  • nnz = 1.5 M
  • kernel: SpMV
  • Source: NASA structural analysis problem
  • 8x8 dense substructure

16
Taking advantage of block structure in SpMV
  • Bottleneck is the time to get the matrix from memory
  • Only 2 flops for each nonzero in the matrix
  • Don't store each nonzero with its index; instead
    store each nonzero r-by-c block with one index
  • Storage drops by up to 2x if r·c >> 1 (all 32-bit
    quantities)
  • Time to fetch the matrix from memory decreases
  • Change both the data structure and the algorithm
    (a 2x2 sketch follows below)
  • Need to pick r and c
  • Need to change the algorithm accordingly
  • In the example, is r = c = 8 the best choice?
  • Minimizes storage, so it looks like a good idea
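As a minimal sketch of what the changed algorithm looks like, here is the r = c = 2 case in C (the tuned kernels discussed in the talk are generated for many (r, c) choices and unrolled more aggressively; names below are illustrative):

/* SpMV with 2x2 register (BCSR) blocks: one column index per block,
   block values stored contiguously so the 2x2 multiply can be fully
   unrolled and kept in registers.  Assumes the dimension is a multiple
   of 2 and that any needed explicit zeros were filled in beforehand. */
void spmv_bcsr_2x2(int nb_rows,          /* number of block rows     */
                   const int *bptr,      /* block-row pointers       */
                   const int *bind,      /* block column indices     */
                   const double *bval,   /* 4 values per block       */
                   const double *x, double *y)
{
    for (int I = 0; I < nb_rows; I++) {
        double y0 = y[2*I], y1 = y[2*I + 1];
        for (int k = bptr[I]; k < bptr[I + 1]; k++) {
            const double *b = &bval[4*k];
            int j = 2 * bind[k];
            y0 += b[0]*x[j] + b[1]*x[j+1];   /* first row of the block  */
            y1 += b[2]*x[j] + b[3]*x[j+1];   /* second row of the block */
        }
        y[2*I] = y0;
        y[2*I + 1] = y1;
    }
}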

17
Speedups on Itanium 2: The Need for Search
(Plots of register-blocked SpMV performance, in Mflop/s.)
18
Register Profile: Itanium 2
(Heat map of Mflop/s over all r x c block sizes; labels range from 190
to 1190 Mflop/s.)
19
SpMV Performance (Matrix 2): Generation 2
(Register-profile plots; percentages are fraction of peak:
Ultra 2i - 9%, Ultra 3 - 5%, Pentium III-M - 15%, Pentium III - 19%.
Per-panel Mflop/s labels: 63, 109, 35, 53, 96, 120, 42, 58.)
20
Register Profiles: Sun and Intel x86
(Heat maps; percentages are fraction of peak:
Ultra 2i - 11%, Ultra 3 - 5%, Pentium III-M - 15%, Pentium III - 21%.
Per-panel Mflop/s labels: 72, 90, 35, 50, 108, 122, 42, 58.)
21
SpMV Performance (Matrix 2): Generation 1
(Percentages are fraction of peak:
Power3 - 13%, Power4 - 14%, Itanium 2 - 31%, Itanium 1 - 7%.
Per-panel labels: 195, 703, 100, 469, 225 Mflop/s, 1.1 Gflop/s, 103,
276 Mflop/s.)
22
Register Profiles: IBM and Intel IA-64
(Heat maps; percentages are fraction of peak:
Power3 - 17%, Power4 - 16%, Itanium 2 - 33%, Itanium 1 - 8%.
Per-panel labels: 252, 820, 122, 459, 247 Mflop/s, 1.2 Gflop/s, 107,
190 Mflop/s.)
23
Example: The Difficulty of Tuning
  • n = 21216
  • nnz = 1.5 M
  • kernel: SpMV
  • Source: NASA structural analysis problem

24
Zoom in to top corner
  • More complicated non-zero structure in general

25
3x3 blocks look natural, but
  • More complicated non-zero structure in general
  • Example 3x3 blocking
  • Logical grid of 3x3 cells
  • But would lead to lots of fill-in

26
Extra Work Can Improve Efficiency!
  • More complicated non-zero structure in general
  • Example: 3x3 blocking
  • Logical grid of 3x3 cells
  • Fill in explicit zeros
  • Unroll 3x3 block multiplies
  • Fill ratio = 1.5
  • On Pentium III: 1.5x speedup!
  • Actual Mflop/s rate is 1.5² = 2.25x higher

27
Automatic Register Block Size Selection
  • Selecting the r x c block size
  • Off-line benchmark
  • Precompute Mflops(r,c) using a dense matrix A, for each
    r x c
  • Once per machine/architecture
  • Run-time search
  • Sample A to estimate Fill(r,c) for each r x c
  • Run-time heuristic model
  • Choose r, c to maximize Mflops(r,c) / Fill(r,c)
    (a sketch of this heuristic follows below)
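A minimal sketch of that heuristic in C, assuming the off-line benchmark table is already available; the type and function names (csr_pattern_t, est_fill) are illustrative placeholders, not part of any released library:

#define RMAX 8
#define CMAX 8

typedef struct { int n; const int *ptr, *ind; } csr_pattern_t;

/* Assumed helper: estimate Fill(r,c) by sampling the matrix pattern
   (see the fill-estimation sketch on the next slide). */
double est_fill(const csr_pattern_t *A, int r, int c);

/* Pick (r, c) maximizing Mflops_dense(r,c) / Fill(r,c).  The table
   mflops_dense[][] comes from the once-per-machine off-line benchmark. */
void choose_block_size(const csr_pattern_t *A,
                       const double mflops_dense[RMAX][CMAX],
                       int *r_best, int *c_best)
{
    double best = 0.0;
    *r_best = *c_best = 1;
    for (int r = 1; r <= RMAX; r++)
        for (int c = 1; c <= CMAX; c++) {
            double score = mflops_dense[r-1][c-1] / est_fill(A, r, c);
            if (score > best) { best = score; *r_best = r; *c_best = c; }
        }
}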

28
Accurate and Efficient Adaptive Fill Estimation
  • Idea: sample the matrix (a sampling sketch follows below)
  • Fraction of matrix to sample: s ∈ [0, 1]
  • Cost: O(s · nnz)
  • Control cost by controlling s
  • Search at run-time: the constant matters!
  • Control s automatically by computing statistical
    confidence intervals
  • Idea: monitor the variance
  • Cost of tuning
  • Lower bound: convert the matrix, in 5 to 40 unblocked
    SpMVs
  • Heuristic: 1 to 11 SpMVs
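A standalone sketch of the sampling idea, assuming a square n x n CSR pattern; the adaptive part (growing s until the confidence interval is tight enough) is omitted, and the function name is illustrative:

#include <stdlib.h>

/* Estimate Fill(r, c): sample a fraction s of the block rows, count the
   distinct r x c blocks their nonzeros would occupy, and return
   (stored block entries) / (nonzeros seen). */
double estimate_fill(int n, const int *ptr, const int *ind,
                     int r, int c, double s)
{
    long nnz_seen = 0, block_entries = 0;
    int nbcols = (n + c - 1) / c;
    char *touched = calloc((size_t)nbcols, 1);

    for (int I = 0; I < (n + r - 1) / r; I++) {
        if ((double)rand() / RAND_MAX > s) continue;   /* not sampled */
        int rhi = (I + 1) * r < n ? (I + 1) * r : n;
        for (int i = I * r; i < rhi; i++)
            for (int k = ptr[i]; k < ptr[i + 1]; k++) {
                int J = ind[k] / c;
                nnz_seen++;
                if (!touched[J]) { touched[J] = 1; block_entries += r * c; }
            }
        for (int i = I * r; i < rhi; i++)              /* reset flags  */
            for (int k = ptr[i]; k < ptr[i + 1]; k++)
                touched[ind[k] / c] = 0;
    }
    free(touched);
    return nnz_seen ? (double)block_entries / nnz_seen : 1.0;
}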

29
Accuracy of the Tuning Heuristics (1/4)
See p. 375 of Vuduc's thesis for the matrices.
NOTE: 'Fair' flops used (ops on explicit zeros are
not counted as work).
30
Accuracy of the Tuning Heuristics (2/4)
31
Accuracy of the Tuning Heuristics (3/4)
32
Accuracy of the Tuning Heuristics (4/4)
DGEMV
33
Evaluating algorithms and machines for SpMV
  • Metrics
  • Speedups
  • Mflop/s (fair flops)
  • Fraction of peak
  • Questions
  • Speedups are good, but what is the best?
  • Independent of instruction scheduling, selection
  • Can SpMV be further improved or not?
  • What machines are good for SpMV?

34
Upper Bounds on Performance for Blocked SpMV
  • P = (flops) / (time)
  • Flops = 2 · nnz(A)
  • Lower bound on time: two main assumptions
  • 1. Count memory ops only (streaming)
  • 2. Count only compulsory and capacity misses; ignore
    conflicts
  • Account for line sizes
  • Account for matrix size and nnz
  • Charge the minimum access latency α_i at the L_i cache,
    α_mem at memory
  • e.g., Saavedra-Barrera and PMaC MAPS benchmarks
    (a schematic form of the bound is written out below)
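Written out, the lower-bound model described by these assumptions has roughly the following shape (a schematic rendering, not the exact formula from the thesis):

  T_lower = Σ_{i=1..κ} α_i · H_i  +  α_mem · M_κ
  P_upper = 2 · nnz(A) / T_lower

where H_i is the number of loads satisfied by the L_i cache, M_κ is the number of compulsory and capacity misses out of the last cache level (computed from the matrix dimensions, nnz, the block size, and the line sizes), and α_i, α_mem are the minimum access latencies charged per access.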

35
Example: L2 Misses on Itanium 2
Misses measured using PAPI [Browne '00].
36
Example Bounds on Itanium 2
37
Example Bounds on Itanium 2
38
Example Bounds on Itanium 2
39
Fraction of Upper Bound Across Platforms
40
Achieved Performance and Machine Balance
  • Machine balance [Callahan '88; McCalpin '95]
  • Balance = Peak Flop Rate / Bandwidth (flops /
    double)
  • Lower is better (i.e., one can hope to get a higher
    fraction of the peak flop rate)
  • Ideal balance for mat-vec: 2 flops / double
  • For SpMV, even less
    (a hypothetical worked example follows below)
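As a worked illustration with hypothetical round numbers (chosen for easy arithmetic, not taken from the talk): consider a machine with a 2 Gflop/s peak and 1 GB/s of sustained memory bandwidth.

  balance = 2000 Mflop/s / (1000 MB/s / 8 bytes per double)
          = 2000 / 125 = 16 flops per double streamed

Since mat-vec performs only about 2 flops per matrix double it streams from memory, SpMV on such a machine is capped near 2 x 125 = 250 Mflop/s, i.e. roughly 12.5% of peak, no matter how well the flops themselves are scheduled.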

41
(No Transcript)
42
Where Does the Time Go?
  • Most time assigned to memory
  • Caches disappear when line sizes are equal
  • Strictly increasing line sizes

43
Execution Time Breakdown Matrix 40
44
Execution Time Breakdown (PAPI) Matrix 40
45
Speedups with Increasing Line Size
46
Summary Performance Upper Bounds
  • What is the best we can do for SpMV?
  • Limits to low-level tuning of blocked
    implementations
  • Refinements?
  • What machines are good for SpMV?
  • Partial answer balance characterization
  • Architectural consequences?
  • Help to have strictly increasing line sizes

47
Summary of Other Performance Optimizations
  • Optimizations for SpMV
  • Register blocking (RB): up to 4x over CSR
  • Variable block splitting: 2.1x over CSR, 1.8x
    over RB
  • Diagonals: 2x over CSR
  • Reordering to create dense structure + splitting:
    2x over CSR
  • Symmetry: 2.8x over CSR, 2.6x over RB
  • Cache blocking: 2.8x over CSR
  • Multiple vectors (SpMM): 7x over CSR
  • And combinations ...
  • Sparse triangular solve
  • Hybrid sparse/dense data structure: 1.8x over CSR
  • Higher-level kernels
  • AA^Tx, A^TAx: 4x over CSR, 1.8x over RB
  • A^2x: 2x over CSR, 1.5x over RB

48
Example: Sparse Triangular Factor
  • Raefsky4 (structural problem) + SuperLU + colmmd
  • N = 19779, nnz = 12.6 M

49
Cache Optimizations for AA^Tx
  • Cache-level: interleave the multiplications by A and A^T
    (a CSR sketch follows below)
  • Register-level: take a_i^T to be an r-by-c block row, or a
    diagonal row
  • Algorithmic-level: transformations for A^2x, A^3x, ...
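The cache-level interleaving can be sketched in C for the A^TAx variant, which maps naturally onto CSR (the AA^Tx case is analogous with the roles of rows and columns exchanged). Each row a_i^T of A is read from memory once and used twice, for the dot product t = a_i^T x and for the update y += t · a_i, instead of making two full passes over A:

/* Accumulate y += A^T (A x), A in CSR.  Caller zeroes y beforehand. */
void atax_interleaved(int m, const int *ptr, const int *ind,
                      const double *val, const double *x, double *y)
{
    for (int i = 0; i < m; i++) {
        double t = 0.0;
        for (int k = ptr[i]; k < ptr[i + 1]; k++)   /* t = a_i . x  */
            t += val[k] * x[ind[k]];
        for (int k = ptr[i]; k < ptr[i + 1]; k++)   /* y += t * a_i */
            y[ind[k]] += val[k] * t;
    }
}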

50
Example: Combining Optimizations
  • Register blocking, symmetry, multiple (k) vectors
  • Three low-level tuning parameters: r, c, v
(Diagram: Y = A · X, with A stored in r x c register blocks using
symmetric storage, and v of the k vectors processed at a time.)
51
Example: Combining Optimizations
  • Register blocking, symmetry, and multiple vectors
    [Ben Lee @ UCB]
  • Symmetric, blocked, 1 vector
  • Up to 2.6x over nonsymmetric, blocked, 1 vector
  • Symmetric, blocked, k vectors
  • Up to 2.1x over nonsymmetric, blocked, k vectors
  • Up to 7.3x over nonsymmetric, nonblocked, 1
    vector
  • Symmetric storage: 64.7% savings

52
Potential Impact on Applications: T3P
  • Application: accelerator design [Ko]
  • 80% of time spent in SpMV
  • Relevant optimization techniques
  • Symmetric storage
  • Register blocking
  • On a single processor (Itanium 2)
  • 1.68x speedup
  • 532 Mflop/s, or 15% of the 3.6 Gflop/s peak
  • 4.4x speedup with multiple (8) vectors
  • 1380 Mflop/s, or 38% of peak

53
Potential Impact on Applications: Omega3P
  • Application: accelerator cavity design [Ko]
  • Relevant optimization techniques
  • Symmetric storage
  • Register blocking
  • Reordering
  • Reverse Cuthill-McKee ordering to reduce
    bandwidth
  • Traveling Salesman Problem-based ordering to
    create blocks
  • Nodes = columns of A
  • Weights(u, v) = no. of nonzeros u and v have in common
  • Tour = ordering of columns
  • Choose the maximum-weight tour
  • See [Pinar & Heath '97]
  • 2.1x speedup on Power4, but SpMV is not dominant

54
Source: Accelerator Cavity Design Problem (Ko, via
Husbands)
55
100x100 Submatrix Along Diagonal
56
Post-RCM Reordering
57
Microscopic Effect of RCM Reordering
Before: Green + Red.  After: Green + Blue.
58
Microscopic Effect of Combined RCM + TSP
Reordering
Before: Green + Red.  After: Green + Blue.
59
(Omega3P)
60
Optimized Sparse Kernel Interface - OSKI
  • Provides sparse kernels automatically tuned for the
    user's matrix and machine
  • BLAS-style functionality: SpMV (Ax and A^Ty), TrSV
  • Hides the complexity of run-time tuning
  • Includes new, faster locality-aware kernels:
    A^TAx, A^kx
  • Faster than standard implementations
  • Up to 4x faster matvec, 1.8x trisolve, 4x A^TAx
  • For advanced users and solver library writers
  • Available as a stand-alone library (end of Oct '04)
  • Available as a PETSc extension (Dec '04)

61
How OSKI Tunes (Overview)
(Diagram)
Library install-time (offline): 1. Build for the target
architecture; 2. Benchmark. This produces benchmark data and
the generated code variants.
Application run-time: inputs are the user's matrix and a
workload from program monitoring and history.
1. Evaluate heuristic models; 2. Select data structure and
code. Returned to the user: a matrix handle for kernel calls.
Extensibility: advanced users may write and dynamically add
code variants and heuristic models to the system.
62
How OSKI Tunes (Overview)
  • At library build/install-time
  • Pre-generate and compile code variants into
    dynamic libraries
  • Collect benchmark data
  • Measures and records the speed of the possible sparse
    data structure and code variants on the target
    architecture
  • Installation process uses standard, portable GNU
    AutoTools
  • At run-time
  • Library tunes using heuristic models
  • Models analyze the user's matrix and the benchmark data
    to choose an optimized data structure and code
  • Non-trivial tuning cost: up to 40 mat-vecs
  • Library limits the time it spends tuning, based on the
    estimated workload
  • provided by the user or inferred by the library
  • User may reduce cost by saving tuning results for the
    application for future runs with the same or a similar
    matrix (a usage sketch follows below)
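For orientation, a sketch of that run-time workflow in C, following the conventions of the OSKI User's Guide: wrap existing CSR arrays in a tunable handle, state the expected workload as a hint, tune, then call the tuned kernel. The function and constant names below are taken from the guide, but exact signatures should be checked against the installed release, and the workload number (500 SpMVs) is just an assumed figure.

#include <oski/oski.h>

void solve_with_oski(int *ptr, int *ind, double *val, int m, int n,
                     double *x, double *y)
{
    oski_Init();

    /* Wrap the caller's CSR arrays without copying them. */
    oski_matrix_t A = oski_CreateMatCSR(ptr, ind, val, m, n,
                                        SHARE_INPUTMAT,
                                        1, INDEX_ZERO_BASED);
    oski_vecview_t xv = oski_CreateVecView(x, n, STRIDE_UNIT);
    oski_vecview_t yv = oski_CreateVecView(y, m, STRIDE_UNIT);

    /* Hint: expect roughly 500 SpMVs with this matrix, then tune. */
    oski_SetHintMatMult(A, OP_NORMAL, 1.0, xv, 0.0, yv, 500);
    oski_TuneMat(A);

    /* y <- A*x using whatever data structure tuning selected. */
    oski_MatMult(A, OP_NORMAL, 1.0, xv, 0.0, yv);

    oski_DestroyMat(A);
    oski_DestroyVecView(xv);
    oski_DestroyVecView(yv);
    oski_Close();
}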

63
Optimizations in the Initial OSKI Release
  • Fully automatic heuristics for
  • Sparse matrix-vector multiply
  • Register-level blocking
  • Register-level blocking + symmetry + multiple
    vectors
  • Cache-level blocking
  • Sparse triangular solve with register-level
    blocking and the switch-to-dense optimization
  • Sparse A^TAx with register-level blocking
  • User may select other optimizations manually
  • Diagonal storage optimizations, reordering,
    splitting; tiled matrix powers kernel (A^kx)
  • All available in dynamic libraries
  • Accessible via a high-level embedded scripting
    language
  • Plug-in extensibility
  • Very advanced users may write their own
    heuristics, create new data structures/code
    variants and dynamically add them to the system

64
Statistical Models for Automatic Tuning
  • Idea 1: statistical criterion for stopping a
    search
  • A general search model
  • Generate implementation
  • Measure performance
  • Repeat
  • Stop when the probability of being within ε of
    optimal falls below a threshold
  • Can estimate the distribution on-line
  • Idea 2: statistical performance models
  • Problem: choose 1 among m implementations at
    run-time
  • Sample performance off-line, build a statistical
    model

65
Example: Select a Matmul Implementation
66
Example: Support Vector Classification
67
A Sparse Matrix You Encounter Every Day
Who am I?
I am a big repository of useful and useless facts
alike. Who am I? (Hint: not your e-mail inbox.)
68
What about the Google Matrix?
  • Google approach
  • Approx. once a month: rank all pages using the
    connectivity structure
  • Find the dominant eigenvector of a matrix
  • At query time, return a list of pages ordered by
    rank
  • Matrix: A = aG + (1-a)(1/n)uu^T
  • Markov model: a surfer follows a link with
    probability a, and jumps to a random page with
    probability 1-a
  • G is an n x n connectivity matrix, n in the billions
  • g_ij is non-zero if page i links to page j
  • Normalized so each column sums to 1
  • Very sparse: about 7-8 non-zeros per row (power-
    law distribution)
  • u is a vector of all 1 values
  • Steady-state probability x_i of landing on page i
    is the solution to x = Ax
  • Approximate x by the power method: x ≈ A^k x(0)
  • In practice, k ≈ 25 (one step is sketched below)
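A sketch of one power-method step in C, keeping only G explicit and applying the rank-one part (1-a)(1/n)uu^T as a scalar correction, so each step costs one SpMV plus O(n) vector work. G is assumed to be stored in CSR by rows here for simplicity, and details such as dangling pages and renormalization are omitted:

void google_step(int n, const int *ptr, const int *ind, const double *val,
                 double a, const double *x, double *x_next)
{
    double s = 0.0;
    for (int j = 0; j < n; j++)               /* u^T x, u = all ones */
        s += x[j];

    for (int i = 0; i < n; i++) {
        double gx = 0.0;
        for (int k = ptr[i]; k < ptr[i + 1]; k++)
            gx += val[k] * x[ind[k]];         /* (G x)_i             */
        x_next[i] = a * gx + (1.0 - a) * s / n;   /* aGx + (1-a)(1/n)u(u^T x) */
    }
}
/* Typical use: iterate roughly 25 times, normalizing x_next each step. */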

69
Extra Slides
70
Current Work
  • Public software release
  • Impact on library designs: Sparse BLAS, Trilinos,
    PETSc, ...
  • Integration in large-scale applications
  • DOE: accelerator design; plasma physics
  • Geophysical simulation based on Block Lanczos
    (A^TAx; LBL)
  • Systematic heuristics for data structure
    selection?
  • Evaluation of emerging architectures
  • Revisiting vector micros
  • Other sparse kernels
  • Matrix triple products, A^kx
  • Parallelism
  • Sparse benchmarks (with UTK) [Gahvari, Hoemmen]
  • Automatic tuning of MPI collective ops [Nishtala,
    et al.]

71
Summary of High-Level Themes
  • Kernel-centric optimization
  • Vs. basic block, trace, path optimization, for
    instance
  • Aggressive use of domain-specific knowledge
  • Performance bounds modeling
  • Evaluating software quality
  • Architectural characterizations and consequences
  • Empirical search
  • Hybrid on-line/run-time models
  • Statistical performance models
  • Exploit information from sampling, measuring

72
Related Work
  • My bibliography: 337 entries so far
  • Sample area 1: code generation
  • Generative and generic programming
  • Sparse compilers
  • Domain-specific generators
  • Sample area 2: empirical search-based tuning
  • Kernel-centric
  • linear algebra, signal processing, sorting, MPI, ...
  • Compiler-centric
  • profiling + FDO, iterative compilation,
    superoptimizers, self-tuning compilers,
    continuous program optimization

73
Future Directions (A Bag of Flaky Ideas)
  • Composable code generators and search spaces
  • New application domains
  • PageRank: multilevel block algorithms for
    topic-sensitive search?
  • New kernels: cryptokernels
  • rich mathematical structure germane to
    performance; lots of hardware
  • New tuning environments
  • Parallel, Grid, whole systems
  • Statistical models of application performance
  • Statistical learning of concise parametric models
    from traces, for architectural evaluation
  • Compiler/automatic derivation of parametric models

74
Possible Future Work
  • Different Architectures
  • New FP instruction sets (SSE2)
  • SMP / multicore platforms
  • Vector architectures
  • Different Kernels
  • Higher Level Algorithms
  • Parallel versions of kernels, with optimized
    communication
  • Block algorithms (e.g., Lanczos)
  • XBLAS (extra precision)
  • Produce Benchmarks
  • Augment the HPCC Benchmark
  • Make it possible to combine optimizations easily
    for any kernel
  • Related tuning activities (LAPACK and ScaLAPACK)

75
Review of Tuning by Illustration
  • (Extra Slides)

76
Splitting for Variable Blocks and Diagonals
  • Decompose A = A1 + A2 + ... + At
  • Detect canonical structures (sampling)
  • Split
  • Tune each Ai
  • Improve performance and save storage
  • New data structures
  • Unaligned block CSR
  • Relax alignment in rows and columns
  • Row-segmented diagonals

77
Example: Variable Block Row (Matrix 12)
2.1x over CSR, 1.8x over RB
78
Example Row-Segmented Diagonals
2x over CSR
79
Mixed Diagonal and Block Structure
80
Summary
  • Automated block size selection
  • Empirical modeling and search
  • Register blocking for SpMV, triangular solve,
    ATAx
  • Not fully automated
  • Given a matrix, select splittings and
    transformations
  • Lots of combinatorial problems
  • TSP reordering to create dense blocks (Pinar '97;
    Moon, et al. '04)

81
Extra Slides
82
A Sparse Matrix You Encounter Every Day
Who am I?
I am a Big Repository Of useful And useless Facts
alike. Who am I? (Hint Not your e-mail inbox.)
83
Problem Context
  • Sparse kernels abound
  • Models of buildings, cars, bridges, economies,
  • Google PageRank algorithm
  • Historical trends
  • Sparse matrix-vector multiply (SpMV): 10% of peak
  • 2x faster with hand-tuning
  • Tuning becoming more difficult over time
  • Promise of automatic tuning: PHiPAC/ATLAS, FFTW, ...
  • Challenges to high-performance
  • Not dense linear algebra!
  • Complex data structures indirect, irregular
    memory access
  • Performance depends strongly on run-time inputs

84
Key Questions, Ideas, Conclusions
  • How to tune basic sparse kernels automatically?
  • Empirical modeling and search
  • Up to 4x speedups for SpMV
  • 1.8x for triangular solve
  • 4x for A^TAx; 2x for A^2x
  • 7x for multiple vectors
  • What are the fundamental limits on performance?
  • Kernel-, machine-, and matrix-specific upper
    bounds
  • Achieve 75% or more of the bound for SpMV, limiting
    further low-level tuning
  • Consequences for architecture?
  • General techniques for empirical search-based
    tuning?
  • Statistical models of performance

85
Road Map
  • Sparse matrix-vector multiply (SpMV) in a
    nutshell
  • Historical trends and the need for search
  • Automatic tuning techniques
  • Upper bounds on performance
  • Statistical models of performance

86
Compressed Sparse Row (CSR) Storage
Matrix-vector multiply kernel: y(i) ← y(i) + A(i,j)·x(j)
for each row i
  for k = ptr[i] to ptr[i+1] - 1 do
    y[i] = y[i] + val[k] · x[ind[k]]
87
Road Map
  • Sparse matrix-vector multiply (SpMV) in a
    nutshell
  • Historical trends and the need for search
  • Automatic tuning techniques
  • Upper bounds on performance
  • Statistical models of performance

88
Historical Trends in SpMV Performance
  • The data
  • Uniprocessor SpMV performance since 1987
  • Untuned and tuned implementations
  • Cache-based superscalar micros; some vector machines
  • LINPACK

89
SpMV Historical Trends Mflop/s
90
Example The Difficulty of Tuning
  • n 21216
  • nnz 1.5 M
  • kernel SpMV
  • Source NASA structural analysis problem

91
Still More Surprises
  • More complicated non-zero structure in general

92
Still More Surprises
  • More complicated non-zero structure in general
  • Example 3x3 blocking
  • Logical grid of 3x3 cells

93
Historical Trends: Mixed News
  • Observations
  • Good news: Moore's-law-like behavior
  • Bad news: untuned is 10% of peak or less, and
    worsening
  • Good news: tuned is roughly 2x better today, and
    improving
  • Bad news: tuning is complex
  • (Not really news: SpMV is not LINPACK)
  • Questions
  • Application: automatic tuning?
  • Architect: what machines are good for SpMV?

94
Road Map
  • Sparse matrix-vector multiply (SpMV) in a
    nutshell
  • Historical trends and the need for search
  • Automatic tuning techniques
  • SpMV [SC'02; IJHPCA '04b]
  • Sparse triangular solve (SpTS) [ICS/POHLL '02]
  • A^TAx [ICCS/WoPLA '03]
  • Upper bounds on performance
  • Statistical models of performance

95
SPARSITY: Framework for Tuning SpMV
  • SPARSITY: automatic tuning for SpMV [Im & Yelick
    '99]
  • General approach
  • Identify and generate the implementation space
  • Search the space using empirical models and experiments
  • Prototype library and heuristic for choosing the
    register block size
  • Also cache-level blocking, multiple vectors
  • What's new?
  • New block size selection heuristic
  • Within 10% of optimal; replaces the previous version
  • Expanded implementation space
  • Variable block splitting, diagonals, combinations
  • New kernels: sparse triangular solve, A^TAx, A^rx

96
Automatic Register Block Size Selection
  • Selecting the r x c block size
  • Off-line benchmark: characterize the machine
  • Precompute Mflops(r,c) using a dense matrix, for
    each r x c
  • Once per machine/architecture
  • Run-time search: characterize the matrix
  • Sample A to estimate Fill(r,c) for each r x c
  • Run-time heuristic model
  • Choose r, c to maximize Mflops(r,c) / Fill(r,c)
  • Run-time costs
  • Up to 40 SpMVs (empirical worst case)

97
Accuracy of the Tuning Heuristics (1/4)
DGEMV
NOTE: 'Fair' flops used (ops on explicit zeros are
not counted as work).
98
Accuracy of the Tuning Heuristics (2/4)
DGEMV
99
Accuracy of the Tuning Heuristics (3/4)
DGEMV
100
Accuracy of the Tuning Heuristics (4/4)
DGEMV
101
Road Map
  • Sparse matrix-vector multiply (SpMV) in a
    nutshell
  • Historical trends and the need for search
  • Automatic tuning techniques
  • Upper bounds on performance
  • SC02
  • Statistical models of performance

102
Motivation for Upper Bounds Model
  • Questions
  • Speedups are good, but what is the speed limit?
  • Independent of instruction scheduling, selection
  • What machines are good for SpMV?

103
Upper Bounds on Performance: Blocked SpMV
  • P = (flops) / (time)
  • Flops = 2 · nnz(A)
  • Lower bound on time: two main assumptions
  • 1. Count memory ops only (streaming)
  • 2. Count only compulsory and capacity misses; ignore
    conflicts
  • Account for line sizes
  • Account for matrix size and nnz
  • Charge the minimum access latency α_i at the L_i cache,
    α_mem at memory
  • e.g., Saavedra-Barrera and PMaC MAPS benchmarks

104
Example Bounds on Itanium 2
105
Example Bounds on Itanium 2
106
Example Bounds on Itanium 2
107
Fraction of Upper Bound Across Platforms
108
Achieved Performance and Machine Balance
  • Machine balance [Callahan '88; McCalpin '95]
  • Balance = Peak Flop Rate / Bandwidth (flops /
    double)
  • Ideal balance for mat-vec: 2 flops / double
  • For SpMV, even less
  • SpMV is essentially streaming
  • Model bandwidth ≈ 1 / (avg load time to stream 1 array)
  • Sustained balance = peak flops / model bandwidth

109
(No Transcript)
110
Where Does the Time Go?
  • Most time assigned to memory
  • Caches disappear when line sizes are equal
  • Strictly increasing line sizes

111
Execution Time Breakdown Matrix 40
112
Speedups with Increasing Line Size
113
Summary Performance Upper Bounds
  • What is the best we can do for SpMV?
  • Limits to low-level tuning of blocked
    implementations
  • Refinements?
  • What machines are good for SpMV?
  • Partial answer balance characterization
  • Architectural consequences?
  • Example Strictly increasing line sizes

114
Road Map
  • Sparse matrix-vector multiply (SpMV) in a
    nutshell
  • Historical trends and the need for search
  • Automatic tuning techniques
  • Upper bounds on performance
  • Tuning other sparse kernels
  • Statistical models of performance
  • FDO 00 IJHPCA 04a

115
Statistical Models for Automatic Tuning
  • Idea 1: statistical criterion for stopping a
    search
  • A general search model
  • Generate implementation
  • Measure performance
  • Repeat
  • Stop when the probability of being within ε of
    optimal falls below a threshold
  • Can estimate the distribution on-line
  • Idea 2: statistical performance models
  • Problem: choose 1 among m implementations at
    run-time
  • Sample performance off-line, build a statistical
    model

116
Example Select a Matmul Implementation
117
Example Support Vector Classification
118
Road Map
  • Sparse matrix-vector multiply (SpMV) in a
    nutshell
  • Historical trends and the need for search
  • Automatic tuning techniques
  • Upper bounds on performance
  • Tuning other sparse kernels
  • Statistical models of performance
  • Summary and Future Work

119
Summary of High-Level Themes
  • Kernel-centric optimization
  • Vs. basic block, trace, path optimization, for
    instance
  • Aggressive use of domain-specific knowledge
  • Performance bounds modeling
  • Evaluating software quality
  • Architectural characterizations and consequences
  • Empirical search
  • Hybrid on-line/run-time models
  • Statistical performance models
  • Exploit information from sampling, measuring

120
Related Work
  • My bibliography 337 entries so far
  • Sample area 1 Code generation
  • Generative generic programming
  • Sparse compilers
  • Domain-specific generators
  • Sample area 2 Empirical search-based tuning
  • Kernel-centric
  • linear algebra, signal processing, sorting, MPI,
  • Compiler-centric
  • profiling FDO, iterative compilation,
    superoptimizers, self-tuning compilers,
    continuous program optimization

121
Future Directions (A Bag of Flaky Ideas)
  • Composable code generators and search spaces
  • New application domains
  • PageRank multilevel block algorithms for
    topic-sensitive search?
  • New kernels cryptokernels
  • rich mathematical structure germane to
    performance lots of hardware
  • New tuning environments
  • Parallel, Grid, whole systems
  • Statistical models of application performance
  • Statistical learning of concise parametric models
    from traces for architectural evaluation
  • Compiler/automatic derivation of parametric models

122
Acknowledgements
  • Super-advisors: Jim and Kathy
  • Undergraduate R.A.s: Attila, Ben, Jen, Jin,
    Michael, Rajesh, Shoaib, Sriram, Tuyet-Linh
  • See pages xvi-xvii of the dissertation.

123
TSP-based Reordering: Before
(Pinar '97; Moon, et al. '04)
124
TSP-based Reordering: After
(Pinar '97; Moon, et al. '04)  Up to
2x speedups over CSR
125
Example: L2 Misses on Itanium 2
Misses measured using PAPI [Browne '00].
126
Example Distribution of Blocked Non-Zeros
127
Register Profile: Itanium 2
(Heat map of Mflop/s over all r x c block sizes; labels range from 190
to 1190 Mflop/s.)
128
Register Profiles: Sun and Intel x86
(Heat maps; percentages are fraction of peak:
Ultra 2i - 11%, Ultra 3 - 5%, Pentium III-M - 15%, Pentium III - 21%.
Per-panel Mflop/s labels: 72, 90, 35, 50, 108, 122, 42, 58.)
129
Register Profiles: IBM and Intel IA-64
(Heat maps; percentages are fraction of peak:
Power3 - 17%, Power4 - 16%, Itanium 2 - 33%, Itanium 1 - 8%.
Per-panel labels: 252, 820, 122, 459, 247 Mflop/s, 1.2 Gflop/s, 107,
190 Mflop/s.)
130
Accurate and Efficient Adaptive Fill Estimation
  • Idea: sample the matrix
  • Fraction of matrix to sample: s ∈ [0, 1]
  • Cost: O(s · nnz)
  • Control cost by controlling s
  • Search at run-time: the constant matters!
  • Control s automatically by computing statistical
    confidence intervals
  • Idea: monitor the variance
  • Cost of tuning
  • Lower bound: convert the matrix, in 5 to 40 unblocked
    SpMVs
  • Heuristic: 1 to 11 SpMVs

131
Sparse/Dense Partitioning for SpTS
  • Partition L into sparse parts (L1, L2) and a dense part LD
  • Perform SpTS in three steps
  • SPARSITY optimizations for (1)-(2); DTRSV for (3)
  • Tuning parameters: block size, size of the dense
    triangle

132
SpTS Performance Power3
133
(No Transcript)
134
Summary of SpTS and AA^Tx Results
  • SpTS: similar to SpMV
  • 1.8x speedups; limited benefit from low-level
    tuning
  • AA^Tx, A^TAx
  • Cache interleaving only: up to 1.6x speedups
  • Register + cache: up to 4x speedups
  • 1.8x speedup over register only
  • Similar heuristic, same accuracy (within 10% of optimal)
  • Further from upper bounds: 60-80%
  • Opportunity for better low-level tuning a la
    PHiPAC/ATLAS
  • Matrix triple products? A^kx?
  • Preliminary work

135
Register Blocking Speedup
136
Register Blocking Performance
137
Register Blocking Fraction of Peak
138
Example Confidence Interval Estimation
139
Costs of Tuning
140
Splitting + UBCSR: Pentium III
141
Splitting + UBCSR: Power4
142
Splitting + UBCSR Storage: Power4
143
(No Transcript)
144
Example Variable Block Row (Matrix 13)
145
Dense Tuning is Hard, Too
  • Even dense matrix multiply can be notoriously
    difficult to tune

146
Dense matrix multiply: surprising performance as the
register tile size varies.
147
(No Transcript)
148
Preliminary Results (Matrix Set 2): Itanium 2
(Matrix classes shown: Dense, FEM, FEM (var), Bio, LP, Econ, Stat.)
149
Multiple Vector Performance
150
(No Transcript)
151
What about the Google Matrix?
  • Google approach
  • Approx. once a month rank all pages using
    connectivity structure
  • Find dominant eigenvector of a matrix
  • At query-time return list of pages ordered by
    rank
  • Matrix A aG (1-a)(1/n)uuT
  • Markov model Surfer follows link with
    probability a, jumps to a random page with
    probability 1-a
  • G is n x n connectivity matrix n 3 billion
  • gij is non-zero if page i links to page j
  • Normalized so each column sums to 1
  • Very sparse about 78 non-zeros per row (power
    law dist.)
  • u is a vector of all 1 values
  • Steady-state probability xi of landing on page i
    is solution to x Ax
  • Approximate x by power method x Akx0
  • In practice, k 25

152
(No Transcript)
153
MAPS Benchmark Example Power4
154
MAPS Benchmark Example Itanium 2
155
Saavedra-Barrera Example Ultra 2i
156
(No Transcript)
157
Summary of Results Pentium III
158
Summary of Results Pentium III (3/3)
159
Execution Time Breakdown (PAPI) Matrix 40
160
Preliminary Results (Matrix Set 1): Itanium 2
(Matrix classes shown: LP, FEM, FEM (var), Assorted, Dense.)
161
Tuning Sparse Triangular Solve (SpTS)
  • Compute x = L^(-1)·b, where L is sparse lower triangular
    and x, b are dense
  • L from sparse LU has rich dense substructure
  • The dense trailing triangle can account for 20-90% of the
    matrix non-zeros
  • SpTS optimizations
  • Split into a sparse trapezoid and a dense trailing
    triangle
  • Use tuned dense BLAS (DTRSV) on the dense triangle
  • Use SPARSITY register blocking on the sparse part
  • Tuning parameters
  • Size of the dense trailing triangle
  • Register block size (a partitioned-solve sketch follows
    below)
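A minimal sketch of the partitioned solve in C, assuming L is split as [ L1 0 ; L2 LD ] with L1 and L2 sparse (CSR, diagonal stored last in each row of L1) and LD the dense trailing triangle of order nd, stored column-major so the BLAS routine DTRSV can finish the solve; all names and storage conventions here are illustrative:

#include <cblas.h>

void spts_partitioned(int ns, int nd,
                      const int *ptr1, const int *ind1, const double *val1,
                      const int *ptr2, const int *ind2, const double *val2,
                      const double *LD, const double *b, double *x)
{
    /* (1) Sparse solve: L1 * x[0..ns) = b[0..ns). */
    for (int i = 0; i < ns; i++) {
        double t = b[i];
        int kdiag = ptr1[i + 1] - 1;            /* diagonal stored last */
        for (int k = ptr1[i]; k < kdiag; k++)
            t -= val1[k] * x[ind1[k]];
        x[i] = t / val1[kdiag];
    }
    /* (2) SpMV update: x[ns..ns+nd) = b[ns..) - L2 * x[0..ns). */
    for (int i = 0; i < nd; i++) {
        double t = b[ns + i];
        for (int k = ptr2[i]; k < ptr2[i + 1]; k++)
            t -= val2[k] * x[ind2[k]];
        x[ns + i] = t;
    }
    /* (3) Dense solve on the trailing triangle via tuned BLAS. */
    cblas_dtrsv(CblasColMajor, CblasLower, CblasNoTrans, CblasNonUnit,
                nd, LD, nd, x + ns, 1);
}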

162
Sparse Kernels and Optimizations
  • Kernels
  • Sparse matrix-vector multiply (SpMV): y = Ax
  • Sparse triangular solve (SpTS): x = T^(-1)b
  • y = AA^Tx, y = A^TAx
  • Powers (y = A^kx), sparse triple product (RAR^T), ...
  • Optimization techniques (implementation space)
  • Register blocking
  • Cache blocking
  • Multiple dense vectors (x)
  • A has special structure (e.g., symmetric, banded,
    ...)
  • Hybrid data structures (e.g., splitting,
    switch-to-dense, ...)
  • Matrix reordering
  • How and when do we search?
  • Off-line: benchmark implementations
  • Run-time: estimate matrix properties, evaluate
    performance models based on benchmark data

163
Cache Blocked SpMV on LSI Matrix Ultra 2i
A is 10k x 255k with 3.7M non-zeros. Baseline: 16
Mflop/s. Best block size and performance: 16k x
64k, 28 Mflop/s.
164
Cache Blocking on LSI Matrix Pentium 4
A is 10k x 255k with 3.7M non-zeros. Baseline: 44
Mflop/s. Best block size and performance: 16k x
16k, 210 Mflop/s.
165
Cache Blocked SpMV on LSI Matrix Itanium
A is 10k x 255k with 3.7M non-zeros. Baseline: 25
Mflop/s. Best block size and performance: 16k x
32k, 72 Mflop/s.
166
Cache Blocked SpMV on LSI Matrix Itanium 2
A is 10k x 255k with 3.7M non-zeros. Baseline: 170
Mflop/s. Best block size and performance: 16k x
65k, 275 Mflop/s.
167
Inter-Iteration Sparse Tiling (1/3)
  • [Strout, et al., '01]
  • Let A be 6x6 tridiagonal
  • Consider y = A^2x
  • t = Ax, y = At
  • Nodes: vector elements
  • Edges: matrix elements a_ij

168
Inter-Iteration Sparse Tiling (2/3)
  • [Strout, et al., '01]
  • Let A be 6x6 tridiagonal
  • Consider y = A^2x
  • t = Ax, y = At
  • Nodes: vector elements
  • Edges: matrix elements a_ij
  • Orange: everything needed to compute y1
  • Reuse a11, a12

169
Inter-Iteration Sparse Tiling (3/3)
  • [Strout, et al., '01]
  • Let A be 6x6 tridiagonal
  • Consider y = A^2x
  • t = Ax, y = At
  • Nodes: vector elements
  • Edges: matrix elements a_ij
  • Orange: everything needed to compute y1
  • Reuse a11, a12
  • Grey: y2, y3
  • Reuse a23, a33, a43

170
Inter-Iteration Sparse Tiling Issues
  • Tile sizes (colored regions) grow with the number of
    iterations and increasing out-degree
  • G is likely to have a few nodes with high out-degree
    (e.g., Yahoo)
  • Mathematical tricks to limit tile size?
  • Judicious dropping of edges [Ng '01]

171
Summary and Questions
  • Need to understand matrix structure and machine
  • BeBOP: suite of techniques to deal with different
    sparse structures and architectures
  • Google matrix problem
  • Established techniques within an iteration
  • Ideas for inter-iteration optimizations
  • Mathematical structure of the problem may help
  • Questions
  • Structure of G?
  • What are the computational bottlenecks?
  • Enabling future computations?
  • E.g., topic-sensitive PageRank → multiple-vector
    version [Haveliwala '02]
  • See www.cs.berkeley.edu/~richie/bebop/intel/google
    for more info, including more complete Itanium 2
    results.

172
Exploiting Matrix Structure
  • Symmetry (numerical or structural)
  • Reuse matrix entries (a sketch follows below)
  • Can combine with register blocking, multiple
    vectors, ...
  • Matrix splitting
  • Split the matrix, e.g., into r x c and 1 x 1 parts
  • No fill overhead
  • Large matrices with random structure
  • E.g., Latent Semantic Indexing (LSI) matrices
  • Technique: cache blocking
  • Store the matrix as 2^i x 2^j sparse submatrices
  • Effective when the x vector is large
  • Currently, search to find the fastest size
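A minimal sketch of the symmetric case in C: only the lower triangle (including the diagonal) is stored in CSR, and each stored off-diagonal entry a_ij updates both y_i (using x_j) and y_j (using x_i), so the matrix is read once but does the work of the full A:

void spmv_sym_lower(int n, const int *ptr, const int *ind,
                    const double *val, const double *x, double *y)
{
    for (int i = 0; i < n; i++) {
        double yi = 0.0, xi = x[i];
        for (int k = ptr[i]; k < ptr[i + 1]; k++) {
            int j = ind[k];
            double a = val[k];
            yi += a * x[j];
            if (j != i)
                y[j] += a * xi;   /* mirror the off-diagonal entry */
        }
        y[i] += yi;
    }
}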

173
Symmetric SpMV Performance Pentium 4
174
SpMV with Split Matrices Ultra 2i
175
Cache Blocking on Random Matrices Itanium
Speedup on four banded random matrices.
176
Sparse Kernels and Optimizations
  • Kernels
  • Sparse matrix-vector multiply (SpMV): y = Ax
  • Sparse triangular solve (SpTS): x = T^(-1)b
  • y = AA^Tx, y = A^TAx
  • Powers (y = A^kx), sparse triple product (RAR^T), ...
  • Optimization techniques (implementation space)
  • Register blocking
  • Cache blocking
  • Multiple dense vectors (x)
  • A has special structure (e.g., symmetric, banded,
    )
  • Hybrid data structures (e.g., splitting,
    switch-to-dense, )
  • Matrix reordering
  • How and when do we search?
  • Off-line Benchmark implementations
  • Run-time Estimate matrix properties, evaluate
    performance models based on benchmark data

177
Register Blocked SpMV Pentium III
178
Register Blocked SpMV Ultra 2i
179
Register Blocked SpMV Power3
180
Register Blocked SpMV Itanium
181
Possible Optimization Techniques
  • Within an iteration, i.e., computing (G + uu^T)x
    once
  • Cache block G·x
  • On linear programming matrices and matrices with
    random structure (e.g., LSI): 1.5x to 4x speedups
  • Best block size is matrix- and machine-dependent
  • Reordering and/or splitting of G to separate
    dense structure (rows, columns, blocks)
  • Between iterations, e.g., (G + uu^T)^2 x
  • (G + uu^T)^2 x = G^2x + (Gu)(u^Tx) + u(u^TGx) + u(u^Tu)(u^Tx)
  • Compute Gu, u^TG, and u^Tu once for all iterations
  • G^2x: inter-iteration tiling to read G only once
    (a sketch of the squared step follows below)
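A sketch of the squared step in C under the slide's simplified notation (the a, 1-a, and 1/n weights of the real Google matrix are omitted, and gemv_G is an assumed helper that applies the sparse G): after precomputing the vector Gu, the vector u^TG, and the scalar u^Tu once, each step needs only two multiplies by G plus O(n) vector work.

void squared_step(int n,
                  void (*gemv_G)(const double *in, double *out),
                  const double *Gu, const double *uTG, double uTu,
                  const double *x, double *y, double *tmp)
{
    double uTx = 0.0, uTGx = 0.0;
    for (int i = 0; i < n; i++) {
        uTx  += x[i];              /* u^T x  (u = vector of all ones)     */
        uTGx += uTG[i] * x[i];     /* (u^T G) x, from the precomputed row */
    }
    gemv_G(x, tmp);                /* tmp = G x   */
    gemv_G(tmp, y);                /* y   = G^2 x */
    for (int i = 0; i < n; i++)    /* rank-one corrections (u_i = 1)      */
        y[i] += Gu[i] * uTx + uTGx + uTu * uTx;
}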

182
Multiple Vector Performance Itanium
183
Sparse Kernels and Optimizations
  • Kernels
  • Sparse matrix-vector multiply (SpMV): y = Ax
  • Sparse triangular solve (SpTS): x = T^(-1)b
  • y = AA^Tx, y = A^TAx
  • Powers (y = A^kx), sparse triple product (RAR^T), ...
  • Optimization techniques (implementation space)
  • Register blocking
  • Cache blocking
  • Multiple dense vectors (x)
  • A has special structure (e.g., symmetric, banded,
    )
  • Hybrid data structures (e.g., splitting,
    switch-to-dense, )
  • Matrix reordering
  • How and when do we search?
  • Off-line Benchmark implementations
  • Run-time Estimate matrix properties, evaluate
    performance models based on benchmark data

184
SpTS Performance Itanium
(See POHLL 02 workshop paper, at ICS 02.)
185
Sparse Kernels and Optimizations
  • Kernels
  • Sparse matrix-vector multiply (SpMV): y = Ax
  • Sparse triangular solve (SpTS): x = T^(-1)b
  • y = AA^Tx, y = A^TAx
  • Powers (y = A^kx), sparse triple product (RAR^T), ...
  • Optimization techniques (implementation space)
  • Register blocking
  • Cache blocking
  • Multiple dense vectors (x)
  • A has special structure (e.g., symmetric, banded,
    )
  • Hybrid data structures (e.g., splitting,
    switch-to-dense, )
  • Matrix reordering
  • How and when do we search?
  • Off-line Benchmark implementations
  • Run-time Estimate matrix properties, evaluate
    performance models based on benchmark data

186
Optimizing AA^Tx
  • Kernel: y = AA^Tx, where A is sparse and x, y are dense
  • Arises in linear programming and in computing the SVD
  • Conventional implementation: compute z = A^Tx,
    then y = Az
  • Elements of A can be reused: y = Σ_k a_k (a_k^T x), where
    the a_k are the columns of A
  • When the a_k represent blocks of columns, register
    blocking can be applied.

187
Optimized AATx Performance Pentium III
188
Current Directions
  • Applying new optimizations
  • Other split data structures (variable block,
    diagonal, )
  • Matrix reordering to create block structure
  • Structural symmetry
  • New kernels (triple product RART, powers Ak, )
  • Tuning parameter selection
  • Building an automatically tuned sparse matrix
    library
  • Extending the Sparse BLAS
  • Leverage existing sparse compilers as code
    generation infrastructure
  • More thoughts on this topic tomorrow

189
Related Work
  • Automatic performance tuning systems
  • PHiPAC [Bilmes, et al., '97], ATLAS [Whaley &
    Dongarra '98]
  • FFTW [Frigo & Johnson '98], SPIRAL [Pueschel, et
    al., '00], UHFFT [Mirkovic and Johnsson '00]
  • MPI collective operations [Vadhiyar & Dongarra
    '01]
  • Code generation
  • FLAME [Gunnels & van de Geijn, '01]
  • Sparse compilers [Bik '99], Bernoulli [Pingali,
    et al., '97]
  • Generic programming: Blitz++ [Veldhuizen '98],
    MTL [Siek & Lumsdaine '98], GMCL [Czarnecki, et
    al. '98], ...
  • Sparse performance modeling
  • [Temam & Jalby '92], [White & Saddayappan '97],
    [Navarro, et al., '96], [Heras, et al., '99],
    [Fraguela, et al., '99], ...

190
More Related Work
  • Compiler analysis, models
  • CROPS [Carter, Ferrante, et al.]; serial sparse
    tiling [Strout '01]
  • TUNE [Chatterjee, et al.]
  • Iterative compilation [O'Boyle, et al., '98]
  • Broadway compiler [Guyer & Lin, '99]
  • [Brewer '95], ADAPT [Voss '00]
  • Sparse BLAS interfaces
  • BLAST Forum (Chapter 3)
  • NIST Sparse BLAS [Remington & Pozo '94],
    SparseLib++
  • SPARSKIT [Saad '94]
  • Parallel Sparse BLAS [Filippone, et al. '96]

191
Context Creating High-Performance Libraries
  • Application performance is dominated by a few
    computational kernels
  • Today: kernels hand-tuned by vendor or user
  • Performance tuning challenges
  • Performance is a complicated function of kernel,
    architecture, compiler, and workload
  • Tedious and time-consuming
  • Successful automated approaches
  • Dense linear algebra: ATLAS/PHiPAC
  • Signal processing: FFTW/SPIRAL/UHFFT

192
Cache Blocked SpMV on LSI Matrix Itanium
193
Sustainable Memory Bandwidth
194
Multiple Vector Performance Pentium 4
195
Multiple Vector Performance Itanium
196
Multiple Vector Performance Pentium 4
197
Optimized AATx Performance Ultra 2i
198
Optimized AATx Performance Pentium 4
199
Tuning Pays Off: PHiPAC
200
Tuning Pays Off: ATLAS
Extends the applicability of PHiPAC. Incorporated in
Matlab (with the rest of LAPACK).
201
Register Tile Sizes (Dense Matrix Multiply)
333 MHz Sun Ultra 2i: a 2-D slice of the 3-D space of
implementations, color-coded by performance in
Mflop/s. 16 registers, but the 2-by-3 tile size is
fastest.
202
High Precision GEMV (XBLAS)
203
High Precision Algorithms (XBLAS)
  • Double-double (a high-precision word represented as a
    pair of doubles)
  • Many variations on these algorithms; we currently
    use Bailey's
  • Exploiting extra-wide registers
  • Suppose s(1), ..., s(n) have f-bit fractions, and SUM
    has an F > f bit fraction
  • Consider the following algorithm for S = Σ_{i=1..n} s(i):
  • Sort so that |s(1)| ≥ |s(2)| ≥ ... ≥ |s(n)|
  • SUM = 0; for i = 1 to n, SUM = SUM + s(i); end
    for; sum = SUM
  • Theorem (D., Hida): Suppose F < 2f (less than double
    precision)
  • If n ≤ 2^(F-f) + 1, then the error ≤ 1.5 ulps
  • If n = 2^(F-f) + 2, then the error ≤ 2^(2f-F) ulps (can be
    >> 1)
  • If n ≥ 2^(F-f) + 3, then the error can be arbitrary (S
    ≠ 0 but sum = 0)
  • Examples
  • s(i) double (f = 53), SUM double extended (F = 64)
  • accurate if n ≤ 2^11 + 1 = 2049
  • Dot product of single precision x(i) and y(i):
  • s(i) = x(i)·y(i) (f = 2·24 = 48), SUM double
    extended (F = 64)
  • accurate if n ≤ 2^16 + 1 = 65537 (a minimal summation
    sketch follows below)
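A minimal C sketch of the sorted-summation idea, assuming the compiler's long double is the 80-bit extended format (F = 64), so that double (f = 53) inputs stay accurate for n up to 2^11 + 1; note the input array is sorted in place:

#include <math.h>
#include <stdlib.h>

static int cmp_abs_desc(const void *pa, const void *pb)
{
    double a = fabs(*(const double *)pa), b = fabs(*(const double *)pb);
    return (a < b) - (a > b);        /* descending by magnitude */
}

double sum_sorted_extended(double *s, int n)
{
    long double SUM = 0.0L;          /* extra-wide accumulator */
    qsort(s, (size_t)n, sizeof(double), cmp_abs_desc);
    for (int i = 0; i < n; i++)
        SUM += s[i];
    return (double)SUM;
}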

204
More Extra Slides
205
Opteron (1.4 GHz, 2.8 Gflop/s peak)
Nonsymmetric peak: 504 Mflop/s
Symmetric peak: 612 Mflop/s
Beats ATLAS DGEMV's 365 Mflop/s
206
Awards
  • Best Paper, Intern. Conf. Parallel Processing,
    2004
  • Performance models for evaluation and automatic
    performance tuning of symmetric sparse
    matrix-vector multiply
  • Best Student Paper, Intern. Conf. Supercomputing,
    Workshop on Performance Optimization via
    High-Level Languages and Libraries, 2003
  • Best Student Presentation too, to Richard Vuduc
  • Automatic performance tuning and analysis of
    sparse triangular solve
  • Finalist, Best Student Paper, Supercomputing 2002
  • To Richard Vuduc
  • Performance Optimization and Bounds for Sparse
    Matrix-vector Multiply
  • Best Presentation Prize, MICRO-33 3rd ACM
    Workshop on Feedback-Directed Dynamic
    Optimization, 2000
  • To Richard Vuduc
  • Statistical Modeling of Feedback Data in an
    Automatic Tuning System

207
Accuracy of the Tuning Heuristics (4/4)
208
Can Match DGEMV Performance
DGEMV