Title: QUESTION
1. QUESTION
- What is Foundations of Computational Mathematics (FoCM)?
2. FOCM
- DATA COMPRESSION
- ADAPTIVE PDE SOLVERS
3. COMPRESSION - ENCODING
- DATA SET (image, signal, surface) -> BIT STREAM 1100111100...
- Function f -> B(f) = (B1, ..., Bn)
5. DECODER
- BIT STREAM B -> FUNCTION g_B (image, signal, surface)
7. Whose Algorithm is Best?
- Test examples?
- Heuristics?
- Fight it out?
- Clearly define the problem (focm)
8. MUST DECIDE
- METRIC TO MEASURE ERROR
- MODEL FOR OBJECTS TO BE COMPRESSED
9. IMAGE PROCESSING
- Model for real images: stochastic, or deterministic (smoothness classes K)
- Metric: human visual system, or mathematical metric (Lp norms)
10. Kolmogorov Entropy
- Given ε > 0, N_ε(K) = smallest number of ε-balls that cover K
12. Kolmogorov Entropy
- Given ε > 0, N_ε(K) = smallest number of ε-balls that cover K
- H_ε(K) := log2(N_ε(K))
- H_ε(K) = number of bits in the best encoding of K with distortion ε
13. Encoders and Kolmogorov Entropy
[Figure: compact set K covered by ε-balls with centers x_j]
Codebook:
  Approximant | Code
  x1          | 0000
  x2          | 0001
  x3          | 0010
  x4          | 0011
  ...         | ...
  x_N(ε)      | b_m ... b2 b1 b0
Max number of bits ≈ log2 N(ε)
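The codebook idea on this slide can be sketched in a few lines. A toy illustration, assuming K = [0, 1] with the absolute-value metric; the helper names (build_codebook, encode, decode) are mine, not from the source:

```python
import math

def build_codebook(eps):
    """Centers x_j of eps-balls covering K = [0, 1]."""
    n = math.ceil(1.0 / (2.0 * eps))            # N(eps) balls suffice
    centers = [(2 * j + 1) * eps for j in range(n)]
    bits = max(1, math.ceil(math.log2(n)))      # code length ~ log2 N(eps)
    return centers, bits

def encode(f, centers, bits):
    """Map f to the bit string of the nearest center (distortion <= eps)."""
    j = min(range(len(centers)), key=lambda i: abs(f - centers[i]))
    return format(j, "0{}b".format(bits))

def decode(code, centers):
    return centers[int(code, 2)]

centers, bits = build_codebook(1 / 16)          # 8 centers, 3-bit codes
code = encode(0.40, centers, bits)              # -> "011"
```

Decoding the 3-bit string returns the center x_3 = 7/16, within ε = 1/16 of the input, which is the distortion guarantee behind the log2 N(ε) bit count.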
14. ENTROPY NUMBERS
- d_n(K) := inf { ε : H_ε(K) ≤ n }
- This is the best distortion for K with bit budget n
- Typically d_n(K) ≈ n^(-s)
15. SUMMARY
- Find right metric
- Find right classes
- Determine Kolmogorov entropy
- Build encoders that give these entropy bounds
16. COMPACT SETS IN Lp FOR d=2
[Figure: smoothness α versus 1/q. A space L_q with smoothness α is the point (1/q, α). The Sobolev embedding line 1/q = α/2 + 1/p passes through (1/p, 0); classes above this line are compact in Lp.]
17. COMPACT SETS IN L2 FOR d=2
[Figure: the same diagram for p = 2. The embedding line 1/q = α/2 + 1/2 passes through (1/2, 0); BV sits at the point (1, 1).]
18. ENTROPY OF K
- Entropy numbers of Besov balls B^α(L_q) in L_p behave like n^(-α/d)
- Is there a practical encoder achieving this simultaneously for all Besov balls?
- ANSWER: YES
- Cohen-Dahmen-Daubechies-DeVore wavelet tree based encoder
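The compressibility behind these entropy rates is easiest to see with the simplest wavelet system. This is not the CDDD tree encoder itself, just a plain best n-term Haar approximation, written as a minimal pure-Python sketch (function names are mine):

```python
import math

def haar(v):
    """L2-normalized Haar coefficients of a length-2^k list."""
    v, out = list(v), []
    while len(v) > 1:
        avg = [(v[2*i] + v[2*i+1]) / math.sqrt(2) for i in range(len(v) // 2)]
        det = [(v[2*i] - v[2*i+1]) / math.sqrt(2) for i in range(len(v) // 2)]
        out = det + out                       # coarse details precede fine ones
        v = avg
    return v + out

def inverse_haar(c):
    v, pos = [c[0]], 1
    while pos < len(c):
        det = c[pos:pos + len(v)]
        pos += len(v)
        v = [x for a, d in zip(v, det)
             for x in ((a + d) / math.sqrt(2), (a - d) / math.sqrt(2))]
    return v

def n_term(v, n):
    """Best n-term approximation: keep the n largest coefficients."""
    c = haar(v)
    keep = set(sorted(range(len(c)), key=lambda i: -abs(c[i]))[:n])
    return inverse_haar([c[i] if i in keep else 0.0 for i in range(len(c))])

# A piecewise-constant signal with one jump is captured by just 2 coefficients:
signal = [1, 1, 1, 1, 5, 5, 5, 5]
approx = n_term(signal, 2)
```

Data that is smooth away from isolated jumps concentrates its energy in few coefficients, which is why wavelet thresholding can meet the Besov-ball entropy rates.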
19. COHEN-DAHMEN-DAUBECHIES-DEVORE
- Decompose the image into wavelet coefficients organized on trees T_j
- Partition the tree growth into subtrees: Δ_j = T_j \ T_(j-1)
20. WHAT DOES THIS BUY YOU?
- Explains performance of the best encoders (Shapiro, Said-Pearlman)
- Classifies images according to their compressibility (DeVore-Lucier)
- Handles metrics other than L2
- Tells where to improve performance: better metric, better classes (e.g. not rearrangement invariant)
21. DTED DATA SURFACE
[Figure: Grand Canyon DTED surface]
22. POSTINGS
[Figure: DTED postings]
24. FIDELITY
- L2 metric not appropriate
- L∞ better
25. OFFSET
- If the surface is offset by a lateral error of ε, the L∞ norm may be huge
[Figure: L∞ error of a laterally shifted surface]
26. OFFSET
- But the Hausdorff error is not large.
[Figure: L∞ error versus Hausdorff error]
27. CAN WE FIND d_n(K)?
- K = bounded functions: d_N(K) ≈ n^(-1), for N ≈ n^(d+1)
- K = continuous functions: d_N(K) ≈ n^(-1), for N ≈ n^d log n
- K = bounded variation, d = 1: d_n(K) ≈ n^(-1)
- K = characteristic functions of convex sets: d_n(K) ≈ n^(-1)
28. Example: functions in BV, d=1
- Assume f monotone; encode the first (j_k) and last (j'_k) square in each column k. Then Σ_k |j_k - j'_k| ≤ M n.
- Can encode all such j_k with C M n bits.
[Figure: grid columns k, with first square j_k and last square j'_k marked in each column]
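The bit count in this argument can be realized with a unary delta code. A minimal sketch, assuming the column indices j_k are non-decreasing integers (monotone f); the function names are mine:

```python
def encode_monotone(heights):
    """Unary delta code: '1' * (rise in column k) + '0'; total bits = n + j_(n-1)."""
    bits, prev = [], 0
    for j in heights:
        bits.append("1" * (j - prev) + "0")
        prev = j
    return "".join(bits)

def decode_monotone(code):
    heights, cur = [], 0
    for ch in code:
        if ch == "1":
            cur += 1
        else:
            heights.append(cur)
    return heights

heights = [0, 2, 2, 5]            # first-square indices j_k in columns k = 0..3
code = encode_monotone(heights)   # n + j_(n-1) = 4 + 5 = 9 bits
```

Since the total rise of a monotone f is bounded by its variation, the code length is at most n plus the total rise, i.e. of order (M + 1)·n, matching the C·M·n bit count on the slide.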
29. ANTICIPATED IMPACT - DTED
- Clearly define the problem
- Expose new metrics to data compression community
- Result in better and more efficient encoders
30. NUMERICAL PDEs
- u solution to PDE
- u_h or u_n is a numerical approximation
- u_h typically piecewise polynomial (FEM); u_n a linear combination of n wavelets
- Different from image processing because u is unknown
31. MAIN INGREDIENTS
- Metric to measure error
- Number of degrees of freedom / computations
- Linear (SFEM) or nonlinear (adaptive) method of approximation using piecewise polynomials or wavelets
- Inversion of an operator
- Right question: compare the error with the best error that could be obtained using full knowledge of u
32. EXAMPLE OF ELLIPTIC EQUATION
- Poisson problem: -Δu = f in Ω, u = 0 on ∂Ω
33. CLASSICAL ELLIPTIC THEOREM
- Variational formulation gives the energy norm H^t
- THEOREM: If u ∈ H^(t+s), then SFEM gives ||u - u_h||_(H^t) ≤ C h^s ||u||_(H^(t+s))
- Can replace H^(t+s) by B^(s+t)_∞(L2)
- Approximation order h^s is equivalent to u ∈ B^(s+t)_∞(L2)
34. HYPERBOLIC
- Conservation law: u_t + div_x(f(u)) = 0, u(x, 0) = u_0(x)
- THEOREM: If u_0 ∈ BV, then ||u(·,t) - u_h(·,t)||_(L1) ≤ C h^(1/2) ||u_0||_BV
- u_0 ∈ BV implies u(·,t) ∈ BV; this is equivalent to approximation of order h in L1
35. ADAPTIVE METHODS
- Wavelet methods (WAM) approximate u by a linear combination of n wavelets
- AFEM approximates u by a piecewise polynomial on a partition generated by adaptive subdivision
36. FORM OF NONLINEAR APPROXIMATION
- Good theorem: for a range of s > 0, if u can be approximated with accuracy O(n^(-s)) using full knowledge of u, then the numerical algorithm produces the same accuracy using only information about u gained during the computation.
- Here n is the number of degrees of freedom
- Best theorem: in addition, bound the number of computations by Cn
37. AFEMs
- Initial partition P_0 and Galerkin solution u_0
- General iterative step: P_j -> P_(j+1) and u_j -> u_(j+1)
- i. Examine the residual (a posteriori error estimators) to determine cells to be subdivided (marked cells)
- ii. Subdivide marked cells; this results in hanging nodes
- iii. Remove hanging nodes by further subdivision (completion), resulting in P_(j+1)
38. FIRST FUNDAMENTAL THEOREMS
- Doerfler, Morin-Nochetto-Siebert
- Introduce a strategy for marking cells: a posteriori estimators plus bulk chasing
- Rule for subdivision: newest vertex bisection
- THEOREM (D, MNS): For the Poisson problem the algorithm converges
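The bulk-chasing (Doerfler) marking step has a simple greedy form. A sketch assuming the a posteriori indicators are given per cell; the function name and the example values are mine:

```python
def doerfler_mark(indicators, theta=0.5):
    """Greedily mark cells, largest indicator first, until the marked set
    carries at least a theta-fraction of the total estimated error."""
    total = sum(indicators.values())
    marked, acc = [], 0.0
    for cell, eta in sorted(indicators.items(), key=lambda kv: -kv[1]):
        if acc >= theta * total:
            break
        marked.append(cell)
        acc += eta
    return marked

# Four cells; the two largest indicators already carry 80% of the error:
eta = {"T1": 0.5, "T2": 0.3, "T3": 0.15, "T4": 0.05}
marked = doerfler_mark(eta, theta=0.6)   # -> ["T1", "T2"]
```

Only the marked cells would then be subdivided (newest vertex bisection) and the partition completed to remove hanging nodes.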
39. BINEV-DAHMEN-DEVORE
- New AFEM algorithm
- 1. Add coarsening step
- 2. Fundamental analysis of completion
- 3. Utilize principles of nonlinear approximation
40. BINEV-DAHMEN-DEVORE
- THEOREM (BDD): Poisson problem, for a certain range of s > 0. If u can be approximated with order O(n^(-s)) in the energy norm using full knowledge of u, then the BDD adaptive algorithm does the same. Moreover, the number of computations is of order O(n).
41. ADAPTIVE WAVELET METHODS
- General elliptic problem: Au = f
- Problem in wavelet coordinates: A u = f
- A : ℓ2 -> ℓ2 is bounded with bounded inverse: ||Av|| ≈ ||v||
42. FORM OF WAVELET METHODS
- Choose a set Λ of wavelet indices
- Find the Galerkin solution u_Λ from span{ψ_λ : λ ∈ Λ}
- Check the residual and update Λ -> Λ'
43. COHEN-DAHMEN-DEVORE: FIRST VIEW
- For a finite index set Λ: A_Λ u_Λ = f_Λ, with u_Λ the Galerkin solution
- Generate sets Λ_j, j = 0, 1, 2, ...
- Form of algorithm:
- 1. Bulk chase on the residual (several iterations): Λ_j -> Λ̄_j
- 2. Coarsen: Λ̄_j -> Λ_(j+1)
- 3. Stop when the residual error is small enough
44. ADAPTIVE WAVELETS: COHEN-DAHMEN-DEVORE
- THEOREM (CDD): For SPD problems. If u can be approximated with O(n^(-s)) using full knowledge of u (best n-term approximation), then the CDD algorithm does the same. Moreover, the number of computations is O(n).
45. CDD: SECOND VIEW
- u^(n+1) = u^n - ω (A u^n - f)
- This infinite-dimensional iterative process converges
- Find fast and efficient methods to compute A u^n and f when u^n is finitely supported
- Compression of the matrix-vector multiplication A u^n
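In finite dimensions the iteration on this slide is a damped Richardson scheme. A sketch with a matrix, right-hand side, and damping parameter chosen by me for illustration (for SPD A it contracts whenever 0 < ω < 2/λ_max):

```python
def richardson(A, f, omega, steps=200):
    """Damped Richardson iteration: u <- u - omega * (A u - f)."""
    n = len(f)
    u = [0.0] * n
    for _ in range(steps):
        residual = [sum(A[i][k] * u[k] for k in range(n)) - f[i]
                    for i in range(n)]
        u = [u[i] - omega * residual[i] for i in range(n)]
    return u

# SPD example with eigenvalues 1 and 3; omega = 0.5 gives contraction factor 0.5:
A = [[2.0, 1.0], [1.0, 2.0]]
f = [3.0, 3.0]
u = richardson(A, f, omega=0.5)   # converges to [1.0, 1.0]
```

In CDD the same iteration runs on the infinite ℓ2 problem, with A u^n evaluated only approximately through matrix compression.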
46. SECOND VIEW GENERALIZES
- Wide range of semi-elliptic and nonlinear problems
- THEOREM (CDD): For a wide range of linear and nonlinear elliptic problems. If u can be approximated with O(n^(-s)) using full knowledge of u (best n-term approximation), then the CDD algorithm does the same. Moreover, the number of computations is O(n).
47. WHAT WE LEARNED
- Proper coarsening controls the size of the problem
- Remain with the infinite-dimensional problem as long as possible
- Adaptivity is a natural stabilizer, e.g. LBB conditions for saddle point problems are not necessary
48. WHAT focm CAN DO FOR YOU
- Clearly frame the computational problem
- Give a benchmark of optimal performance
- Discretization/Analysis/Solution interplay
- Identify computational issues not apparent in computational heuristics
- Guide the development of optimal algorithms