Title: Regularization with Singular Energies: Error Estimation and Numerics

1. Regularization with Singular Energies: Error Estimation and Numerics
Institut für Numerische und Angewandte Mathematik, Westfälische Wilhelms-Universität Münster
martin.burger_at_uni-muenster.de
2. Collaborations
- Stan Osher, Jinjun Xu, Guy Gilboa (UCLA)
- Lin He (Linz / UCLA)
- Klaus Frick, Otmar Scherzer (Innsbruck)
- Don Goldfarb, Wotao Yin (Columbia)
3. Introduction
- Classical regularization schemes for inverse problems and image smoothing are based on Hilbert spaces and quadratic energy functionals.
- Example: Tikhonov regularization for linear operator equations Au = f,
  u_λ = argmin_u ( ||Au - f||² + λ||Lu||² ).
4. Introduction
- These energy functionals are strictly convex and differentiable, so standard tools from analysis and computation (Newton methods etc.) can be used.
- Disadvantage: possible oversmoothing, seen from the first-order optimality condition. Tikhonov yields
  A*(Au - f) + λ L*L u = 0,
  hence u is in the range of (L*L)⁻¹A*.
5. Introduction
- Classical inverse problem: integral equation of the first kind, regularization in L² (L = Id), A a Fredholm integral operator with kernel k.
- The smoothness of the regularized solution is determined by the smoothness of the kernel.
- For typical convolution kernels like Gaussians, u is analytic!
6. Image Smoothing
- Classical image smoothing: data in L² (A = Id), L = gradient (H¹ seminorm).
- On a reasonable domain, standard elliptic regularity implies u ∈ H²(Ω).
- The reconstruction contains no edges and blurs the image (convolution with the Green kernel).
7. Sparse Reconstructions?
- Let A be an operator on ℓ² (basis representation of a Hilbert space operator, e.g. in a wavelet basis).
- Penalization by the squared norm (L = Id):
  min_u ||Au - f||² + λ Σ_k u_k².
- Optimality condition for the components of u:
  (A*(Au - f))_k + λ u_k = 0.
- The decay of the components is determined by A. Even if the data are generated by a sparse signal (finite number of nonzeros), the reconstruction is not sparse!
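This non-sparsity can be seen in a few lines of NumPy. The sketch below (operator, signal, and parameter values are all illustrative, not from the talk) blurs a two-spike signal with a hypothetical Gaussian operator and reconstructs it with the squared-norm penalty:

```python
import numpy as np

n = 50
g = np.zeros(n)
g[10], g[30] = 1.0, -2.0                      # sparse "true" signal: 2 nonzeros

# Hypothetical smoothing operator A (Gaussian kernel, illustrative width)
i = np.arange(n)
A = np.exp(-0.1 * (i[:, None] - i[None, :]) ** 2)

f = A @ g                                     # data generated by the sparse signal
lam = 1e-3

# Squared-norm (Tikhonov) penalty: u = (A^T A + lam*I)^{-1} A^T f
u = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ f)

# The reconstruction smears out: far more than 2 significant components
n_significant = np.count_nonzero(np.abs(u) > 1e-3 * np.abs(u).max())
```

The quadratic penalty mixes all components through A, so no component is set exactly to zero; this is precisely the contrast with the singular (ℓ¹) energies introduced below.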
8. Error Estimates
- Error estimates for ill-posed problems can be obtained only under stronger conditions (source conditions) of the form u = A*w.
- cf. Groetsch, Engl-Hanke-Neubauer, Colton-Kress, Natterer, Engl-Kunisch-Neubauer.
- Equivalent to u being the minimizer of a Tikhonov functional with suitable data.
- For many inverse problems this is unrealistic due to extreme smoothness assumptions.
9. Error Estimates
- The condition can be weakened to approximate source conditions.
- cf. Neubauer et al. (algebraic), Hohage (logarithmic), Mathé-Pereverzyev (general).
- Advantage: more realistic conditions.
- Disadvantage: the estimates get worse.
10. Singular Energies
- Let A be the identity on ℓ².
- Nonlinear penalization:
  min_u ½ Σ_k (u_k - f_k)² + λ Σ_k r_k(u_k).
- Optimality condition for the components of u:
  u_k - f_k + λ r_k'(u_k) = 0.
- If r_k is smooth and strictly convex, a Taylor expansion yields a behaviour analogous to the quadratic case.
11. Singular Energies
- The example becomes more interesting for a singular (nonsmooth) energy.
- Take r_k(u_k) = |u_k|, i.e. the ℓ¹ norm.
- Then the optimality condition becomes
  u_k - f_k + λ p_k = 0,  p_k ∈ ∂|u_k|.
12. Singular Energies
- The result is the well-known soft-thresholding (soft shrinkage) of wavelets (Donoho et al., Chambolle et al.):
  u_k = sign(f_k) · max(|f_k| - λ, 0).
- It yields a sparse signal.
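The thresholding rule takes only a few lines; a minimal sketch (variable names are mine):

```python
import numpy as np

def soft_threshold(f, lam):
    """Componentwise minimizer of 0.5*(u - f)^2 + lam*|u|."""
    return np.sign(f) * np.maximum(np.abs(f) - lam, 0.0)

f = np.array([2.0, -0.3, 0.05, -1.5, 0.0])
u = soft_threshold(f, lam=0.5)
# components with |f_k| <= lam are set exactly to zero -> sparse signal
```

Large coefficients are shrunk by λ (the systematic error revisited later in the talk); small ones are annihilated, which is where the sparsity comes from.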
13. Singular Energies
- Image smoothing: try a nonlinear energy ∫ r(|∇u|) for penalization.
- The optimality condition is a nonlinear PDE:
  u - λ div( r'(|∇u|) ∇u / |∇u| ) = f.
- If r is strictly convex: usual smoothing behaviour.
- If r is not convex: the problem is not well-posed.
- Try the singular case at the borderline: r linear.
14. Total Variation Methods
- The simplest choice, r(s) = s, yields the total variation method,
  |u|_TV = ∫ |∇u|.
- Total variation methods are popular in imaging (and inverse problems), since they
  - keep sharp edges,
  - eliminate oscillations (noise),
  - create new, nice mathematics.
15. ROF Model
- ROF model for denoising:
  min_u ½ ||u - f||² + λ |u|_TV.
- Rudin-Osher-Fatemi 89/92, Acar-Vogel 93, Chambolle-Lions 96, Vogel 95/96, Scherzer-Dobson 96, Chavent-Kunisch 98, Meyer 01, …
16. ROF Model
- Optimality condition for ROF denoising:
  u - f + λ p = 0,  p ∈ ∂|u|_TV.
- The dual variable p enters!
- p is a subgradient of the convex functional |·|_TV.
17. ROF Model
Reconstruction (code by Jinjun Xu): clean image, noisy image, ROF result.
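The reconstruction shown in the talk uses Jinjun Xu's code; as a self-contained illustration, here is a minimal NumPy sketch of Chambolle's projection algorithm on the ROF dual problem (step size and iteration count are illustrative), solving min_u ½||u - f||² + λ|u|_TV:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with homogeneous Neumann boundary."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, chosen as the negative adjoint of grad."""
    dx = np.zeros_like(px)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy = np.zeros_like(py)
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def rof_chambolle(f, lam, tau=0.125, n_iter=200):
    """ROF denoising via Chambolle's dual fixed-point iteration (tau <= 1/8)."""
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        denom = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / denom
        py = (py + tau * gy) / denom
    return f - lam * div(px, py)
```

The dual variable (px, py) is projected onto the unit ball in every step; note that -div(px, py) is exactly the subgradient p appearing in the optimality condition above.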
18. ROF Model
- The ROF model denoises cartoon images, resp. computes the cartoon of an arbitrary image.
19. Numerical Differentiation with TV
- From the Master's thesis of Markus Bachmayr, 2007.
20. Singular Energies
- Methods with singular energies offer great potential, but still have some shortcomings:
  - difficult to analyze and to obtain error estimates,
  - systematic errors (clean images are not reconstructed perfectly),
  - computational challenges,
  - some extensions to complicated imaging tasks are not well understood (e.g. inpainting).
21. Singular Energies
- General problem:
  min_u ½ ||Au - f||² + λ J(u)
- leads to the optimality condition
  A*(Au - f) + λ p = 0,  p ∈ ∂J(u).
- First of all dual smoothing: the subgradient p is in the range of A*.
22. Singular Energies
- For smooth and strictly convex energies the subdifferential is a singleton: dual smoothing directly results in a primal one!
- For singular energies, subdifferentials are usually multivalued. The consequence is a possibility to break the primal smoothing.
23. Error Estimation
- First question for error estimation: estimate the difference between u (the minimizer of ROF) and f in terms of λ.
- An estimate in the L² norm is standard, but does not yield information about edges.
- An estimate in the BV norm is too ambitious: even an arbitrarily small difference in edge location can yield a BV-norm difference of order one!
24. Error Estimation
- We need a better error measure, stronger than L², weaker than BV.
- Possible choice: the Bregman distance (Bregman 67),
  D_J(u, v) = J(u) - J(v) - ⟨J'(v), u - v⟩.
- A real distance for a strictly convex differentiable functional, but not symmetric.
- Symmetric version:
  D_J(u, v) + D_J(v, u) = ⟨J'(u) - J'(v), u - v⟩.
25. Error Estimation
- Bregman distances reduce to known measures for standard energies.
- Example 1: J(u) = ½ ||u||².
- Subgradient = gradient = u.
- The Bregman distance becomes D(u, v) = ½ ||u - v||².
26. Error Estimation
- Bregman distances reduce to known measures for standard energies.
- Example 2: J(u) = ∫ u log u.
- Subgradient = gradient = log u + 1.
- The Bregman distance becomes the Kullback-Leibler divergence (relative entropy).
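Both examples can be verified numerically; a small sketch (function names are mine):

```python
import numpy as np

def bregman(J, gradJ, u, v):
    """Bregman distance D_J(u, v) = J(u) - J(v) - <J'(v), u - v>."""
    return J(u) - J(v) - np.dot(gradJ(v), u - v)

u = np.array([1.0, 2.0, 0.5])
v = np.array([0.5, 1.0, 1.5])

# Example 1: J(u) = 0.5*||u||^2  ->  Bregman distance = 0.5*||u - v||^2
d_quad = bregman(lambda x: 0.5 * np.dot(x, x), lambda x: x, u, v)

# Example 2: J(u) = sum u log u  ->  generalized Kullback-Leibler divergence
d_kl = bregman(lambda x: np.sum(x * np.log(x)),
               lambda x: np.log(x) + 1.0, u, v)
```

For the entropy, expanding the definition gives D(u, v) = Σ ( u log(u/v) - u + v ), the generalized KL divergence.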
27. Error Estimation
- Total variation is neither strictly convex nor differentiable.
- Define a generalized Bregman distance for each subgradient p ∈ ∂J(v):
  D_J^p(u, v) = J(u) - J(v) - ⟨p, u - v⟩.
- Symmetric version: ⟨p_u - p_v, u - v⟩ with p_u ∈ ∂J(u), p_v ∈ ∂J(v).
- Kiwiel 97, Chen-Teboulle 97.
28. Error Estimation
- For energies homogeneous of degree one we have J(v) = ⟨p, v⟩ for p ∈ ∂J(v).
- The Bregman distance becomes
  D_J^p(u, v) = J(u) - ⟨p, u⟩ ≥ 0.
29. Error Estimation
- The Bregman distance for singular energies is not a strict distance; it can be zero for u ≠ v.
- In particular, d_TV is zero for a contrast change (Resmerita-Scherzer 06).
- The Bregman distance is still nonnegative (convexity).
- The Bregman distance can provide information about edges.
30. Error Estimation
- Let v be piecewise constant with white background and color values on regions.
- Then we obtain subgradients built from the signed distance functions of the region boundaries.
31. Error Estimation
- The Bregman distances are given by the corresponding boundary terms.
- In the limit we obtain an analogous expression for u piecewise continuous.
32. Error Estimation
- For an estimate in terms of λ we need a smoothness condition on the data: the source condition q ∈ ∂|f|_TV for some q ∈ L².
- Optimality condition for ROF:
  u - f + λ p = 0,  p ∈ ∂|u|_TV.
33. Error Estimation
- Subtract q from the optimality condition and test with u - f:
  ||u - f||² + λ⟨p - q, u - f⟩ + λ⟨q, u - f⟩ = 0.
- Estimate for the Bregman distance (mb-Osher 04):
  D^q(u, f) ≤ λ/2 ||q||².
34. Error Estimation
- In practice we have to deal with noisy data f (a perturbation of some exact data g), ||f - g|| ≤ δ.
- Estimate for the Bregman distance, with q ∈ ∂|g|_TV:
  D^q(u, g) ≤ δ²/(2λ) + λ/2 ||q||².
35. Error Estimation
- Optimal choice of the penalization parameter: λ ~ δ,
- i.e. of the order of the noise variance.
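Assuming an estimate of the standard mb-Osher 04 form, D ≤ δ²/(2λ) + (λ/2)‖q‖² (my reconstruction of the formula lost from the slide), the optimal parameter follows by balancing the two terms:

```latex
\frac{d}{d\lambda}\Big(\frac{\delta^{2}}{2\lambda}+\frac{\lambda}{2}\|q\|^{2}\Big)
  = -\frac{\delta^{2}}{2\lambda^{2}}+\frac{\|q\|^{2}}{2} = 0
\quad\Longrightarrow\quad
\lambda = \frac{\delta}{\|q\|},
\qquad
D(u,g) \le \delta\,\|q\| = O(\delta).
```

So the optimal λ scales linearly with the noise level δ, and the Bregman-distance error is of order δ.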
36. Error Estimation
- Direct extension to deconvolution / linear inverse problems under the standard source condition (mb-Osher 04).
- Extension: stronger estimates under stronger conditions (Resmerita 05).
- Nonlinear inverse problems: Resmerita-Scherzer 06.
37. Error Estimation: Future Tasks
- Extension to other fitting functionals (relative entropy, log-likelihood functionals for different noise models).
- Extension to anisotropic TV (interpretation of subgradients).
- Extension to geometric problems (segmentation by Chan-Vese, Mumford-Shah): use the exact relaxation in BV with bound constraints (Chan-Esedoglu-Nikolova 04).
38. Discretization
- Natural choice: primal discretization with piecewise constant functions on a grid.
- Problem 1: numerical analysis (characterization of discrete subgradients).
- Problem 2: the discrete problems are the same for any anisotropic version of the total variation.
39. Discretization
- In multiple dimensions, nonconvergence of the primal discretization for the isotropic TV (p = 2) can be shown.
- Convergence of the anisotropic TV (p = 1) on rectangular aligned grids: Fitzpatrick-Keeling 1997.
40. Primal-Dual Discretization
- Alternative: perform a primal-dual discretization of the optimality system (variational inequality) with the convex set
  K = { g : |g(x)| ≤ 1 a.e. }.
41. Primal-Dual Discretization
- Discretization of the variational inequality.
- Discretized convex set with appropriate elements (piecewise linear in 1D, Raviart-Thomas in multi-D).
42. Primal / Primal-Dual Discretization
- In 1D, primal, primal-dual, and dual discretization are equivalent.
- Error estimate for the Bregman distance by analogous techniques.
- Note that only the natural condition on the discrete dual variable is needed to show the estimate.
43. Primal / Primal-Dual Discretization
- In multi-D: similar estimates, but additional work since the projection of a subgradient is not a discrete subgradient.
- The primal-dual discretization is equivalent to discretized dual minimization (Chambolle 03, Kunisch-Hintermüller 04). This can be used for existence of a discrete solution and stability of p.
- mb 07
44. Cartesian Grids
- For most imaging applications Cartesian grids are used; the primal-dual discretization can be reinterpreted as a finite difference scheme in this setup.
- The value of the image intensity corresponds to the color in a pixel of width h around the grid point.
- Raviart-Thomas elements on Cartesian grids are particularly easy: the first component is piecewise linear in x and piecewise constant in y, z, etc.
- This leads to a simple finite difference scheme on a staggered grid.
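The key consistency property of such a staggered scheme is that the discrete divergence is the negative adjoint of the discrete gradient. A quick numerical check (my own sketch with forward/backward differences, 1D for brevity):

```python
import numpy as np

def grad_1d(u):
    """Forward differences, homogeneous Neumann at the right boundary."""
    g = np.zeros_like(u)
    g[:-1] = u[1:] - u[:-1]
    return g

def div_1d(p):
    """Backward differences with boundary terms chosen so that div = -grad^T."""
    d = np.zeros_like(p)
    d[0] = p[0]
    d[1:-1] = p[1:-1] - p[:-2]
    d[-1] = -p[-2]
    return d

# adjointness: <grad u, p> = -<u, div p> for all u, p
rng = np.random.default_rng(1)
u = rng.standard_normal(16)
p = rng.standard_normal(16)
lhs = np.dot(grad_1d(u), p)
rhs = -np.dot(u, div_1d(p))
```

This exact discrete duality is what makes the dual variable p on the staggered grid a legitimate discrete subgradient, and it is the property the multi-D Raviart-Thomas construction preserves.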
45. Iterative Refinement / ISS
- ROF minimization has a systematic error: the total variation of the reconstruction is smaller than the total variation of the clean image. Image features are left in the residual f - u.
- Images: g clean, f noisy, u ROF, f - u residual.
46. Iterative Refinement / ISS
- Idea: add the residual (noise) back to the image to pronounce the features that were decreased too much, then do ROF again; iterative procedure.
- Osher-mb-Goldfarb-Xu-Yin 04
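The add-back-and-denoise loop can be sketched generically. In the papers the inner step is ROF; for a self-contained runnable example the sketch below uses soft-thresholding as the denoiser (matching the shrinkage discussion later in the talk); all names are mine:

```python
import numpy as np

def soft_threshold(x, lam):
    """Componentwise minimizer of 0.5*(u - x)^2 + lam*|u|."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def iterative_refinement(f, denoise, n_iter=5):
    """Iterative refinement: add the residual back to the data, denoise again."""
    v = np.zeros_like(f)              # accumulated residual
    u = np.zeros_like(f)
    for _ in range(n_iter):
        u = denoise(f + v)            # denoise the residual-corrected data
        v += f - u                    # add back the features removed too much
    return u

f = np.array([3.0, -2.0, 0.2, 0.0])
u = iterative_refinement(f, lambda x: soft_threshold(x, 0.8), n_iter=3)
```

After a few iterations the large components lose their systematic shrinkage bias (u equals f there), while small components stay suppressed; this is the finite-dimensional analogue of the TV loss being restored.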
47. Iterative Refinement / ISS
- Improves the reconstructions significantly.
48. Iterative Refinement / ISS
49. Iterative Refinement / ISS
- Simple observation from the optimality condition: the accumulated residual corresponds to a subgradient.
- Consequently, iterative refinement is equivalent to the Bregman iteration
  u_{k+1} = argmin_u ( ½||u - f||² + λ D^{p_k}(u, u_k) ).
50. Iterative Refinement / ISS
- The choice of the parameter λ is less important; it can be kept small (oversmoothing). The regularizing effect comes from appropriate stopping.
- Quantitative stopping rules are available, or "stop when you are happy" (S.O.).
- The limit λ → 0 can be studied; it yields a gradient flow for the dual variable (inverse scale space): mb-Gilboa-Osher-Xu 06, mb-Frick-Osher-Scherzer 06
51. Iterative Refinement / ISS
- Non-quadratic fidelity is possible; some caution is needed for L1 fidelity: He-mb-Osher 05, mb-Frick-Osher-Scherzer 06
- Error estimation in the Bregman distance: mb-He-Resmerita 07
52. Iterative Refinement
- MRI data: Siemens Magnetom Avanto 1.5 T scanner; He, Chang, Osher, Fang, Speier 06
- Penalization: TV / wavelet
53. Iterative Refinement
54. Iterative Refinement
55. Surface Smoothing
- Smoothing of surfaces obtained as level sets.
- 3D ultrasound, Kretz / GE Med.
56. Inverse Scale Space
57. Iterative Refinement / ISS
- Application to other regularization techniques, e.g. wavelet thresholding, is straightforward.
- Starting from soft shrinkage, iterated refinement yields firm shrinkage; the inverse scale space becomes hard shrinkage (Osher-Xu 06).
- The Bregman distance is a natural sparsity measure; the source condition just requires a sparse signal, and the number of nonzero components is the smoothness measure in the error estimates.
58. Download and Contact
- Papers and talks: www.math.uni-muenster.de/u/burger
- e-mail: martin.burger_at_uni-muenster.de