Regularization with Singular Energies: Error Estimation and Numerics

Transcript
1
Regularization with Singular Energies: Error Estimation and Numerics
  • Martin Burger

Institut für Numerische und Angewandte Mathematik,
Westfälische Wilhelms-Universität Münster
martin.burger_at_uni-muenster.de
2
Collaborations
  • Stan Osher, Jinjun Xu, Guy Gilboa (UCLA)
  • Lin He (Linz / UCLA)
  • Klaus Frick, Otmar Scherzer (Innsbruck)
  • Don Goldfarb, Wotao Yin (Columbia)

3
Introduction
  • Classical regularization schemes for inverse
    problems and image smoothing are based on Hilbert
    spaces and quadratic energy functionals
  • Example: Tikhonov regularization for linear
    operator equations,
        $\min_u\ \tfrac{1}{2}\|Au - f\|^2 + \tfrac{\alpha}{2}\|Lu\|^2$

4
Introduction
  • These energy functionals are strictly convex and
    differentiable, so standard tools from analysis and
    computation (Newton-type methods etc.) can be used
  • Disadvantage: possible oversmoothing, seen from the
    first-order optimality condition
  • Tikhonov yields $A^*(Au - f) + \alpha L^*L\,u = 0$.
    Hence u is in the range of $(L^*L)^{-1}A^*$

5
Introduction
  • Classical inverse problem: integral equation of
    the first kind, regularization in $L^2$ (L = Id), A a
    Fredholm integral operator with kernel k,
        $(Au)(x) = \int k(x,y)\,u(y)\,dy$
  • Smoothness of the regularized solution is determined
    by the smoothness of the kernel
  • For typical convolution kernels like Gaussians,
    u is analytic!
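  • To see this: with L = Id the optimality condition
    from the previous slide gives
        $u = \tfrac{1}{\alpha}\,A^*(f - Au)$, i.e.
        $u(x) = \tfrac{1}{\alpha}\int k(y,x)\,(f - Au)(y)\,dy$,
    so u inherits the smoothness of the kernel k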

6
Image Smoothing
  • Classical image smoothing: data in $L^2$ (A = Id),
    L = gradient ($H^1$-seminorm)
  • Optimality condition is the elliptic PDE
    $u - \alpha \Delta u = f$ (with natural boundary
    conditions); on a reasonable domain, standard
    elliptic regularity implies $u \in H^2$
  • Reconstruction contains no edges, blurs the
    image (convolution with the Green kernel)

7
Sparse Reconstructions ?
  • Let A be an operator on $\ell^2$ (basis
    representation of a Hilbert space operator,
    e.g. in a wavelet basis)
  • Penalization by the squared norm (L = Id)
  • Optimality condition for the components of u:
        $\alpha u_k + (A^*(Au - f))_k = 0$
  • Decay of the components is determined by A. Even if
    the data are generated by a sparse signal (finite
    number of nonzeros), the reconstruction is not
    sparse!

8
Error estimates
  • Error estimates for ill-posed problems can be
    obtained only under stronger conditions (source
    conditions)
  • cf. Groetsch, Engl-Hanke-Neubauer, Colton-Kress,
    Natterer. Engl-Kunisch-Neubauer.
  • Equivalent to u being minimizer of Tikhonov
    functional with data
  • For many inverse problems unrealistic due to
    extreme smoothness assumptions

9
Error estimates
  • Condition can be weakened to
  • cf. Neubauer et al (algebraic), Hohage
    (logarithmic), Mathe-Pereverzyev (general).
  • Advantage more realistic conditions
  • Disadvantage Estimates get worse with f

10
Singular Energies
  • Let A be the identity on $\ell^2$
  • Nonlinear penalization by $J(u) = \sum_k r_k(u_k)$
  • Optimality condition for the components of u:
        $u_k - f_k + \alpha\, r_k'(u_k) = 0$
  • If $r_k$ is smooth and strictly convex, then Taylor
    expansion around zero yields
        $u_k \approx \frac{f_k}{1 + \alpha\, r_k''(0)} \neq 0$ whenever $f_k \neq 0$

11
Singular Energies
  • The example becomes more interesting for a singular
    (nonsmooth) energy
  • Take $r_k(u_k) = |u_k|$
  • Then the optimality condition becomes
        $u_k - f_k + \alpha\, p_k = 0, \qquad p_k \in \partial |u_k|$

12
Singular Energies
  • The result is the well-known soft-thresholding of
    wavelet coefficients (Donoho et al, Chambolle et al):
        $u_k = \mathrm{sign}(f_k)\,\max(|f_k| - \alpha,\, 0)$
  • Yields a sparse signal
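  A minimal numerical sketch of the last two slides (plain
  NumPy; the coefficient values are invented for
  illustration): soft-thresholding returns exact zeros,
  while the quadratic penalty of slide 7 only shrinks
  every component.

```python
import numpy as np

def soft_threshold(f, alpha):
    """Componentwise minimizer of 0.5*(u - f)**2 + alpha*|u|."""
    return np.sign(f) * np.maximum(np.abs(f) - alpha, 0.0)

def quadratic_penalty(f, alpha):
    """Componentwise minimizer of 0.5*(u - f)**2 + 0.5*alpha*u**2."""
    return f / (1.0 + alpha)

f = np.array([3.0, 0.2, -1.5, 0.05, 0.0])   # noisy coefficients
print(soft_threshold(f, 0.5))    # [ 2.5  0.  -1.   0.   0. ]      -> sparse
print(quadratic_penalty(f, 0.5)) # [ 2.  0.133 -1.  0.033  0. ]    -> not sparse
```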

13
Singular Energies
  • Image smoothing: try the nonlinear energy
    $J(u) = \int r(|\nabla u|)\,dx$ for penalization
  • Optimality condition is the nonlinear PDE
        $u - \alpha\,\mathrm{div}\!\left(r'(|\nabla u|)\,\tfrac{\nabla u}{|\nabla u|}\right) = f$
  • If r is strictly convex: usual smoothing behaviour
  • If r is not convex: problem not well-posed
  • Try the singular case at the borderline

14
Total Variation Methods
  • The simplest choice, $r(s) = s$, yields the total
    variation method:
        $J(u) = |u|_{TV} = \int |\nabla u|\,dx$
  • Total variation methods are popular in imaging
    (and inverse problems), since they
  • keep sharp edges
  • eliminate oscillations (noise)
  • create new nice mathematics

15
ROF Model
  • ROF model for denoising:
        $\min_u\ \tfrac{1}{2}\|u - f\|_{L^2}^2 + \lambda\,|u|_{TV}$
  • Rudin-Osher-Fatemi 89/92, Acar-Vogel 93,
    Chambolle-Lions 96, Vogel 95/96,
    Scherzer-Dobson 96, Chavent-Kunisch 98,
    Meyer 01,

16
ROF Model
  • Optimality condition for ROF denoising:
        $u - f + \lambda\, p = 0, \qquad p \in \partial |u|_{TV}$
  • Dual variable p enters!
  • p is a subgradient of the convex functional $|\cdot|_{TV}$
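  • For TV the subgradient has a concrete form (a standard
    characterization, added here for completeness):
    $p \in \partial|u|_{TV}$ iff $p = -\mathrm{div}\,g$ for a
    vector field g with $|g(x)| \le 1$ a.e. and
    $\langle p, u\rangle = |u|_{TV}$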

17
ROF Model
Reconstruction (code by Jinjun Xu).
(Figure panels: clean, noisy, ROF)
18
ROF Model
  • The ROF model denoises cartoon images, or more
    precisely computes the cartoon of an arbitrary image

19
Numerical Differentiation with TV
  • From the Master's thesis of Markus Bachmayr, 2007

20
Singular energies
  • Methods with singular energies offer great
    potential, but still have some shortcomings:
  • difficult to analyze and to obtain error
    estimates
  • systematic errors (clean images not
    reconstructed perfectly)
  • computational challenges
  • some extensions to complicated imaging tasks are
    not well understood (e.g. inpainting)

21
Singular energies
  • General problem
        $\min_u\ \tfrac{1}{2}\|Au - f\|^2 + \lambda J(u)$
  • leads to the optimality condition
        $A^*(Au - f) + \lambda\, p = 0, \qquad p \in \partial J(u)$
  • First of all dual smoothing: the subgradient p is
    in the range of $A^*$

22
Singular energies
  • For smooth and strictly convex energies, the
    subdifferential is a singleton
  • Dual smoothing directly results in a primal
    one!
  • For singular energies, subdifferentials are
    usually multivalued. The consequence is the
    possibility to break the primal smoothing

23
Error Estimation
  • First question for error estimation: estimate the
    difference between u (the minimizer of ROF) and f
    in terms of λ
  • An estimate in the L2 norm is standard, but does
    not yield information about edges
  • An estimate in the BV norm is too ambitious: even an
    arbitrarily small difference in edge location can
    yield a BV norm of order one!

24
Error Estimation
  • We need a better error measure, stronger than
    L2, weaker than BV
  • Possible choice: the Bregman distance (Bregman 67)
        $D_J(u,v) = J(u) - J(v) - \langle \nabla J(v),\, u - v \rangle$
  • A real distance for a strictly convex,
    differentiable functional, but not symmetric
  • Symmetric version:
        $D^{sym}_J(u,v) = \langle \nabla J(u) - \nabla J(v),\, u - v \rangle$

25
Error Estimation
  • Bregman distances reduce to known measures for
    standard energies
  • Example 1: $J(u) = \tfrac{1}{2}\|u\|^2$
  • Subgradient = gradient: $\nabla J(u) = u$
  • Bregman distance becomes
        $D_J(u,v) = \tfrac{1}{2}\|u - v\|^2$

26
Error Estimation
  • Bregman distances reduce to known measures for
    standard energies
  • Example 2: $J(u) = \int u \log u \,dx$
  • Subgradient = gradient: $\nabla J(u) = \log u + 1$
  • Bregman distance becomes the Kullback-Leibler
    divergence (relative entropy)
        $D_J(u,v) = \int \left( u \log \tfrac{u}{v} - u + v \right) dx$
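  A small sanity check of both examples (a quick NumPy
  sketch; the generic bregman helper and the vectors are
  ours, not from the talk):

```python
import numpy as np

def bregman(J, gradJ, u, v):
    """Bregman distance D_J(u, v) = J(u) - J(v) - <gradJ(v), u - v>."""
    return J(u) - J(v) - np.dot(gradJ(v), u - v)

u = np.array([1.0, 2.0, 0.5])
v = np.array([0.8, 1.5, 1.0])

# Example 1: J(u) = 0.5*||u||^2  ->  D_J(u, v) = 0.5*||u - v||^2
J1, g1 = lambda w: 0.5 * np.dot(w, w), lambda w: w
assert np.isclose(bregman(J1, g1, u, v), 0.5 * np.sum((u - v) ** 2))

# Example 2: J(u) = sum(u*log(u))  ->  D_J(u, v) = KL divergence
J2, g2 = lambda w: np.sum(w * np.log(w)), lambda w: np.log(w) + 1.0
assert np.isclose(bregman(J2, g2, u, v), np.sum(u * np.log(u / v) - u + v))
```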

27
Error Estimation
  • Total variation is neither strictly convex nor
    differentiable
  • Define the generalized Bregman distance for each
    subgradient $p \in \partial J(v)$:
        $D^p_J(u,v) = J(u) - J(v) - \langle p,\, u - v \rangle$
  • Symmetric version:
        $D^{sym}_J(u,v) = \langle p_u - p_v,\, u - v \rangle, \qquad p_u \in \partial J(u),\ p_v \in \partial J(v)$
  • Kiwiel 97, Chen-Teboulle 97

28
Error Estimation
  • For energies homogeneous of degree one, we have
        $J(u) = \langle p, u \rangle$ for all $p \in \partial J(u)$
  • The Bregman distance becomes
        $D^q_J(u,v) = J(u) - \langle q, u \rangle, \qquad q \in \partial J(v)$

29
Error Estimation
  • The Bregman distance for singular energies is not a
    strict distance; it can vanish for $u \neq v$
  • In particular, $d_{TV}$ is zero for a contrast change
  • Resmerita-Scherzer 06
  • The Bregman distance is still nonnegative
    (convexity)
  • The Bregman distance can provide information about
    edges

30
Error Estimation
  • Let v be piecewise constant with white
    background and color values on regions
  • Then we obtain subgradients of the form
  • with signed distance function and

31
Error Estimation
  • Bregman distances given by
  • In the limit we obtain for being piecewise
    continuous

32
Error Estimation
  • For an estimate in terms of λ we need a smoothness
    (source) condition on the data: $\exists\, q \in \partial J(f)$
  • Optimality condition for ROF:
        $u - f + \lambda\, p = 0, \qquad p \in \partial J(u)$

33
Error Estimation
  • Subtract q and take the scalar product with u - f:
        $\|u - f\|^2 + \lambda \langle p - q,\, u - f \rangle = -\lambda \langle q,\, u - f \rangle$
  • Estimate for the Bregman distance (mb-Osher 04):
        $D^{sym}(u,f) \le \tfrac{\lambda}{2}\|q\|^2, \qquad \|u - f\| \le \lambda \|q\|$

34
Error Estimation
  • In practice we have to deal with noisy data f (a
    perturbation of some exact data g, $\|f - g\| \le \delta$)
  • Estimate for the Bregman distance, now with
    $q \in \partial J(g)$:
        $D^{sym}(u,g) \le \tfrac{\lambda}{2}\|q\|^2 + \tfrac{\delta^2}{2\lambda}$

35
Error Estimation
  • Optimal choice of the penalization parameter:
        $\lambda \sim \delta$
  • i.e. of the order of the noise level, as the
    following balancing computation shows
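  • Balancing the two terms of the estimate above makes
    this explicit (a one-line computation under the
    scaling used here):
        $\min_\lambda \left( \tfrac{\lambda}{2}\|q\|^2 + \tfrac{\delta^2}{2\lambda} \right) \;\Rightarrow\; \lambda^* = \frac{\delta}{\|q\|}, \qquad D^{sym}(u,g) \le \delta\,\|q\| = O(\delta)$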

36
Error Estimation
  • Direct extension to deconvolution / linear inverse
    problems, $\min_u \tfrac{1}{2}\|Au - f\|^2 + \lambda J(u)$,
  • under the standard source condition
    $q = A^* w \in \partial J(g)$
  • mb-Osher 04
  • Extension: stronger estimates under stronger
    conditions, Resmerita 05
  • Nonlinear inverse problems: Resmerita-Scherzer 06

37
Error Estimation Future tasks
  • Extension to other fitting functionals (relative
    entropy, log-likelihood functionals for different
    noise models)
  • Extension to anisotropic TV (interpretation of
    subgradients)
  • Extension to geometric problems (segmentation by
    Chan-Vese, Mumford-Shah): use the exact relaxation
    in BV with bound constraints, Chan-Esedoglu-Nikolova
    04

38
Discretization
  • Natural choice: primal discretization with
    piecewise constant functions on a grid
  • Problem 1: numerical analysis (characterization
    of discrete subgradients)
  • Problem 2: the discrete problems are the same for
    any anisotropic version of the total variation

39
Discretization
  • In multiple dimensions, nonconvergence of the
    primal discretization for the isotropic TV (p = 2)
    can be shown
  • Convergence of the anisotropic TV (p = 1) on
    rectangular aligned grids:
  • Fitzpatrick-Keeling 1997

40
Primal-Dual Discretization
  • Alternative: perform a primal-dual discretization
    of the optimality system (a variational inequality
    for the dual variable p) with the convex set
        $K = \{\, g : |g(x)| \le 1 \text{ a.e.} \,\}$

41
Primal-Dual Discretization
  • Discretization: replace K by a discretized convex
    set $K_h$, with appropriate elements for the dual
    variable (piecewise linear in 1D, Raviart-Thomas in
    multi-D)

42
Primal / Primal-Dual Discretization
  • In 1D, primal, primal-dual, and dual
    discretizations are equivalent
  • Error estimates for the Bregman distance follow by
    analogous techniques
  • Note that only the natural condition $f_h \to f$
    is needed to show convergence of the discrete
    solutions

43
Primal / Primal-Dual Discretization
  • In multi-D: similar estimates, with additional work
    since the projection of a subgradient is not a
    discrete subgradient
  • The primal-dual discretization is equivalent to the
    discretized dual minimization (Chambolle 03,
    Kunisch-Hintermüller 04); this can be used for
    existence of discrete solutions and stability of p
  • mb 07

44
Cartesian Grids
  • For most imaging applications, Cartesian grids
    are used. The primal-dual discretization can be
    reinterpreted as a finite difference scheme in
    this setup.
  • The value of the image intensity corresponds to the
    color in a pixel of width h around the grid point.
  • Raviart-Thomas elements are particularly easy on
    Cartesian grids: the first component is piecewise
    linear in x and piecewise constant in y, z, etc.
  • Leads to a simple finite difference scheme on a
    staggered grid; a sketch follows below
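  The discretized dual minimization mentioned above can be
  made concrete with Chambolle's projection algorithm
  (Chambolle 03/04). A compact NumPy sketch on a regular
  pixel grid; the test image, the step size tau, and the
  iteration count are our own choices for illustration:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions (2D)."""
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[0, :], dx[1:-1, :], dx[-1, :] = px[0, :], px[1:-1, :] - px[:-2, :], -px[-2, :]
    dy[:, 0], dy[:, 1:-1], dy[:, -1] = py[:, 0], py[:, 1:-1] - py[:, :-2], -py[:, -2]
    return dx + dy

def rof_chambolle(f, lam, tau=0.125, n_iter=300):
    """ROF denoising min_u 0.5*||u - f||^2 + lam*TV(u) via the dual problem.

    Iterates on the dual variable p with |p| <= 1 (tau <= 1/8 guarantees
    convergence); the primal solution is u = f - lam*div(p).
    """
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        scale = 1.0 + tau * np.sqrt(gx**2 + gy**2)   # keeps |p| <= 1
        px, py = (px + tau * gx) / scale, (py + tau * gy) / scale
    return f - lam * div(px, py)

# usage: denoise a noisy cartoon (checkerboard) image
f = np.kron([[0.0, 1.0], [1.0, 0.0]], np.ones((32, 32))) + 0.1 * np.random.randn(64, 64)
u = rof_chambolle(f, lam=0.2)
```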

45
Iterative Refinement ISS
  • ROF minimization has a systematic error: the total
    variation of the reconstruction is smaller than the
    total variation of the clean image. Image features
    are left in the residual f - u.
    (Figure: g clean, f noisy, u ROF, f - u residual)

46
Iterative Refinement ISS
  • Idea: add the residual (noise) back to the image to
    pronounce the features that were decreased too much,
    then do ROF again. Iterative procedure:
        $u_{k+1} = \arg\min_u \left( \tfrac{1}{2}\|u - f - v_k\|^2 + \lambda J(u) \right), \qquad v_{k+1} = v_k + f - u_{k+1}, \quad v_0 = 0$
  • Osher-mb-Goldfarb-Xu-Yin 04

47
Iterative Refinement ISS
  • Improves reconstructions significantly

48
Iterative Refinement ISS
49
Iterative Refinement ISS
  • Simple observation from the optimality condition:
        $v_k = \lambda\, p_k$ with $p_k \in \partial J(u_k)$
  • Consequently, iterative refinement is equivalent to
    the Bregman iteration
        $u_{k+1} = \arg\min_u \left( \tfrac{1}{2}\|u - f\|^2 + \lambda\, D^{p_k}_J(u, u_k) \right)$

50
Iterative Refinement ISS
  • The choice of the parameter λ is less important; it
    can be kept small (oversmoothing). The regularizing
    effect comes from appropriate stopping.
  • Quantitative stopping rules are available, or 'stop
    when you are happy' (S.O.)
  • The limit λ → 0 can be studied. It yields a gradient
    flow for the dual variable (inverse scale space):
        $\partial_t p(t) = f - u(t), \qquad p(t) \in \partial J(u(t)), \quad p(0) = 0$
  • mb-Gilboa-Osher-Xu 06, mb-Frick-Osher-Scherzer 06

51
Iterative Refinement ISS
  • Non-quadratic fidelities are possible; some caution
    is needed for L1 fidelity
  • He-mb-Osher 05, mb-Frick-Osher-Scherzer 06
  • Error estimation in the Bregman distance:
    mb-He-Resmerita 07

52
Iterative Refinement
  • MRI data, Siemens Magnetom Avanto 1.5T scanner:
    He, Chang, Osher, Fang, Speier 06
  • Penalization: TV and wavelet

53
Iterative Refinement
  • MRI data, Siemens Magnetom Avanto 1.5T scanner:
    He, Chang, Osher, Fang, Speier 06

54
Iterative Refinement
  • MRI data, Siemens Magnetom Avanto 1.5T scanner:
    He, Chang, Osher, Fang, Speier 06

55
Surface Smoothing
  • Smoothing of surfaces obtained as level sets
  • 3D Ultrasound, Kretz / GE Med.

56
Inverse Scale Space
57
Iterative Refinement ISS
  • Application to other regularization techniques,
    e.g. wavelet thresholding, is straightforward
  • Starting from soft shrinkage, iterated refinement
    yields firm shrinkage; the inverse scale space
    becomes hard shrinkage (Osher-Xu 06); see the
    sketch below
  • The Bregman distance is the natural sparsity
    measure: the source condition just requires a sparse
    signal, and the number of nonzero components is the
    smoothness measure in the error estimates
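  A minimal sketch of this iterated refinement for soft
  shrinkage (componentwise, A = Id; signal and parameter
  values are invented for illustration). Each Bregman step
  adds the residual back and shrinks again, so components
  above the threshold regain their full height (the
  firm-shrinkage effect); iterating forever would
  eventually return the noisy data itself, which is why
  the stopping index acts as the regularization parameter:

```python
import numpy as np

def soft_threshold(f, alpha):
    return np.sign(f) * np.maximum(np.abs(f) - alpha, 0.0)

def bregman_shrinkage(f, alpha, n_iter=5):
    """Iterated refinement: shrink, add the residual back, shrink again."""
    v = np.zeros_like(f)                  # accumulated residual
    u = np.zeros_like(f)
    for _ in range(n_iter):
        u = soft_threshold(f + v, alpha)  # shrinkage step on refined data
        v += f - u                        # add back what was removed
    return u

f = np.array([3.0, 0.2, -1.5, 0.05, 0.0])
print(soft_threshold(f, 0.5))     # [ 2.5  0.  -1.   0.   0. ]  systematic bias
print(bregman_shrinkage(f, 0.5))  # [ 3.   0.2 -1.5  0.   0. ]  bias removed, still sparse
```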

58
Download and Contact
  • Papers and talks:
    www.math.uni-muenster.de/u/burger
  • e-mail: martin.burger_at_uni-muenster.de