Title: Coarse-to-Fine Image Reconstruction
1. Coarse-to-Fine Image Reconstruction
- Rebecca Willett
- In collaboration with
- Robert Nowak and Rui Castro
2. Poisson data: 14 photons/pixel, MSE 0.0169
3. Iterative reconstruction (Willett and Nowak, IEEE-TMI '03)
E-step: compute the conditional expectation of a new noisy image estimate given the data and the current image estimate
Traditional Shepp-Vardi M-step: maximum likelihood estimation
Improved M-step: complexity-regularized multiscale Poisson denoising (a sketch of the iteration follows below)
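A minimal sketch of this kind of EM iteration, assuming a known linear forward operator A and counts y ~ Poisson(Ax). The update is the classical Shepp-Vardi (Richardson-Lucy) multiplicative step; the optional denoise callable is a hypothetical stand-in for the complexity-regularized multiscale Poisson denoiser of the improved M-step, not the actual algorithm from the paper.

```python
import numpy as np

def em_poisson_reconstruction(y, A, n_iter=50, denoise=None):
    """EM reconstruction for counts y ~ Poisson(A @ x), x >= 0.

    Each iteration is the Shepp-Vardi (Richardson-Lucy) multiplicative
    ML update; if `denoise` is supplied it is applied to the intermediate
    estimate, standing in for a regularized (denoising) M-step.
    """
    m, n = A.shape
    x = np.full(n, y.sum() / A.sum())          # flat, flux-matched initialization
    sens = A.sum(axis=0)                        # sensitivity image, A^T 1
    for _ in range(n_iter):
        yhat = A @ x                            # expected counts under current x
        ratio = np.where(yhat > 0, y / yhat, 0.0)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)   # ML (Shepp-Vardi) update
        if denoise is not None:                 # "improved M-step": regularize
            x = np.maximum(denoise(x), 0.0)
    return x

# toy usage: 32-pixel signal, random nonnegative system matrix
rng = np.random.default_rng(0)
A = rng.random((64, 32))
x_true = np.abs(rng.normal(5.0, 2.0, 32))
y = rng.poisson(A @ x_true)
x_hat = em_poisson_reconstruction(y, A, n_iter=100)
```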
4. Wedgelet-based tomography
Panels: Shepp-Logan phantom; MLE; wedgelet-based reconstruction; Jeff Fessler's PWLS
5. Tomography
6. A simple image model
A piecewise constant 2-D function with smooth edges
7. Measurement model
We observe only n noisy pixels (one possible observation model is sketched below)
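One standard way to write such a sampling model (a sketch only; this slide does not spell out the exact noise model) is to observe the function on a uniform sqrt(n) x sqrt(n) pixel grid:

```latex
y_{i,j} \;=\; f\!\left(\tfrac{i}{\sqrt{n}},\, \tfrac{j}{\sqrt{n}}\right) + \varepsilon_{i,j},
\qquad i,j = 1,\dots,\sqrt{n},
```

with the noise terms independent and zero-mean; for photon-limited data the analogous model is y_{i,j} ~ Poisson(f(i/sqrt(n), j/sqrt(n))).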
8. Image space
9. Kolmogorov metric entropy
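For reference, the standard definition (not specific to this talk): let N(epsilon, F, d) be the smallest number of epsilon-balls, in a metric d, needed to cover a function class F. The Kolmogorov metric entropy is

```latex
H(\varepsilon) \;=\; \log N(\varepsilon, \mathcal{F}, d).
```

Its growth as epsilon goes to 0 quantifies the richness of the image class and drives the minimax rates discussed on the following slides; Dudley '74 supplies such entropy bounds for classes of sets with differentiable boundaries.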
10. Dudley '74
11. Minimax lower bound
Key terms: approximation error and estimation error (see the decomposition below)
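Schematically (a sketch with constants and technical conditions omitted), the risk of an estimator built over a model class G splits into the two terms named on the slide, and the minimax lower bound says no estimator can beat the best such balance over the whole image class:

```latex
\mathbb{E}\,\|\hat f_n - f\|^2
\;\lesssim\;
\underbrace{\inf_{g \in \mathcal{G}} \|g - f\|^2}_{\text{approx. err.}}
\;+\;
\underbrace{\frac{\mathrm{complexity}(\mathcal{G})}{n}}_{\text{estimation err.}}
```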
12. Adaptively pruned partitions
13. Tree pruning estimation
14. Partitions and estimators
15. Complexity regularization and the bias-variance trade-off
Complexity-penalized estimator: minimize the data-fit cost plus a penalty over the set of all possible tree prunings (see the sketch below)
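A minimal sketch of the search over all tree prunings via bottom-up dynamic programming, assuming a squared-error fidelity term and a penalty proportional to the number of leaves; the talk's estimator uses a Poisson likelihood and multiscale penalties, but the optimization structure is the same. The function prune_rdp and its cost model are illustrative, not the paper's code.

```python
import numpy as np

def prune_rdp(img, penalty):
    """Complexity-penalized pruning of a recursive dyadic partition.

    For every dyadic square, compare the cost of keeping it as a single
    leaf (squared error of a constant fit plus one penalty unit) against
    the total cost of its four optimally pruned children.  Dynamic
    programming makes this search over all prunings exact.
    Assumes a square image whose side length is a power of two.
    Returns (optimal cost, list of leaf squares as (row, col, size)).
    """
    def cost(r, c, size):
        block = img[r:r + size, c:c + size]
        leaf_cost = np.sum((block - block.mean()) ** 2) + penalty
        if size == 1:
            return leaf_cost, [(r, c, size)]
        h = size // 2
        split_cost, split_leaves = 0.0, []
        for dr in (0, h):
            for dc in (0, h):
                sub_cost, sub_leaves = cost(r + dr, c + dc, h)
                split_cost += sub_cost
                split_leaves += sub_leaves
        if leaf_cost <= split_cost:
            return leaf_cost, [(r, c, size)]
        return split_cost, split_leaves

    return cost(0, 0, img.shape[0])
```

Because each dyadic square is visited exactly once, the full search costs on the order of n log n operations even though the number of candidate prunings is exponential.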
16. The Li-Barron bound (Li and Barron '00; Nowak and Kolaczyk '01)
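Schematically (constants omitted; see Li and Barron '00 and Nowak and Kolaczyk '01 for the precise statements), the bound controls the risk of a penalized likelihood estimator over a countable model class Gamma by an index of resolvability:

```latex
\mathbb{E}\!\left[\, d^2\!\big(f, \hat f_n\big) \right]
\;\le\;
C \,\min_{g \in \Gamma}
\left\{\, D(f \,\|\, g) \;+\; \frac{\mathrm{pen}(g)}{n} \right\},
\qquad \text{provided} \quad \sum_{g \in \Gamma} e^{-\mathrm{pen}(g)} \le 1,
```

where d is a Hellinger-type distance and D the Kullback-Leibler divergence.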
17. The Kraft inequality
[Figure: binary tree with 0/1 codeword bits labeling its branches, illustrating prefix codes for tree prunings and the Kraft inequality]
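The role of the tree figure: if every pruning T is assigned a binary prefix codeword of length L(T) (one bit per branch, as in the 0/1 labels above), then the Kraft inequality

```latex
\sum_{T} 2^{-L(T)} \;\le\; 1
```

holds, so choosing the penalty proportional to the codeword length, pen(T) proportional to L(T), satisfies the summability condition required by the Li-Barron bound.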
18. Estimating smooth contours: Haar
Decorate each partition set with a constant and measure the squared approximation error
19. Approximating smooth contours: wedgelets (Donoho '99)
20. Approximating smoother contours (Donoho '99)
Panels: original image and Haar wavelet partition; term counts: > 850 terms vs. < 370 terms
21. Estimating smoother contours: wedgelets
Use wedges and decorate each partition set with a constant; measure the squared approximation error (a wedge-fitting sketch follows below)
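A brute-force sketch of fitting a single wedgelet to one dyadic square: every pair of boundary pixels defines a candidate edge, each side of the edge is decorated with its mean, and the smallest squared approximation error is kept. The function name and the exhaustive endpoint search are illustrative choices, not the optimized implementation used in the talk.

```python
import numpy as np
from itertools import combinations

def best_wedge_fit(block):
    """Best wedgelet fit to a square block: two constants separated by a
    straight edge whose endpoints lie on the block's boundary.

    Brute-forces all pairs of boundary pixels as edge endpoints and returns
    the squared approximation error of the best split.
    """
    s = block.shape[0]
    # candidate edge endpoints: all pixels on the block's boundary
    boundary = [(r, c) for r in range(s) for c in range(s)
                if r in (0, s - 1) or c in (0, s - 1)]
    rr, cc = np.meshgrid(np.arange(s), np.arange(s), indexing="ij")
    best_err = np.sum((block - block.mean()) ** 2)   # fall back to a single constant
    for (r0, c0), (r1, c1) in combinations(boundary, 2):
        # which side of the line through the two endpoints each pixel lies on
        side = (rr - r0) * (c1 - c0) - (cc - c0) * (r1 - r0) > 0
        if side.all() or (~side).all():
            continue                                  # degenerate split
        err = sum(np.sum((block[m] - block[m].mean()) ** 2) for m in (side, ~side))
        if err < best_err:
            best_err = err
    return best_err
```

Comparing this error with the single-constant (Haar) error shows why wedges pay off only in squares that actually contain an edge.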
22. The problem with estimating smooth contours
Haar-based estimation
Wedgelet estimation
23. Computational implications
24. A solution: coarse-to-fine model selection
The space of all signal models is very large, so the two-step process first searches over a coarse model space, from which one coarse model is selected
25. Coarse-to-fine model selection
The second step searches over a small subset of models
26. C2F wedgelets: two-stage optimization
Start with a uniform partition
Stage 1: adapt the partition to the data by pruning
Stage 2: apply wedges only in the small boxes that remain (see the sketch below)
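A sketch of the two-stage optimization under the same illustrative cost model, reusing best_wedge_fit from the sketch above; the function and parameter names (c2f_wedgelets, coarse_size) are hypothetical. Stage 1 prunes with cheap constant fits and never splits below the coarse scale; stage 2 tries the expensive wedge fits only in the small squares that survive the pruning.

```python
import numpy as np

def c2f_wedgelets(img, penalty, coarse_size=8):
    """Two-stage coarse-to-fine wedgelet estimation (sketch).

    Stage 1 (preview): prune a recursive dyadic partition using constant
    fits only, stopping the recursion at `coarse_size`-pixel squares.
    Stage 2: only the small squares that survive the pruning (i.e. those
    that plausibly contain boundaries) are examined with wedge fits.
    Returns (total cost, list of leaves as (row, col, size, kind)).
    """
    def stage1(r, c, size):
        block = img[r:r + size, c:c + size]
        leaf_cost = np.sum((block - block.mean()) ** 2) + penalty
        if size <= coarse_size:
            return leaf_cost, [(r, c, size)]
        h = size // 2
        split_cost, split_leaves = 0.0, []
        for dr in (0, h):
            for dc in (0, h):
                sc, sl = stage1(r + dr, c + dc, h)
                split_cost += sc
                split_leaves += sl
        if leaf_cost <= split_cost:
            return leaf_cost, [(r, c, size)]
        return split_cost, split_leaves

    _, coarse_leaves = stage1(0, 0, img.shape[0])

    total_cost, leaves = 0.0, []
    for r, c, size in coarse_leaves:
        block = img[r:r + size, c:c + size]
        const_err = np.sum((block - block.mean()) ** 2)
        if size > coarse_size:
            # large, homogeneous region kept by the pruning: a constant suffices
            total_cost += const_err + penalty
            leaves.append((r, c, size, "constant"))
        else:
            # small surviving square: likely contains an edge, try a wedge
            wedge_err = best_wedge_fit(block)
            total_cost += min(const_err, wedge_err) + penalty
            kind = "wedge" if wedge_err < const_err else "constant"
            leaves.append((r, c, size, kind))
    return total_cost, leaves
```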
27. C2F wedgelets: two-stage optimization
28. Error analysis of the two-stage approach (Castro, Willett, and Nowak, ICASSP '04)
29. Controlling variance in the preview stage
- Start with a coarse partition in the first stage
- This lowers the variance of the coarse-resolution estimate
- With high probability, the pruned coarse partition is close to the optimal coarse partition
- Unpruned boxes at this stage indicate edges or boundaries
30. Controlling bias in the preview stage
- Bias becomes large if a square containing a boundary fragment is pruned in the first stage (this may happen if a boundary is close to the side of the square)
- Solution:
  - Compute TWO coarse partitions: one normal, and one shifted
  - Refine any region left unpruned in either or both shifts (a toy sketch follows below)
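A toy sketch of the two-shift idea: a coarse cell is flagged as busy when its within-cell variation exceeds a threshold (standing in for the stage-1 decision not to prune), and the flags from the normal grid and a half-cell-shifted grid are unioned, so a boundary hugging a cell edge of one grid is still caught by the other. The function name and the ad-hoc threshold are illustrative.

```python
import numpy as np

def flag_boundary_cells(img, cell=8, threshold=None):
    """Flag pixels falling in a 'busy' coarse cell of either partition:
    the standard coarse grid or a copy shifted by half a cell.

    A cell is flagged when its within-cell sum of squared deviations
    exceeds `threshold`; flagged pixels are the ones to refine in stage 2.
    """
    n = img.shape[0]
    if threshold is None:
        threshold = img.var() * cell * cell / 4.0   # ad-hoc default for the sketch
    flags = np.zeros_like(img, dtype=bool)
    for shift in (0, cell // 2):                    # normal grid, then shifted grid
        for r in range(-shift, n, cell):
            for c in range(-shift, n, cell):
                r0, c0 = max(r, 0), max(c, 0)
                block = img[r0:r + cell, c0:c + cell]
                if block.size and np.sum((block - block.mean()) ** 2) > threshold:
                    flags[r0:r + cell, c0:c + cell] = True
    return flags
```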
31. Computational implications
32. Main result in action
Noisy data: MSE 0.0052
Stage 1 result: MSE 0.1214
Stage 2 result: O(n^{7/6}), MSE 0.00046
33. C2F limitations: the ribbon
34. C2F and other greedy methods
- Matching pursuit
- Boosting
35. More general image models
36. Platelet Approximation Theory
Image model: twice continuously differentiable surfaces separated by twice continuously differentiable boundaries
- m-term approximation error decay rates:
  - Fourier: O(m^{-1/2})
  - Wavelets: O(m^{-1})
  - Wedgelets: O(m^{-1})
  - Platelets: O(m^{-2})
  - Curvelets: O(m^{-2})
37. Confocal microscopy simulation
Panels: noisy image; Haar estimate; platelet estimate
38. C2F limitations: complex images
- Images are edges: many images consist almost entirely of edges
- The C2F model is still appropriate for many applications:
  - nuclear medicine
  - feature classification
  - temperature field estimation
39. C2F in multiple dimensions
40. Final remarks and ongoing work
- Careful greedy methods can perform as well as exhaustive searches, both in theory and in practice
- Coarse-to-fine estimation dramatically reduces computational complexity
- Similar ideas can be used in other scenarios:
  - reduce the amount of data required (e.g., active learning and adaptive sampling)
  - reduce the number of bits required to encode model locations in compression schemes