Deconvolution, Regularization - PowerPoint PPT Presentation

Transcript and Presenter's Notes

1
Deconvolution, Regularization Maximum Entropy
Methods
  • Mario Juric (mjuric_at_astro.princeton.edu)

April 27th 2005, AST Observational Seminar,
Princeton University
2
A two-figure summary
Cygnus A radio source (6cm, VLA)
3
(Figure: the atmosphere, detector, and telescope, each distorting the incoming signal)
4
Sources of signal pollution
  • Telescope and detector PSF
  • Noise
  • Incomplete coverage (e.g., interferometry,
    gamma-ray astronomy)

5
Mathematical model (linear)
(Equation: Output = PSF ⊗ Signal (Input) + Noise)
6
Recovering the signal
?
7
Deconvolution
  • Inversion?
  • Ill-posed problem
  • Infinite number of solutions satisfy the noise
    requirement
  • Naïve deconvolution amplifies noise (with
    potentially catastrophic consequences)

8
Regularization
  • A problem is well-posed (Hadamard, 1902) when it
    is uniquely solvable and the solution depends in
    a continuous way on the data.
  • If the solution depends in a discontinuous way on
    the data, then small errors, whether rounding-off
    errors, measurement errors, or perturbations
    caused by noise, can create large deviations.
  • Most measurement problems are inherently
    ill-posed.
  • Regularization is the process of introducing
    additional information about the problem in order
    to obtain a well-behaved inverse.

http://www.math.uu.se/kiselman/ipp.html
9
Wiener filter
  • Assumption: we know the signal and noise power
    spectra, Ps(k) and Pn(k)
  • Acknowledge that some of the signal (in k-space)
    has been washed out by noise and suppress those
    frequencies

http://osl.iu.edu/tveldhui/papers/MAScThesis/node15.html
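The filter described above can be sketched in a 1-D toy version (the function name is mine; Ps and Pn are the assumed signal and noise power spectra over k):

```python
import numpy as np

def wiener_deconvolve(data, psf, Ps, Pn):
    """Wiener-filter deconvolution in Fourier space.

    data: observed 1-D signal (true signal convolved with psf, plus noise)
    psf:  point-spread function, same length as data
    Ps:   assumed signal power spectrum (array over k)
    Pn:   assumed noise power spectrum (array over k)
    """
    H = np.fft.fft(psf)          # transfer function of the blur
    D = np.fft.fft(data)
    # Suppress frequencies where noise dominates the (blurred) signal
    W = np.conj(H) * Ps / (np.abs(H) ** 2 * Ps + Pn)
    return np.real(np.fft.ifft(W * D))
```

Where Pn(k) is large relative to |H(k)|² Ps(k), W(k) goes to zero: those frequencies are acknowledged as washed out and are not amplified.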
10
CLEAN algorithm
  • Assumption: an image is generated by a set of
    point sources
  • Introduced by Högbom (1974) for use in radio
    astronomy
  • Iterative procedure where at each step the
    strongest peak in the image is identified and
    subtracted, until no more strong peaks are left

http://www.cv.nrao.edu/abridle/deconvol/node7.html
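The iteration described above can be sketched in a 1-D toy version (Högbom CLEAN on 2-D radio maps works the same way; the gain and threshold values here are illustrative):

```python
import numpy as np

def clean(dirty, psf, gain=0.1, threshold=0.01, max_iter=1000):
    """Hogbom CLEAN: repeatedly subtract a scaled, shifted copy of the
    PSF at the brightest pixel until no peak exceeds the threshold."""
    residual = dirty.astype(float).copy()
    model = np.zeros_like(residual)     # recovered point-source components
    center = np.argmax(psf)             # position of the PSF peak
    for _ in range(max_iter):
        peak = np.argmax(residual)
        if residual[peak] < threshold:
            break
        flux = gain * residual[peak]
        model[peak] += flux             # record a point-source component
        # subtract the PSF centred on the detected peak
        residual -= flux * np.roll(psf, peak - center)
    return model, residual
```

The small gain (only a fraction of each peak is removed per step) keeps the subtraction stable in the presence of overlapping sources.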
11
Regularization procedures
  • Common feature: incorporate prior knowledge (or
    assumptions) about the statistical properties of
    the image
  • Intensity always has to be positive
  • Expected form of the signal
  • E.g., generated by point sources (CLEAN algorithm)
  • Signal smoothness
  • Expected signal and noise power spectrum (Wiener
    filtering, Richardson-Lucy algorithm)
  • All of these algorithms require a certain dose of
    intuition about what we expect from the signal.
    Can we somehow formalize this process (and make
    it as objective as possible)?
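One of the methods named above, the Richardson-Lucy algorithm, fits a short sketch (1-D with circular convolution; the flat starting guess and iteration count are illustrative choices):

```python
import numpy as np

def richardson_lucy(data, psf, n_iter=50):
    """Richardson-Lucy deconvolution (1-D, circular convolution via FFT).
    The multiplicative update preserves positivity; the iteration is the
    maximum-likelihood estimator under Poisson noise."""
    psf = psf / psf.sum()
    H = np.fft.fft(psf)
    est = np.full(len(data), float(np.mean(data)))   # flat first guess
    for _ in range(n_iter):
        # forward model: current estimate blurred by the PSF
        conv = np.real(np.fft.ifft(np.fft.fft(est) * H))
        ratio = data / np.maximum(conv, 1e-12)
        # correlate the ratio with the PSF (adjoint of the blur)
        est = est * np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(H)))
    return est
```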

12
Restatement of the problem
  • Testable information
  • There is a space of all possible signals (images
    of size w x h)
  • We have a priori constraints
  • E.g., the positivity constraint
  • We have our measured data
  • We have constraints on how wrong our data can be
    (chi-squared)
  • The goal
  • Out of all possible signals consistent with the
    available information, select the one which
    incorporates all that is known (testable
    information), but is maximally noncommittal about
    what is not known.

Which one?
Gull (1988)
13
Toy interferometry example
  • Out of all possible signals consistent with the
    available information, select the one which
    incorporates all that is known (testable
    information), but is maximally noncommittal about
    what is not known.
  • How do we avoid injecting unnecessary information
    into the system while doing the reconstruction?

14
Entropy
  • Information theory (Shannon)
  • Average Shannon information content of an outcome
    (also called uncertainty)
  • H is proportional to our lack of knowledge
  • Statistical mechanics (Boltzmann)
  • A measure of the number of microscopic ways that
    a given macroscopic state can be realized. For W
    alternatives, each with probability p, entropy is
    defined by (1), and by (2) for a general case of
    alternatives with unequal probabilities.
  • exp(S) is proportional to the probability of
    getting that outcome

See, e.g., MacKay (2003)
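The slide's equations did not survive the transcript; for W equally likely alternatives the entropy is S = ln W, and for unequal probabilities S = -Σ pi ln pi. A minimal sketch of the general form (the function name is mine):

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = -sum_i p_i ln p_i, with 0 ln 0 taken as 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                      # zero-probability terms contribute nothing
    return -np.sum(nz * np.log(nz))
```

A uniform distribution over W alternatives gives H = ln W, the maximum possible; any concentration of probability lowers H, reflecting reduced uncertainty.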
15
Entropy maximization subject to constraints
  • For images, identify probability with normalized
    intensity (1). Our image has now been bootstrapped
    into a probability density function (giving the
    probability pi of the next photon arriving at
    pixel i)
  • Given that bootstrap estimate, search for a new
    PDF pi which maximizes the entropy S(I), subject
    to testable information constraints C(I).
  • This new PDF is the MAXENT-reconstructed image.

(Equation (1): pi = Ii / Σj Ij, intensities normalized to unit sum; the constrained maximization is carried out with a Lagrange multiplier)
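As a toy illustration of entropy maximization under a chi-squared constraint, the sketch below uses scipy's general-purpose SLSQP optimizer; real MAXENT codes (e.g. Skilling & Bryan 1984) use far more efficient dedicated schemes, and the identity response, function name, and flux constraint here are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def maxent_image(data, sigma, chi2_target):
    """Maximize image entropy S = -sum p_i ln p_i subject to
    chi^2(I) <= chi2_target and flux conservation (sum I = sum data).
    Identity response assumed: each pixel is measured directly."""
    total = data.sum()
    def neg_entropy(I):
        p = np.clip(I, 1e-12, None) / total
        return np.sum(p * np.log(p))               # -S
    cons = [
        # testable information: the data constraint chi^2 <= target
        {'type': 'ineq',
         'fun': lambda I: chi2_target - np.sum(((I - data) / sigma) ** 2)},
        # flux conservation
        {'type': 'eq', 'fun': lambda I: I.sum() - total},
    ]
    x0 = np.full(len(data), total / len(data))     # start from uniform
    res = minimize(neg_entropy, x0, method='SLSQP',
                   bounds=[(1e-12, None)] * len(data), constraints=cons)
    return res.x
```

With data [4, 0] and a loose constraint the optimizer returns the uniform image [2, 2]; tightening the constraint pulls the solution back toward the data, exactly the trade-off the Lagrange multiplier controls.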
16
(Figure: two panels, A and B, comparing reconstructed fluxes f1, f2, f3)
Skilling & Bryan (1984)
17
Example reconstructions with varying amounts of
noise
Data
Reconstruction
Steinbach (1997), http://cmm.info.nih.gov/maxent/
18
General properties of a MAXENT reconstructed image
(Figure panels: Model, Dirty map, MAXENT, CLEAN)
  • Peaks resolved (superresolution)
  • Ripples removed
  • Reduced resolution at lower peaks
  • Spurious peaks near the absorption feature

Narayan & Nityananda (1986)
19
Form of entropy function
  • Specifying a different entropy function is
    equivalent to specifying a different measure of
    information
  • The standard Shannon entropy choice is motivated
    by counting the number of ways in which the image
    could have arisen

20
Popular choices for S(I)
  • Burg (1978)
  • Frieden (1978)
  • Gull & Skilling (1991)

Starck et al. (2002)
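The formulas themselves did not survive the transcript; following the review by Starck et al. (2002), the three entropy choices listed above are commonly written as follows (with $m_i$ a default model image; the subscript labels are mine):

```latex
S_{\mathrm{Burg}}(I)    = \sum_i \ln I_i \\
S_{\mathrm{Frieden}}(I) = -\sum_i I_i \ln I_i \\
S_{\mathrm{Gull}}(I)    = \sum_i \left[ I_i - m_i - I_i \ln\left( I_i / m_i \right) \right]
```

The Gull & Skilling form reduces to the Frieden (Shannon) form up to constants when the model $m_i$ is flat, but also vanishes (rather than diverging) when the image equals the model.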
21
Things to note
  • MAXENT is essentially a way of generating a PDF
    on a hypothesis space which, given a measure of
    entropy, is guaranteed to incorporate only
    testable information
  • MAXENT cannot be derived from Bayes' theorem
    (despite what you may find in the literature). It
    is fundamentally different, as Bayes' theorem
    concerns itself with inferring the a posteriori
    probability once the likelihood and a priori
    probability are known, while MAXENT is a guiding
    principle for constructing the a priori PDF.
  • For our application (deconvolution), we identify
    the PDF with an image (where individual
    probabilities are proportional to pixel
    intensities)
  • As probabilities are positive by definition,
    positivity of intensities is automatic
  • MAXENT produces the image with the fewest
    features (least information) that is consistent
    with the data and known constraints (testable
    information). Another way of stating this is that
    MAXENT produces the most uniform image consistent
    with the data.

Jaynes (1988), Jaynes (1995), Gull (1988)
22
MAXENT Success stories in astronomy
23
Dirty map
Cygnus A radio source (6cm, VLA)
MAXENT reconstruction
Pearson & Readhead (1984)
24
MAXENT reconstructions of simulated triple
sources with a) 250 events (gamma-ray photons
registered at the detector), b) 500 events, and
c) 1000 events
Skilling, Strong & Bennett (1979)
25
(Figure panels: original image; MAXENT reconstruction)
The Jet of M87: original photograph vs. the
MAXENT reconstruction. Note the smoothing of the
noise and the increase in the level of detail in
the area of the jet (superresolution).
Bryan & Skilling (1980)
26
  • Pioneer 10 images of Ganymede
  • (Figure panels: original image; Wiener
    filtering; MAXENT reconstruction)

Frieden & Swindell (1976)
27
Summary
  • Most deconvolution problems we meet are
    inherently ill-posed (they have an infinite
    number of solutions), which makes direct
    inversion impossible
  • By adding additional constraints we regularize
    the problem (select one solution)
  • Maximum entropy methods, when applied to image
    reconstruction, select the solution which
    produces an image having the fewest features
    (least information) that is consistent with the
    data. Another way of stating this is that MAXENT
    produces the most uniform image consistent with
    the data.
  • On a deeper, information-theoretical level,
    MAXENT methods are a consistent and systematic
    way to build priors on a hypothesis space given
    some testable information (constraints). It is a
    general way to build priors.

28
Deconvolution Trivia
  • While most people can see with a resolution of
    1 arcminute, the image on our retina is blurred
    through a PSF of width as large as 5 arcminutes
    due to various effects (the largest being
    chromatic aberration).
  • And while we still struggle to find optimal
    deconvolution algorithms, the brain happily
    performs the procedure on an 8500x5400 (43 Mpix)
    image, a few times per second, 17 hours a day,
    365 days a year.

MacKay (2003); Tidwell (1995),
http://www.hitl.washington.edu/publications/tidwell/ch3.html
29
Deconvolution of motion blurring
Maximum Entropy Data Consultants,
http://www.maxent.co.uk/
30
References
  • Jaynes, E.T., Probability Theory: The Logic of
    Science (1995, Cambridge University Press)
  • Jaynes, E.T., "The Relation of Bayesian and
    Maximum Entropy Methods" (1988), in
    Maximum-Entropy and Bayesian Methods in Science
    and Engineering (Vol. 1), 25-20
  • Gull, S.F., "Bayesian Inductive Inference and
    Maximum Entropy", in Maximum-Entropy and
    Bayesian Methods in Science and Engineering
    (Vol. 1), 53-74
  • Narayan, R. and Nityananda, R., "Maximum Entropy
    Image Restoration in Astronomy", ARAA (1986),
    24, 127
  • Skilling, J. and Bryan, R.K., "Maximum Entropy
    Image Reconstruction: General Algorithm", MNRAS
    (1984), 211, 111
  • Skilling, J., Strong, A.W. and Bennett, K.,
    "Maximum-entropy Image Processing in Gamma-ray
    Astronomy", MNRAS (1979), 187, 145
  • Bryan, R.K. and Skilling, J., "Deconvolution by
    Maximum Entropy, as Illustrated by Application to
    the Jet of M87", MNRAS (1980), 191, 91
  • MacKay, D.J.C., Information Theory, Inference
    and Learning Algorithms (2003, Cambridge
    University Press)
  • Frieden, B.R. and Swindell, W., "Restored
    Pictures of Ganymede, Moon of Jupiter", Science
    (1976), 191, 1237
  • Gull, S.F. and Daniell, G.J., "Image
    reconstruction from incomplete and noisy data",
    Nature (1978), 272, 686-690
  • And lots of papers, books and information at
    http://bayes.wustl.edu/ and
    http://astrosun2.astro.cornell.edu/staff/loredo/bayes/

31
  • Journal of Inverse and Ill-Posed Problems
  • http://www.kluweronline.com/issn/0928-0219/

32
Example
  • Set of two-pixel images (I1, I2) constrained by
    total intensity I1 + I2 = 4
(Figure: lattice of possible two-pixel images;
those compatible with the constraint are (4,0),
(3,1), (2,2), (1,3) and (0,4), of which the
uniform image (2,2) can be realized in the
greatest number of ways)
Gull & Daniell (1978)
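The counting argument behind this example can be checked directly (the function name is mine): for each image (I1, I2) with I1 + I2 = total, the number of ways `total` distinguishable quanta can produce it is the multinomial coefficient total!/(I1!·I2!).

```python
from math import comb

def two_pixel_multiplicities(total):
    """Map each two-pixel image (I1, I2) with I1 + I2 = total to the
    number of ways distinguishable quanta can realize it:
    total! / (I1! * I2!) = C(total, I1)."""
    return {(i, total - i): comb(total, i) for i in range(total + 1)}
```

For total = 4 the multiplicities are 1, 4, 6, 4 and 1, so the uniform image (2, 2) is the one that can arise in the most ways, which is precisely the maximum-entropy choice.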