1
Stable Signal Recovery from Incomplete and
Inaccurate Measurements
  • A paper by Emmanuel Candes, Justin Romberg, and
    Terence Tao

2
Application
  • We may wish to send a signal to another party,
    but our signal may be corrupted with noise.
  • One method to correct for noise is to send the
    information three (or more) times, taking the
    majority at each component in the case of
    binary data, or an average in the case of
    real-number data (a minimal sketch of this
    repetition scheme follows below).
  • This might be the best hope for completely random
    signals; in our case, we make use of sparsity
    instead.
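
A minimal sketch of the repetition scheme for binary data (the message length and the 10% flip rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=16)               # the binary message to send

# Send the message three times; each copy independently has bits flipped
# with probability 0.1 (the noise).
flips = (rng.random((3, 16)) < 0.1).astype(int)
copies = bits ^ flips

# Majority vote at each component recovers the message with high probability.
decoded = (copies.sum(axis=0) >= 2).astype(int)
print((decoded == bits).all())
```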

[Diagram: signal or image x0 → corrupted measurements y = Ax0 + e → reconstructed signal]
3
Description of the Problem
  • We want to recover a vector x0 ∈ R^m (a signal or
    image) from n measurements, where we take n ≪ m.
  • Also, the n measurements may be noisy.
  • We may reformulate the problem as follows:
  • y = Ax0 + e,
  • where A is an n × m matrix (the measurement
    ensemble), y is an n-dimensional column vector
    (the n measurements), and e is the noise.
  • How small can n be while still reconstructing x0
    accurately?
  • How large can the noise e be?
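
A minimal numpy sketch of the shapes in this formulation (the Gaussian entries are placeholders; suitable measurement ensembles are discussed on a later slide):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 1024, 300                 # n << m: many fewer measurements than unknowns

x0 = rng.normal(size=m)          # the unknown vector in R^m
A = rng.normal(size=(n, m))      # the n x m measurement ensemble
e = 0.01 * rng.normal(size=n)    # the noise
y = A @ x0 + e                   # the n-dimensional measurement vector
```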

4
Description of Terms
  • Recovered vector: x.
  • Stable: small changes in the observations (the
    column vector y) should be accompanied by small
    changes in the recovery (x).
  • A: the measurement ensemble; it may be a random
    matrix, which is especially good for handling
    worst-case scenarios with an active adversary.
  • Incomplete measurements: n ≪ m.
  • Inaccurate measurements: there may be noise in
    the measurements, which we denote by e.

5
Optimization Problem
  • This new formulation becomes an optimization
    problem, where we want to solve
  • min ‖x‖1
  • s.t. ‖Ax − y‖2 ≤ ε,
  • where ε bounds the noise: ‖e‖2 ≤ ε.
  • That is, we keep the l1 norm of the signal as
    small as possible while ensuring that x roughly
    solves the same measurement equation as x0
    (recall y − Ax0 = e).
  • Problems such as the above are second-order cone
    programs and can be solved with interior-point
    methods (a sketch follows below).
  • This problem can also be solved with a
    variational method, minimizing
    E(x) = ‖Ax − y‖2² + c·‖x‖1 for some c.
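A minimal sketch of both formulations using the cvxpy modeling library (an assumption: the slides do not name a solver; cvxpy hands the cone program to an interior-point solver):

```python
import cvxpy as cp

def recover(A, y, eps):
    """min ||x||_1  s.t.  ||Ax - y||_2 <= eps, as a second-order cone program."""
    x = cp.Variable(A.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm(x, 1)),
                      [cp.norm(A @ x - y, 2) <= eps])
    prob.solve()                   # dispatched to an interior-point solver
    return x.value

def recover_variational(A, y, c):
    """The variational form: min ||Ax - y||_2^2 + c*||x||_1, for some c."""
    x = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.sum_squares(A @ x - y) + c * cp.norm(x, 1))).solve()
    return x.value
```
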
6
Optimal Solution Gives Accurate Reconstruction
  • We will show that a solution x to this
    optimization problem nearly approximates the
    original signal x0:
  • ‖x − x0‖2 ≤ C·ε,
  • where C is a well-behaved constant (in many cases
    on the order of 10).
  • (C may depend on n and m, but we don't want it to
    depend on the signal x0, except through the number
    of nonzero entries of x0.)
  • Furthermore, it can be shown that proportionality
    to ε is the best possible error that we may
    expect if we have no other information about the
    signal x0.

7
Sparse Signals
  • We will be relying on n ≪ m measurements, so it
    is impossible to reconstruct random signals.
  • If we make the assumption that x0 is sparse or
    approximately sparse (has few large components),
    then it is possible to reconstruct a signal
    accurately.
  • The assumption of sparsity is related to the
    patterns/redundancy that can be found in images.
  • Sparseness can be considered not only in the
    standard basis of Rm, but in other bases, like
    the curvelet or wavelet basis.

8
No Noise (e = 0), Sparse
  • The data may be reconstructed exactly by solving
  • min ‖x‖1
  • s.t. Ax = y,
  • if A satisfies a uniform uncertainty principle
    (behaves like an orthonormal system):
  • δS + δ2S + δ3S < 1, where δS is the smallest
    number such that
  • (1 − δS)·‖c‖2² ≤ ‖AT c‖2² ≤ (1 + δS)·‖c‖2²
  • for all coefficient sequences c and all index
    sets T with |T| ≤ S, where AT is the submatrix
    formed by taking the columns of A indexed by T
    (a brute-force check of δS is sketched below).
  • (Decoding by Linear Programming, Candes and Tao)
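
A brute-force sketch of the restricted isometry constant δS for a tiny matrix (the cost is exponential in m, so this is purely illustrative):

```python
import itertools
import numpy as np

def rip_constant(A, S):
    """Smallest delta with (1-delta)||c||^2 <= ||A_T c||^2 <= (1+delta)||c||^2
    over all column subsets T with |T| <= S; the extreme eigenvalues of each
    Gram matrix A_T^t A_T give the tightest bounds."""
    delta = 0.0
    for size in range(1, S + 1):
        for T in itertools.combinations(range(A.shape[1]), size):
            eigs = np.linalg.eigvalsh(A[:, list(T)].T @ A[:, list(T)])
            delta = max(delta, abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0))
    return delta

rng = np.random.default_rng(0)
n, m = 8, 12
A = rng.normal(0.0, 1.0 / np.sqrt(n), (n, m))   # Gaussian ensemble, variance 1/n
print(rip_constant(A, 2))
```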

9
Add Noise (Still Sparse)
  • Now add noise to the measurements.
  • The data will not be reconstructed exactly, but
    the error of the reconstruction is O(ε), on the
    order of the noise.
  • More precisely, Theorem 1 states:
  • if S is such that δ3S + 3δ4S < 2, then
    ‖x − x0‖2 ≤ CS·ε,
  • with CS depending on δ4S only.

10
Noise, Approximate Sparseness
  • More generally, we can extend Theorem 1 to the
    case where x0 is approximately sparse: when there
    are few large (instead of few nonzero) entries.
  • In this case, there is an extra penalty
    proportional to the sum of the small (rather
    than zero) entries.
  • Theorem 2: let x0,S denote the vector keeping the
    S largest entries of x0 in absolute value. Then,
    with the same hypothesis as Theorem 1
    (δ3S + 3δ4S < 2),
  • ‖x − x0‖2 ≤ CS·ε + DS·‖x0 − x0,S‖1/√S.
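
A minimal sketch of the extra penalty term ‖x0 − x0,S‖1/√S (the power-law signal here is the one used later in Experiment 2):

```python
import numpy as np

def tail_term(x0, S):
    """||x0 - x0_S||_1 / sqrt(S), where x0_S keeps only the S entries of x0
    that are largest in absolute value."""
    mags = np.sort(np.abs(x0))[::-1]        # magnitudes in decreasing order
    return mags[S:].sum() / np.sqrt(S)      # l1 mass outside the S largest

t = np.arange(1, 1025)
x0 = 5.819 * t ** (-10.0 / 9.0)             # compressible: rapidly decaying tail
print(tail_term(x0, 50))
```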

11
Typical Measurement Ensembles
  • We now consider which measurement ensembles
    satisfy the conditions of Theorem 1.
  • For more generality, we typically consider the
    uniform uncertainty principle for a whole class
    of matrices A at the same time. This covers the
    case of an active adversary.
  • Gaussian random matrices: entries i.i.d. with
    mean 0 and variance 1/n. The condition of
    Theorem 1 is satisfied with high probability if
  • S ≤ C·n, with C well-behaved (S is the maximum
    support size of x0).
  • Fourier ensemble: select n rows at random from
    the m × m discrete Fourier transform matrix.
    Then Theorem 1 is satisfied with high
    probability if
  • S ≤ C·n/(log m)^6 (a newer result of Rudelson
    and Vershynin relaxes this to C·n/(log m)^4).
  • Both ensembles are sketched below.
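
A minimal sketch of both ensembles (the dense DFT matrix is only for illustration; the FFT trick on a later slide avoids forming it):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 300, 1024

# Gaussian ensemble: i.i.d. entries with mean 0 and variance 1/n.
A_gauss = rng.normal(0.0, 1.0 / np.sqrt(n), (n, m))

# Fourier ensemble: n rows chosen at random from the m x m DFT matrix.
F = np.fft.fft(np.eye(m)) / np.sqrt(m)      # unitary DFT matrix
rows = rng.choice(m, size=n, replace=False)
A_fourier = F[rows, :]
```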

12
Experiment 1: Sparse
  • Sparse signals with 50 nonzero components (chosen
    uniformly at random as +1 or −1) are
    reconstructed.
  • m = 1024, n = 300.
  • Gaussian measurement ensemble (mean 0, variance
    1/n) corrupted with white Gaussian noise ek with
    mean 0 and variance σ² (various noise levels).

(Experiments and Tables from paper)
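
A minimal sketch of this setup (the value of ε passed to the recovery is an assumption, chosen near the expected noise norm σ·√n):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 1024, 300, 50

# Sparse signal: 50 components equal to +1 or -1 at random positions.
x0 = np.zeros(m)
x0[rng.choice(m, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k)

A = rng.normal(0.0, 1.0 / np.sqrt(n), (n, m))   # Gaussian ensemble
for sigma in (0.01, 0.05, 0.1):                 # various noise levels
    y = A @ x0 + rng.normal(0.0, sigma, n)
    eps = sigma * np.sqrt(n)                    # rough bound on ||e||_2
    # x = recover(A, y, eps)  # the SOCP sketch from the Optimization Problem slide
```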
13
Experiment 2: Approximately Sparse
  • A compressible signal is generated by taking
  • x(t) = 5.819·t^(−10/9),
  • permuting the indices, and multiplying by +1 or
    −1 at random.
  • m = 1024, n = 300.
  • Gaussian measurement ensemble (mean 0, variance
    1/n) corrupted with white Gaussian noise ek with
    mean 0 and variance σ² (various noise levels).
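
A minimal sketch of the compressible signal (the measurement and recovery steps are the same as in Experiment 1):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 1024

# Power-law decay, then permuted indices and random +1/-1 signs.
t = np.arange(1, m + 1)
x0 = 5.819 * t ** (-10.0 / 9.0)
x0 = rng.permutation(x0) * rng.choice([-1.0, 1.0], size=m)
```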

14
Experiment 3: Real Image, Fourier Ensemble
  • A 256 × 256 image of a boat is tested. We write
    it as a 65536-dimensional vector x0.
  • If we use a Gaussian ensemble, we will need to
    take enough measurements, on the order of 25000.
  • We would also need to store these measurements as
    a 25000 × 65536 matrix A (about 1.6 billion
    entries).
  • Since this takes up too much space, we use a
    Fourier ensemble instead.
  • Each measurement function ak is a sine or cosine
    with randomly selected frequencies.

15
Experiment 3 and Extensions
  • We can compute Ax quickly using an FFT, instead
    of storing A explicitly (see the sketch below).
  • There are certain high-frequency oscillatory
    artifacts associated with the standard recovery,
    so it is helpful to minimize the TV norm instead
    of the l1 norm.
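
A minimal sketch of applying the partial Fourier ensemble via the FFT (a complex DFT is used for simplicity; the paper's measurements are real sines and cosines):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 65536, 25000
rows = rng.choice(m, size=n, replace=False)     # randomly selected frequencies

def measure(x):
    """A x in O(m log m) time: n coefficients of the unitary DFT of x,
    with no dense n x m matrix ever stored."""
    return np.fft.fft(x)[rows] / np.sqrt(m)

def measure_adjoint(y):
    """The adjoint A* y: embed y at the kept frequencies, then invert."""
    full = np.zeros(m, dtype=complex)
    full[rows] = y
    return np.fft.ifft(full) * np.sqrt(m)
```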

16
Application to Cameras
  • The ideas in this paper are already being applied
    to cameras.
  • Instead of using many pixels as in current
    digital cameras, only one pixel (covering the
    entire image) is used, with many measurements
    taken.
  • These measurements are taken by using a
    micromirror device to randomly change how light
    samples are collected, yielding the different
    measurements.
  • By the method of Candès, Romberg, and Tao, the
    entire image can be reconstructed from enough
    measurements.
  • The advantages are savings in battery power and
    storage space.