Dimension Reduction in Hyperspectral Images - PowerPoint PPT Presentation


Transcript and Presenter's Notes



1
Dimension Reduction in Hyperspectral Images
  • Alex Chen
  • April 9, 2007

2
Review of Sparse Signal Recovery
  • Candès, Romberg, and Tao give a method for
    reconstructing a sparse (approximately sparse)
    signal x0 using relatively few measurements (from
    a measurement ensemble A).
  • This method is an optimization problem, where we
    want to solve
  • min ||x||_1
  • s.t. ||Ax − y||_2 ≤ ε,
  • where ε ≥ ||e||_2.
  • That is, we wish to keep the signal as small as
    possible (in the l1 norm) while ensuring that x
    roughly satisfies the same measurement equation
    as x0 (recall y = Ax0 + e).
  • E. Candès, J. Romberg, T. Tao. Stable Signal
    Recovery from Incomplete and Inaccurate
    Measurements. Comm. Pure Appl. Math. 59 (2006),
    1207–1223.

3
Important Points of Sparse Signal Recovery
  • Reconstruction error for an approximately sparse
    signal is bounded by C · (noise + small entries).
  • In particular, the error is independent of the
    magnitude of the large entries.
  • If the noise is 0 and the signal is exactly
    sparse, then recovery is exact.
  • Choosing different bases for the signal
    (affecting sparsity) or different measurement
    ensembles (the random matrix A) can lead to
    better recovery.

4
Numerical Formulation
  • Consider the similar problem of minimizing the
    energy E(x) ½Ax y22 ?x1.
  • Solve x -? E.
  • Then x -AT(Ax y) ?sgn(x).
  • Explicit discretization gives the gradient
    descent method
  • xn1 xn ?tAT(Axn y) ?sgn(xn).
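The iteration above translates directly into code. A minimal sketch, using sign(x) as a subgradient of the l1 term; the small demo problem (a 1-sparse vector, 30 measurements, and the chosen lam/dt values) is my own illustrative choice, not the slides' experiment.

```python
import numpy as np

def l1_gradient_descent(A, y, lam, dt, iters):
    """Gradient descent on E(x) = 0.5*||Ax - y||_2^2 + lam*||x||_1,
    i.e. x_{n+1} = x_n - dt*(A.T @ (A @ x_n - y) + lam*sign(x_n))."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - dt * (A.T @ (A @ x - y) + lam * np.sign(x))
    return x

# Tiny demo: recover a 1-sparse vector from 30 Gaussian measurements.
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 100)) / np.sqrt(30)
x0 = np.zeros(100)
x0[7] = 1.0
y = A @ x0
x = l1_gradient_descent(A, y, lam=0.01, dt=0.1, iters=2000)
```

The fixed step Δt must be small enough for stability (below 2 divided by the largest eigenvalue of AᵀA); the l1 term then slowly drives the off-support entries toward zero.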

5
Reconstruction of Simple Sparse Signal
Top left Plot of a signal generated by taking a
vector of dimension 1024 with 50 nonzero entries
randomly assigned to be 1 or 1 with equal
probability.
A is 300x1024 ( Gaussian Random Matrix)
  • 0.12
  • ?t 0.1
  • of iterations 700

6
Reconstruction of Compressible Signal
Top Left Plot of a compressible signal with
values x(t) 5.819t-10/9 where t 11024 and
indices permuted randomly. The signal is
multiplied by 1 or 1 with equal probability.
(The 5.819 is chosen so that this norm matches
the previous case ?50.)
Parameters are chosen as in the previous case.
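The construction of this compressible test signal is easy to reproduce, and doing so confirms the choice of constant: 5.819² · Σ t^(−20/9) over t = 1, …, 1024 is almost exactly 50, so the l2 norm matches the √50 of the sparse ±1 signal. A sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

t = np.arange(1, 1025, dtype=float)
x = 5.819 * t ** (-10.0 / 9.0)            # power-law decay: compressible
x = x[rng.permutation(1024)]               # permute the indices randomly
x *= rng.choice([-1.0, 1.0], size=1024)    # random +/-1 signs

# Neither the permutation nor the signs change the l2 norm, which is
# ~ sqrt(50) by the choice of the constant 5.819.
l2 = np.linalg.norm(x)
```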
7
Gradient Descent
  • Gradient descent does not give results as good as
    in Candès, Romberg, Tao, but it gets close
    enough.
  • One trick is assuming a priori that the signals
    are integer-valued.
  • Another is to assume that signals have a known
    magnitude (the recovered vector is scaled to have
    the same norm as the original).
  • Both give much better results, but assume too
    much.

8
Hyperspectral Signals
  • Take a signal generated from one pixel of a
    hyperspectral image.
  • This is a column vector, with the dimensions
    going through the various bands of the image.
  • There are 162 dimensions, ranging from 412 nm to
    2390 nm.
  • We would like to apply the Candès, Romberg, and
    Tao (CRT) method to compress the hyperspectral
    signal.
  • Note that the noise in the measurements is now
    taken to be 0.
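Extracting such a pixel signal is a one-line slice of the hyperspectral cube. A hedged sketch: the Urban data itself is not available here, so a random stand-in cube is used, and the 307×307 spatial size is an assumption (only the 162 bands and the 412 nm to 2390 nm range come from the slide).

```python
import numpy as np

# A hyperspectral image is a cube: rows x cols x spectral bands.
rows, cols, bands = 307, 307, 162          # spatial size assumed; 162 bands
cube = np.random.default_rng(3).random((rows, cols, bands))

# The pixel signal is the column vector through all bands at one location,
# covering wavelengths from 412 nm to 2390 nm.
pixel_signal = cube[200, 200, :]
```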

9
Hyperspectral Signal
This is the hyperspectral signal at pixel
(200,200), a vegetation pixel. Note that it is not
sparse as shown.
10
CRT Method Directly on Hyperspectral Signal
Top left: Hyperspectral signal at
Urban(200,200,:). Remaining plots show the
recovery from running the algorithm on the signal
itself. The parameters are unchanged from the
previous case (except ε = 0 and the # of
measurements as shown).
Total norm of the hyperspectral signal = 1883.6
11
Fourier Transform of Signal
The Fourier transform of the signal is much
closer to being sparse.
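"Closer to being sparse" can be quantified by how much of the signal's energy the few largest Fourier coefficients carry. A sketch with a synthetic stand-in spectrum (a smooth curve plus mild noise; the real Urban pixel is not bundled here, but its smoothness is likewise what makes its transform compressible):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for the 162-band pixel spectrum: smooth trend + small noise.
t = np.linspace(0.0, 1.0, 162)
signal = 10.0 * np.sin(2.0 * np.pi * t) + 5.0 * t + 0.1 * rng.normal(size=162)

coeffs = np.fft.fft(signal)

# Fraction of the l2 energy captured by the 10 largest coefficients:
mags = np.sort(np.abs(coeffs))[::-1]
top10 = np.sqrt(np.sum(mags[:10] ** 2))
total = np.sqrt(np.sum(mags ** 2))
ratio = top10 / total          # close to 1 for a compressible spectrum
```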
12
CRT on Fourier Transform
Top left: Hyperspectral signal at
Urban(200,200,:). Remaining plots show the
recovery from running CRT on the Fourier
transform. Only the number of measurements is
changed. Running CRT on the Fourier transform is
better, but not good enough for dimension
reduction.
Total norm of the hyperspectral signal = 1883.6
13
Wavelets
  • Instead of writing a function as a sum of sines
    and cosines, we can use wavelets: fast-decaying
    waveforms of finite length (mother wavelets)
    together with scaled and translated copies of
    them (daughter wavelets).
  • The advantage is that wavelets are localized in
    both space and frequency, while sines and cosines
    are localized only in frequency.
  • References
  • 1. I. Daubechies. Ten Lectures on Wavelets,
    Society for Industrial and Applied Mathematics,
    1992.
  • 2. http://aix1.uottawa.ca/jkhoury/haar.htm (for a
    simple illustration)

14
Haar Wavelet (1/2)
  • Computation uses the Haar wavelet basis, which
    consists of translated and scaled copies of the
    function
  • f(x) =  1 if 0 ≤ x < ½
  •        −1 if ½ ≤ x < 1
  •         0 otherwise

15
Haar Wavelet (2/2)
  • Using the Haar wavelet decomposition, the
    following signal is obtained, which is more
    sparse than the original signal or the Fourier
    transform.
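The Haar decomposition itself is computed by repeated pairwise averaging and differencing. A sketch of the orthonormal version (the slides do not show their implementation; note a power-of-2 length is assumed, so the 162-band spectrum would be padded or truncated first):

```python
import numpy as np

def haar_transform(x):
    """Full orthonormal Haar decomposition of a power-of-2-length signal.
    Each pass turns adjacent pairs into a coarse average (kept for the
    next pass) and a detail coefficient (stored in the output)."""
    x = np.asarray(x, dtype=float).copy()
    out = np.empty_like(x)
    n = len(x)
    while n > 1:
        half = n // 2
        avg = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)    # coarse approximation
        diff = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)   # detail coefficients
        out[half:n] = diff
        x[:half] = avg
        n = half
    out[0] = x[0]                                   # overall average term
    return out

# A piecewise-constant signal becomes very sparse in this basis:
coeffs = haar_transform([4.0, 4.0, 4.0, 4.0, 2.0, 2.0, 2.0, 2.0])
```

Because the transform is orthonormal it preserves the l2 norm, and signals that are locally flat (like the step above) concentrate into very few coefficients.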

16
CRT on Haar Wavelet
Top left: Hyperspectral signal in the Haar
wavelet basis at Urban(200,200,:). Remaining
plots show reconstruction for various # of
measurements. Running on the (unscaled) Haar
wavelet basis is comparable to Fourier.
Total norm of the hyperspectral signal = 1883.6
17
CRT on Scaled Haar Basis
  • The first entries in the Haar basis are not
    weighted enough relative to the information they
    store.
  • To compensate, scale the entries; this makes the
    signal more sparse while giving a more accurate
    picture of the storage.
  • Numerical results do not yet work as anticipated.

Signal in unscaled Haar basis
Signal in scaled Haar basis
18
Naïve Haar Wavelet Reconstruction
  • For comparison, we use a naïve reconstruction of
    the hyperspectral signal.
  • Order the components of the signal from largest
    to smallest and keep a certain number of entries
    (setting the rest equal to 0).
  • Does the signal have small reconstruction error?
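The naïve scheme described above is a hard-thresholding step: sort by magnitude, keep the k largest entries, zero the rest. A minimal sketch with a toy vector (my own example values):

```python
import numpy as np

def keep_largest(x, k):
    """Keep the k largest-magnitude entries of x, set the rest to 0."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest entries
    out[idx] = x[idx]
    return out

x = np.array([0.1, -5.0, 0.3, 2.0, -0.2, 0.05])
approx = keep_largest(x, 2)
err = np.linalg.norm(x - approx)       # error comes only from dropped entries
```

For a fixed basis this is the best possible k-term approximation in the l2 sense, which makes it a natural baseline for the CRT reconstructions.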

Total norm of the hyperspectral signal 1883.6
19
Reconstruction with a Derivative (1/2)
Top left: Derivative of the hyperspectral signal
at Urban(200,200,:). Remaining plots show
reconstruction for various # of measurements.
Running the algorithm on the derivative does not
give a good reconstruction relative to the norm of
the derivative.
Total norm of the derivative of the hyperspectral
signal = 186.5
20
Reconstruction with a Derivative (2/2)
Top left: Hyperspectral signal at
Urban(200,200,:). Remaining plots show
reconstruction for various # of measurements,
converted back to the original format. Running
the algorithm gives a good reconstruction of the
general shape, but the values are heavily
distorted.
Total norm of the hyperspectral signal = 1883.6
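The "convert back" step, and why the values end up distorted, can be sketched in a few lines: the discrete derivative is inverted by a cumulative sum, so any error in the recovered derivative accumulates along the spectrum even when the overall shape survives. A sketch with a synthetic 162-sample stand-in signal (the Urban pixel itself is not available here):

```python
import numpy as np

rng = np.random.default_rng(5)
signal = np.cumsum(rng.normal(size=162))   # stand-in spectrum

deriv = np.diff(signal)                    # discrete derivative, length 161

# Exact round trip: first value plus a running sum of the derivative.
recon = np.concatenate(([signal[0]], signal[0] + np.cumsum(deriv)))

# With errors in the recovered derivative, the cumsum lets them
# accumulate, distorting the values while keeping the general shape.
noisy_deriv = deriv + 0.01 * rng.normal(size=deriv.size)
drifted = np.concatenate(([signal[0]], signal[0] + np.cumsum(noisy_deriv)))
```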