Wavelets and compression - PowerPoint PPT Presentation

Transcript and Presenter's Notes
1
Wavelets and compression
  • Dr Mike Spann

2
Contents
  • Scale and image compression
  • Signal (image) approximation/prediction simple
    wavelet construction
  • Statistical dependencies in wavelet coefficients
    why wavelet compression works
  • State-of-the-art wavelet compression algorithms

3
Image at different scales
4
Correlation between features at different scales
5
Wavelet construction: a simplified approach
  • Traditional approaches to wavelets have used a
    filterbank interpretation
  • Fourier techniques required to get synthesis
    (reconstruction) filters from analysis filters
  • Not easy to generalize

6

Wavelet construction: lifting
  • 3 steps:
  • Split (S step)
  • Predict (P step)
  • Update (U step)

7
Example: the Haar wavelet
  • S step
  • Splits the signal into odd and even samples

[Figure: the signal split into even and odd samples]
8
Example: the Haar wavelet
  • P step
  • Predict the odd samples from the even samples

For the Haar wavelet, the prediction for the odd
sample is the previous even sample
9
Example: the Haar wavelet
Detail signal: d[n] = x[2n+1] − x[2n] (the prediction error at the odd samples)
10
Example: the Haar wavelet
  • U step
  • Update the even samples to produce the next
    coarser scale approximation

The update is s[n] = x[2n] + d[n]/2, so the signal average is maintained
11
Summary of the Haar wavelet decomposition
Can be computed in place:
P step: d[n] = x[2n+1] − x[2n] (predict coefficient −1)
U step: s[n] = x[2n] + d[n]/2 (update coefficient 1/2)
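The two lifting steps are compact enough to sketch in a few lines of Python (a sketch for illustration; the function name is our own):

```python
def haar_forward(x):
    """One level of the Haar wavelet decomposition via lifting."""
    even, odd = x[0::2], x[1::2]                        # S step: split
    detail = [o - e for o, e in zip(odd, even)]         # P step: d = odd - prediction
    approx = [e + d / 2 for e, d in zip(even, detail)]  # U step: s = even + d/2
    return approx, detail
```

For x = [2, 4, 6, 8] this gives approx = [3.0, 7.0] and detail = [2, 2]; the approximation holds the pairwise averages, so the signal average (5) is maintained.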
12
Inverse Haar wavelet transform
  • Simply run the forward Haar wavelet transform
    backwards!

Then merge the even and odd samples
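A sketch of the inverse, which just undoes the U step, then the P step, then merges (the forward function is repeated so the snippet stands alone):

```python
def haar_forward(x):
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]
    approx = [e + d / 2 for e, d in zip(even, detail)]
    return approx, detail

def haar_inverse(approx, detail):
    """Run the forward lifting steps backwards, then merge."""
    even = [a - d / 2 for a, d in zip(approx, detail)]  # undo the U step
    odd = [d + e for d, e in zip(detail, even)]         # undo the P step
    x = [0.0] * (2 * len(even))
    x[0::2], x[1::2] = even, odd                        # merge even and odd samples
    return x
```

The round trip recovers the input exactly, since every lifting step is exactly invertible.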
13
General lifting stage of wavelet decomposition

[Figure: a lifting stage: split the signal, subtract the prediction P of the even samples from the odd samples, then add the update U of the detail to the even samples]
14
Multi-level wavelet decomposition
  • We can produce a multi-level decomposition by
    cascading lifting stages


[Figure: a cascade of lifting stages; each stage re-lifts the approximation output of the previous one]
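The cascade can be sketched like this (a sketch assuming Haar lifting stages and even-length inputs at every level):

```python
def haar_step(x):
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]
    return [e + d / 2 for e, d in zip(even, detail)], detail

def multilevel_decompose(x, levels):
    """Cascade lifting stages: keep each detail signal, re-lift the approximation."""
    details = []
    approx = list(x)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details
```

Two levels on [2, 4, 6, 8] leave a single approximation sample 5.0 (the overall average) plus the two detail signals.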
15
General lifting stage of inverse wavelet
synthesis
[Figure: an inverse lifting stage: subtract the update U from the even samples, add the prediction P back to the odd samples, then merge]

16
Multi-level inverse wavelet synthesis
  • We can produce a multi-level inverse wavelet
    synthesis by cascading lifting stages

[Figure: a cascade of inverse lifting stages]
17
Advantages of the lifting implementation
  • Inverse transform
  • Inverse transform is trivial: just run the code
    backwards
  • No need for Fourier techniques
  • Generality
  • The design of the transform is performed without
    reference to particular forms for the predict and
    update operators
  • Can even include non-linearities (for integer
    wavelets)

18
Example 2: the linear spline wavelet
  • A more sophisticated wavelet uses slightly more
    complex P and U operators
  • Uses linear prediction to determine odd samples
    from even samples

19
The linear spline wavelet
  • P step: linear prediction

[Figure: the original signal, the linear prediction at the odd samples, and the detail signal (prediction error at the odd samples)]
20
The linear spline wavelet
  • The prediction for the odd samples is based on
    the two even samples either side:
    d[n] = x[2n+1] − (x[2n] + x[2n+2])/2

21
The linear spline wavelet
  • The U step uses the current and previous detail
    signal samples: s[n] = x[2n] + (d[n−1] + d[n])/4

22
The linear spline wavelet
  • Preserves signal average and first-order moment
    (signal position)

23
The linear spline wavelet
  • Can still implement in place


P step: subtract half of each neighbouring even sample from the odd samples (coefficients −1/2, −1/2)
U step: add a quarter of the current and previous detail samples to the even samples (coefficients 1/4, 1/4)
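The linear spline P and U steps might look like this in Python (a sketch; boundary samples are handled by mirroring, which is an assumption since the slides do not specify the edge treatment):

```python
def spline_forward(x):
    """One level of the linear spline wavelet via lifting."""
    even, odd = x[0::2], x[1::2]
    detail = []
    for i in range(len(odd)):                                  # P step
        right = even[i + 1] if i + 1 < len(even) else even[i]  # mirror at the edge
        detail.append(odd[i] - (even[i] + right) / 2)
    approx = []
    for i in range(len(even)):                                 # U step
        prev = detail[i - 1] if i > 0 else detail[0]           # mirror at the edge
        cur = detail[i] if i < len(detail) else detail[-1]
        approx.append(even[i] + (prev + cur) / 4)
    return approx, detail
```

On a linear ramp the interior detail samples are exactly zero, since linear prediction is exact for linear signals; only boundary samples deviate.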
24
Summary of linear spline wavelet decomposition
Computing the inverse is trivial
The even and odd samples are then merged as before
25
Wavelet decomposition applied to a 2D image
26
Wavelet decomposition applied to a 2D image
[Figure: one level of 2D wavelet decomposition; an approximation sub-band plus detail sub-bands]
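A 2D level can be sketched by lifting every row and then every column, giving an approximation sub-band and three detail sub-bands (a sketch using Haar lifting; the sub-band naming is the usual convention, assumed here):

```python
def haar1d(x):
    even, odd = x[0::2], x[1::2]
    d = [o - e for o, e in zip(odd, even)]
    return [e + di / 2 for e, di in zip(even, d)], d

def haar2d(img):
    """One 2D decomposition level: lift every row, then every column."""
    lo, hi = [], []
    for row in img:                      # horizontal pass
        s, d = haar1d(row)
        lo.append(s)
        hi.append(d)

    def lift_columns(m):                 # vertical pass via transpose
        pairs = [haar1d(list(c)) for c in zip(*m)]
        approx = [list(r) for r in zip(*(s for s, _ in pairs))]
        detail = [list(r) for r in zip(*(d for _, d in pairs))]
        return approx, detail

    LL, LH = lift_columns(lo)            # approximation + horizontal detail
    HL, HH = lift_columns(hi)            # vertical + diagonal detail
    return LL, LH, HL, HH
```

A constant image lands entirely in the approximation sub-band, with all detail sub-bands zero.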
27
Why is wavelet-based compression effective?
  • Allows for intra-scale prediction (like many
    other compression methods); equivalently, the
    wavelet transform is a decorrelating transform,
    just like the DCT used by JPEG
  • Allows for inter-scale (coarse-fine scale)
    prediction

28
Why is wavelet-based compression effective?
[Figure: the original image alongside 1-level Haar, 1-level linear spline, and 2-level Haar decompositions]
29
Why is wavelet-based compression effective?
  • Wavelet coefficient histogram

30
Why is wavelet-based compression effective?
  • Coefficient entropies

31
Why is wavelet-based compression effective?
  • Wavelet coefficient dependencies

[Figure: a coefficient X and its parent at the next coarser scale]
32
Why is wavelet-based compression effective?
  • Let's define sets S (small) and L (large) of
    wavelet coefficients
  • The following two probabilities describe
    inter-scale dependencies

33
Why is wavelet-based compression effective?
  • Without inter-scale dependencies, the conditional
    probabilities would equal the marginals:
    P(X ∈ S | parent ∈ S) = P(X ∈ S)

34
Why is wavelet-based compression effective?
  • Measured dependencies from Lena

P(X ∈ S | parent ∈ S) = 0.886    P(X ∈ L | parent ∈ L) = 0.529
P(X ∈ S) = 0.781    P(X ∈ L) = 0.219
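Probabilities like these can be estimated by simple counting, given each coefficient paired with its coarser-scale parent (a sketch; the child-parent pairing and the small/large threshold are assumptions):

```python
def prob_small_given_parent_small(children, parents, thresh):
    """Estimate P(child in S | parent in S), with S = {c : |c| < thresh}."""
    small = lambda c: abs(c) < thresh
    with_small_parent = [c for c, p in zip(children, parents) if small(p)]
    return sum(1 for c in with_small_parent if small(c)) / len(with_small_parent)
```

If the estimate exceeds the marginal P(child in S), the parent carries usable information about the child, which is exactly what a coder can exploit.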
35
Why is wavelet-based compression effective?
  • Intra-scale dependencies


[Figure: a coefficient X and its eight spatial neighbours X1 … X8]
36
Why is wavelet-based compression effective?
  • Measured dependencies from Lena

P(X ∈ S | neighbourhood ∈ S) = 0.912    P(X ∈ L | neighbourhood ∈ L) = 0.623
P(X ∈ S) = 0.781    P(X ∈ L) = 0.219
37
Why is wavelet-based compression effective?
  • Have to use a causal neighbourhood for spatial
    prediction

38
Example image compression algorithms
  • We will look at three state-of-the-art algorithms
  • Set partitioning in hierarchical trees (SPIHT)
  • Significance-linked connected component analysis
    (SLCCA)
  • Embedded block coding with optimal truncation
    (EBCOT), which is the basis of JPEG2000

39
The SPIHT algorithm
  • Coefficients transmitted in partial order

[Figure: coefficient magnitudes as a bit-plane array; columns are coefficients 1, 2, 3, … in partial (decreasing magnitude) order, rows run from the most significant bit (msb) down to the least significant bit (lsb)]
40
The SPIHT algorithm
  • 2 components to the algorithm:
  • Sorting pass
  • Sorting information is transmitted on the basis
    of the most significant bit-plane
  • Refinement pass
  • Bits in bit-planes lower than the most
    significant bit plane are transmitted

41
The SPIHT algorithm
N = msb of max(abs(wavelet coefficient))
for bit-plane counter = N downto 1:
    transmit significance/insignificance w.r.t. the bit-plane counter
    transmit refinement bits of all coefficients that are already significant
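The transmission loop on this slide can be sketched as a toy bit-plane coder (illustration only: coefficients are visited in a fixed order rather than sorted through SPIHT's zerotrees, and signs are ignored):

```python
def bitplane_stream(coeffs):
    """Emit significance bits, then refinement bits, per bit-plane (msb first)."""
    mags = [abs(c) for c in coeffs]
    top = max(mags).bit_length() - 1            # index of the most significant plane
    significant = [False] * len(mags)
    out = []
    for plane in range(top, -1, -1):
        bit = 1 << plane
        already = [i for i, s in enumerate(significant) if s]
        for i, m in enumerate(mags):            # sorting pass: significance bits
            if not significant[i]:
                out.append(1 if m & bit else 0)
                significant[i] = bool(m & bit)
        for i in already:                       # refinement pass: lower-plane bits
            out.append(1 if mags[i] & bit else 0)
    return out
```

Truncating the output list anywhere still yields a decodable, progressively refined approximation, which is the embedded property the next slide illustrates.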
42
The SPIHT algorithm
  • Insignificant coefficients (with respect to
    current bitplane counter) organised into
    zerotrees

43
The SPIHT algorithm
  • Groups of coefficients made into zerotrees by set
    partitioning

44
The SPIHT algorithm
  • SPIHT produces an embedded bitstream

bitstream
…110010101110010110001101011100010111011011101101…
45
The SLCCA algorithm
Wavelet transform
Quantise coefficients
Cluster and transmit significance map
Bit-plane encode significant coefficients
46
The SLCCA algorithm
  • The significance map is grouped into clusters

47
The SLCCA algorithm
  • Clusters grown out from a seed

[Figure: a cluster grown from a seed, with significant and insignificant coefficients marked]
48
The SLCCA algorithm
  • Significance link symbol

[Figure: a significance link connecting two clusters]
49
Image compression results
  • Evaluation 
  • Mean squared error
  • Human visual-based metrics
  • Subjective evaluation

50
Image compression results
  • Mean-squared error: MSE = (1/N) Σ (x[i] − x̂[i])²

Usually expressed as peak signal-to-noise ratio (in dB): PSNR = 10 log10(255² / MSE)
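As a quick sketch (a peak value of 255 is assumed, as for 8-bit images):

```python
from math import log10

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    n = len(original)
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / n
    return 10 * log10(peak ** 2 / mse)
```

An MSE of 1 on 8-bit data gives roughly 48.13 dB; halving the MSE adds about 3 dB.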
51
Image compression results
52
Image compression results
53
Image compression results
SPIHT 0.2 bits/pixel
JPEG 0.2 bits/pixel
54
Image compression results
SPIHT
JPEG
55
EBCOT, JPEG2000
  • JPEG2000, based on embedded block coding with
    optimal truncation (EBCOT), is the state-of-the-art
    compression standard
  • Wavelet-based
  • It addresses the key issue of scalability
  • SPIHT is distortion scalable, as we have already
    seen
  • JPEG2000 also introduces resolution and spatial
    scalability
  • An excellent reference to JPEG2000 and
    compression in general is JPEG2000 by D. Taubman
    and M. Marcellin

56
EBCOT, JPEG2000
  • Resolution scalability is the ability to extract
    from the bitstream the sub-bands representing any
    resolution level

…110010101110010110001101011100010111011011101101…
bitstream
57
EBCOT, JPEG2000
  • Spatial scalability is the ability to extract
    from the bitstream the sub-bands representing
    specific regions in the image
  • Very useful if we want to selectively decompress
    certain regions of massive images

…110010101110010110001101011100010111011011101101…
bitstream
58
Introduction to EBCOT
  • JPEG2000 is able to implement this general
    scalability by implementing the EBCOT paradigm
  • In EBCOT, the unit of compression is the
    codeblock which is a partition of a wavelet
    sub-band
  • Typically, following the wavelet transform, each
    sub-band is partitioned into small blocks
    (typically 32x32)
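The partitioning itself is straightforward; a sketch (edge blocks may be smaller than the nominal 32x32):

```python
def partition_subband(height, width, block=32):
    """Return (row, col, block_height, block_width) for each code block."""
    return [(r, c, min(block, height - r), min(block, width - c))
            for r in range(0, height, block)
            for c in range(0, width, block)]
```

A 64x48 sub-band yields four code blocks, two of which are clipped to 32x16 at the right edge.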

59
Introduction to EBCOT
  • Codeblocks: partitions of wavelet sub-bands

[Figure: wavelet sub-bands partitioned into codeblocks]
60
Introduction to EBCOT
  • A simple bit stream organisation could comprise
    concatenated code block bit streams


[Figure: concatenated code block bit streams, each prefixed by the length of the next code-block stream]
61
Introduction to EBCOT
  • This simple bit stream structure is resolution
    and spatially scalable but not distortion
    scalable
  • Complete scalability is obtained by introducing
    quality layers
  • Each code block bitstream is individually
    (optimally) truncated in each quality layer
  • The loss of parent-child redundancy is more than
    compensated by the ability to optimise each code
    block bitstream individually

62
Introduction to EBCOT
  • Each code block bit stream partitioned into a set
    of quality layers




63
EBCOT advantages
  • Multiple scalability
  • Distortion, spatial and resolution scalability
  • Efficient compression
  • This results from independent optimal truncation
    of each code block bit stream
  • Local processing
  • Independent processing of each code block allows
    for efficient parallel implementations as well as
    hardware implementations

64
EBCOT advantages
  • Error resilience
  • Again this results from independent code block
    processing which limits the influence of errors

65
Performance comparison
  • A performance comparison with other wavelet-based
    coders is not straightforward as it would depend
    on the target bit rates which the bit streams
    were truncated for
  • With SPIHT, we simply truncate the bit stream
    when the target bit rate has been reached
  • However, we only have distortion scalability with
    SPIHT
  • Even so, we still get favourable PSNR (dB)
    results when comparing EBCOT (JPEG2000) with SPIHT

66
Performance comparison
  • We can understand this more fully by looking at
    graphs of distortion (D) against rate (R)
    (bitstream length)

[Figure: distortion D against rate R; the R-D curve for a continuously modulated quantisation step size, with the achievable truncation points marked on it]
67
Performance comparison
  • Truncating the bit stream to some arbitrary rate
    will yield sub-optimal performance

[Figure: distortion against rate for an arbitrary truncation point between the marked optimal points]
68
Performance comparison
69
Performance comparison
  • Comparable PSNR (dB) results between EBCOT and
    SPIHT, even though:
  • Results for EBCOT are for 5 quality layers (5
    optimal bit rates)
  • Intermediate bit rates sub-optimal
  • We have resolution, spatial, distortion
    scalability in EBCOT but only distortion
    scalability in SPIHT