Lossless compression of greyscale images
1
Introduction
  • Lossless compression of grey-scale images
  • TMW achieves the world's best lossless image
    compression
  • 3.85 bpp on Lenna
  • The reasons for its performance are unclear
  • No intermediate results are given, only final,
    optimal values

2
Image Compression
  • Compression = Modelling + Coding
  • Modelling: assign probabilities to symbols
  • Coding: transmit symbols
  • Code length = -log2(P) bits
  • Pixels are encoded in raster order
  • Conventionally reading order:
    left-to-right, top-to-bottom
  • Previously encoded pixels known to both encoder
    and decoder
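The modelling/coding split can be illustrated numerically. A minimal sketch, with made-up probabilities; an arithmetic coder approaches these ideal lengths in practice:

```python
import math

# Ideal code length for a symbol the model assigns probability P:
# -log2(P) bits, achievable to within rounding by arithmetic coding.
def code_length_bits(p):
    return -math.log2(p)

# A confident model (P = 1/2) costs 1 bit; a poor one (P = 1/256) costs 8.
print(code_length_bits(1 / 2))    # 1.0
print(code_length_bits(1 / 256))  # 8.0
```

The better the model's probabilities match the actual pixel statistics, the shorter the total message.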

3
Prediction
  • Causal neighbours can be used to predict the
    value of a pixel
  • The standard predictor is a linear combination
    of neighbour values
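A linear causal predictor can be sketched as below. The neighbour set and default coefficients are illustrative, not TMW's; the default happens to be the common plane predictor W + N - NW:

```python
# Sketch of a linear predictor: a weighted sum of previously encoded
# causal neighbours (here west, north, north-west). Coefficients are
# hypothetical examples, not taken from TMW.
def predict(west, north, north_west, coeffs=(1.0, 1.0, -1.0)):
    a, b, c = coeffs
    return a * west + b * north + c * north_west

print(predict(100, 110, 95))  # 115.0
```

Both encoder and decoder can evaluate this, since the neighbours are already transmitted.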

4
Prediction
Sample Error Distribution
  • CALIC and JPEG-LS use DPCM
  • The prediction error is coded instead of the
    pixel value
  • Errors tend to follow a Laplacian distribution
  • They have lower entropy than raw pixel values
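The entropy gain can be checked on toy data (the values below are invented): errors concentrated near zero need fewer bits per symbol than spread-out raw pixel values.

```python
import math
from collections import Counter

# Empirical (zeroth-order) entropy of a symbol sequence, in bits/symbol
def entropy_bits(symbols):
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Toy data: prediction errors cluster near zero (roughly Laplacian),
# raw values are spread over the grey-level range.
errors = [0, 0, 0, 1, -1, 0, 2, -1, 0, 1]
raw    = [12, 200, 57, 140, 33, 250, 90, 5, 180, 66]
print(entropy_bits(errors) < entropy_bits(raw))  # True
```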

5
TMW
  • World's best lossless image compression program
  • Blending of prediction error distributions
  • Use of information from local region
  • Use of large number of causal neighbours
  • Optimisation of parameters with respect to
    message length
  • We examine the first three points

6
Blending
  • Test predictors on local window of pixels
  • Calculate what the error would be for causal
    neighbours
  • Spread of errors gives a measure of how effective
    the predictor has been in the region

7
Blending
  • Generate Laplacian distributions
  • Mean is predicted value plus mean error in window
  • Variance calculated from spread of errors in
    window
  • Blend probability distributions to give final
    probability distribution
  • Example blending weights: 0.7 and 0.3
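The blending step above can be sketched as a mixture of per-predictor Laplacians. The means, scales, and the 0.7/0.3 weights here are illustrative values, not TMW's:

```python
import math

# Laplacian density with location mu (prediction + mean window error)
# and scale b (derived from the spread of errors in the window)
def laplace_pdf(x, mu, b):
    return math.exp(-abs(x - mu) / b) / (2 * b)

# Mixture of two predictors' distributions, weights 0.7 / 0.3
# (all parameter values here are made up for illustration)
def blended_pdf(x):
    return (0.7 * laplace_pdf(x, mu=0.0, b=2.0)
            + 0.3 * laplace_pdf(x, mu=1.0, b=5.0))

# Code length for an error of 3 under the blend, treating the density
# loosely as a per-level probability
print(-math.log2(blended_pdf(3)))
```

A predictor that has done well in the local window gets a larger weight, so its narrow distribution dominates the blend.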

8
Window Size
  • Window size involves a trade-off
  • A larger window means using more pixels from
    the current segment
  • But it also increases the likelihood of using
    pixels from neighbouring segments

9
Window Size
  • 2 windows for calculating
  • Blending weights
  • Mean/spread of distribution

[Plots: code length vs. window size — both curves reach a minimum
of 4.05 bpp, at a blending window of 36 pixels and a distribution
window of 78 pixels]
10
Locally Trained Predictor
  • Blending scheme rewards predictors with low error
    norm in local window
  • Find predictor to minimise this norm
  • Local window contains N pixels and N
    corresponding causal neighbour vectors
  • Perform Multiple Linear Regression to find
    optimal coefficients
  • Can minimise the sum of e² or of |e|
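Minimising the sum of squared errors over the window is ordinary least squares. A minimal sketch with invented window data; X holds one causal-neighbour vector per window pixel, y the corresponding actual values:

```python
import numpy as np

# Toy local window: 5 pixels, each with 3 causal neighbours (e.g. W, N, NW).
# All values here are invented for illustration.
X = np.array([[100, 102,  99],
              [102, 105, 100],
              [105, 103, 104],
              [103, 108, 102],
              [108, 110, 106]], dtype=float)
y = np.array([101, 104, 104, 106, 109], dtype=float)

# Multiple linear regression: coefficients minimising sum of e^2
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

prediction = X @ coeffs   # predictions for the window pixels
errors = y - prediction   # residuals, minimal in the L2 norm
```

Minimising the sum of |e| instead has no closed form and needs an iterative solver, but rewards the same predictors the blending scheme favours.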

11
Neighbourhood Size
  • Noise plays a large part in code length
  • Consider p̂ = p + N(µ, σ): each pixel is its
    underlying value plus Gaussian noise
  • More neighbours tends to reduce impact of noise
  • More neighbours means more noise terms in
    predicted value
  • Noise terms have smaller coefficients, and tend
    to cancel out more

12
Future Work
  • Global Optimisation
  • A full investigation of its benefits has yet to
    take place
  • Texture modelling
  • Segmentation
  • Transmit predictors and segment maps before pixel
    values
  • Send the most important information first!

13
Conclusions
  • Blending PDFs produces a slight benefit
  • Local information in prediction is much more
    valuable than global information
  • Global optimisation and large neighbourhood sizes
    are also important to TMW's success