Title: Information Theoretic Image Thresholding
1. Information Theoretic Image Thresholding
- Laura Frost
- Supervisors: Dr Peter Tischer and Dr Sid Ray
2. Aims of Project
- Investigate mixture modelling as an approximation to the image's histogram
- Investigate a relative / objective criterion for assessment of thresholding results
3. Thresholding
- Good for images with fairly distinct, homogeneous regions
- Region of uncertainty: object boundaries
- Two important properties for mixture-modelling thresholding
4. Mixture Modelling
- Approximate a complex distribution with simpler component distributions
- The components can be described easily
- Classify the data (in this case, pixels)
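As an illustration of the idea (not the project's actual code), a two-component Gaussian mixture can be fitted to pixel grey levels with a few EM iterations; all data and starting values below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic grey levels from two homogeneous regions (illustrative values).
pixels = np.concatenate([rng.normal(90, 15, 2000), rng.normal(200, 10, 3000)])

# Two-component Gaussian mixture, fitted by EM.
w = np.array([0.5, 0.5])            # mixing weights
mu = np.array([80.0, 180.0])        # component means
sd = np.array([20.0, 20.0])         # component standard deviations
for _ in range(50):
    # E-step: responsibility of each component for each pixel.
    dens = w * np.exp(-0.5 * ((pixels[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and standard deviations.
    n = resp.sum(axis=0)
    w, mu = n / len(pixels), (resp * pixels[:, None]).sum(axis=0) / n
    sd = np.sqrt((resp * (pixels[:, None] - mu) ** 2).sum(axis=0) / n)

labels = resp.argmax(axis=1)        # classify each pixel to a component
```

Once fitted, the responsibilities give both the component descriptions and the pixel classification in one step.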
5. An example
6. Need to ask
- Of thresholding:
- How good is a threshold?
- Of mixture modelling:
- When is a mixture model a good model?
7. And so, keep in mind
- A good mixture model implies a good threshold
- A good threshold does not necessarily imply a good mixture model
8. Methodology 1
- Test / extend / possibly improve iterative mixture modelling
- Examine the Kullback-Leibler measure as a possible relative / objective criterion
9. Methodology 2
- Test mixture modelling of image histograms using Snob
- Examine Minimum Message Length as a possible objective criterion
10. Iterative Mixture Modelling
- Fit a mixture model at each grey level
- Select the grey level that produces the best model
- Good for bi-level thresholding
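The scheme above can be sketched with one classic instance of it, the Kittler-Illingworth minimum-error criterion: fit a two-Gaussian model (by moments) at every candidate grey level and keep the level whose model scores best. The function name is mine, not from the talk:

```python
import numpy as np

def minimum_error_threshold(hist):
    """Fit a two-Gaussian mixture (by moments) at every candidate grey level
    and return the level whose model scores best (Kittler-Illingworth)."""
    p = hist / hist.sum()
    g = np.arange(len(p))
    best_t, best_j = None, np.inf
    for t in range(1, len(p) - 1):
        w1, w2 = p[:t].sum(), p[t:].sum()
        if w1 == 0 or w2 == 0:
            continue
        m1, m2 = (g[:t] * p[:t]).sum() / w1, (g[t:] * p[t:]).sum() / w2
        v1 = ((g[:t] - m1) ** 2 * p[:t]).sum() / w1
        v2 = ((g[t:] - m2) ** 2 * p[t:]).sum() / w2
        if v1 <= 0 or v2 <= 0:      # a side with no spread cannot be modelled
            continue
        # Criterion: average code length of the two-Gaussian model at this split.
        j = 1 + w1 * np.log(v1) + w2 * np.log(v2) - 2 * (w1 * np.log(w1) + w2 * np.log(w2))
        if j < best_j:
            best_t, best_j = t, j
    return best_t
```

The exhaustive sweep over grey levels is what makes the method practical for bi-level thresholding but expensive as the number of components grows.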
11Implementation
- Based on work completed by David Bertolo (Honours
2001, Monash) - Distributions
- Poisson (1 parameter)
- Gaussian, Rayleigh (2 parameters)
12. Implementation
- Improve threshold selection: use the intersection of components
- Account for overlap between components
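For two weighted Gaussian components, the intersection (the grey level where the weighted densities are equal) is the root of a quadratic obtained by equating the log densities. A sketch, with illustrative naming:

```python
import numpy as np

def gaussian_intersection(w1, m1, s1, w2, m2, s2):
    """Grey level at which two weighted Gaussian components w*N(m, s) have
    equal density -- a natural threshold between them (quadratic in x)."""
    a = 1 / s1**2 - 1 / s2**2
    b = 2 * (m2 / s2**2 - m1 / s1**2)
    c = m1**2 / s1**2 - m2**2 / s2**2 - 2 * np.log((w1 * s2) / (w2 * s1))
    if a == 0:                      # equal variances: the equation is linear
        return -c / b
    roots = np.roots([a, b, c])
    roots = roots[np.isreal(roots)].real
    lo, hi = sorted((m1, m2))
    between = roots[(roots > lo) & (roots < hi)]   # prefer the root between the means
    return between[0] if between.size else roots[np.argmin(np.abs(roots - (m1 + m2) / 2))]
```

With equal weights and equal variances this reduces to the midpoint of the means; unequal weights or spreads pull the threshold toward the weaker or tighter component.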
13. Testing
- Synthetic and natural images
- Synthetic images created with specific distributions
- Test the accuracy of model fitting
- Gives a lower bound for Kullback-Leibler measure assessment
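One way such test images can be built (all sizes and parameter values here are invented for illustration): draw each region from a known distribution, so the fitted component parameters can be checked against the truth:

```python
import numpy as np

rng = np.random.default_rng(42)
# Background drawn from Poisson(40), object square from N(180, 12):
# the true component parameters are then known exactly.
img = rng.poisson(40, size=(128, 128)).astype(float)
img[32:96, 32:96] = rng.normal(180, 12, size=(64, 64))
img = np.clip(img, 0, 255)
```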
14. Synthetic Images
15. (Image slide; no transcript)
16. Results: Iterative Mixture Modelling
- Component parameters correct for synthetic images
- Component parameters for natural images affected by outliers (esp. at boundaries)
- Subjective assessment of the segmented image
17. Results: Iterative Mixture Modelling
- Examined five information measures:
- Entropy of the image, H(p)
- Entropy of the mixture model, H(q)
- Kullback-Leibler (KL) measure, I(p‖q)
- KL relative to entropy of the image, I(p‖q) / H(p)
- KL relative to entropy of the model, I(p‖q) / H(q)
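With p and q taken as normalised grey-level histograms of the image and the mixture model, the five measures reduce to two primitives; a minimal NumPy sketch (the ratio measures are then just I(p‖q)/H(p) and I(p‖q)/H(q)):

```python
import numpy as np

def entropy(p):
    """H(p) in bits for a normalised histogram p."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def kl(p, q, eps=1e-12):
    """Kullback-Leibler measure I(p||q) in bits; eps guards empty model bins."""
    m = p > 0
    return (p[m] * np.log2(p[m] / np.maximum(q[m], eps))).sum()
```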
18. Results: 2, 3, 4 components
- Good fit for synthetic images
- Gaussian (?)
- Poisson (?)
- Rayleigh opposite (?)
- Rayleigh right (?)
19. Results: 2, 3, 4 components
- Dealt with outliers at boundaries
- Gaussian (?)
- Poisson (?)
- Rayleigh opposite (?)
- Rayleigh right (?)
20. Results: 2, 3, 4 components
- Overall, segmented images of good quality
- Gaussian (?)
- Poisson (?)
- Rayleigh opposite (?)
- Rayleigh right (?)
- Except for images with outliers
21. Results: 2, 3, 4 components
- Time unreasonable for 4 components (complexity increases exponentially)
- Poisson takes 4 times longer than the other distributions
22. Matches example
23. 3-Gaussian mixture model
24. 3-Poisson mixture model
25. 3-Rayleigh mixture model
26. Results: 2, 3, 4 components
- Gaussians:
- H(p) < H(q) for all images
- Poissons:
- H(p) > H(q) for all successfully thresholded images
- Rayleighs:
- H(p) ≈ H(q)
27. Results: 2, 3, 4 components
- I(p‖q) decreased as the number of components increased, as was to be expected
- I(p‖q) / H(p)
- I(p‖q) / H(q)
28. A relative criterion
- Is there value in comparing models of different complexities?
- From these results, probably not
- But comparing models of similar complexities looks OK
29. Mixture modelling with Snob
- Problem: overfitting the data on natural and synthetic images
- E.g., a 512 x 512 image has 262 144 pixels to classify
- Cheaper for Snob to make more classes
30. Sampling
- Randomly sampling the data at different rates
- Snob finding very good classes!
- Sampling image alumina.gif at 100 pixels (from a total of 65 536)
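The sampling step itself is simple; a sketch, with a random stand-in array in place of the real image:

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in for a 256 x 256 grey-level image (65 536 pixels, as for alumina.gif).
img = rng.integers(0, 256, size=(256, 256))
flat = img.ravel()
# Random sample of 100 pixel values to hand to Snob instead of the full image.
sample = rng.choice(flat, size=100, replace=False)
```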
31. Alumina example
- Over many runs, found two classes:
- 0.20 N(91.80, 23.20)
- 0.80 N(206.80, 11.40)
- Compare to the Iterative method:
- 0.19 N(94.30, 26.56)
- 0.81 N(206.21, 11.80)
32. Alumina example
- Message length: 4.94 bpp
- Not all images so successful at just 100 pixels
- All seem to be OK at about 500 pixels
33. Snob and Thresholding
- Even sampling at such small rates, Snob is handling the missing data very well!
- Since we need to sample at such small rates, we did not compare Poissons as hoped
- More work needs to be done, but it looks promising
34. An Objective Criterion
- Takes the complexity of the mixture model into account when calculating the message length
- Message Length is a very good candidate for use as an objective criterion for thresholding
35. Aims of Project
- Investigate mixture modelling as an approximation to the image's histogram
- Investigate a relative / objective criterion for assessment of thresholding results
36. Conclusion
- Iterative Method:
- Consistent results
- Optimal no. of components found by trial and error
- Complexity
- Snob:
- Problem with overfitting a large no. of data points
- Sampling at very small rates working well
37. Evaluation Criteria
- Kullback-Leibler measure
- Relative to entropy
- Relative to model complexity
- Minimum Message Length
- Promising as an objective criterion
38. Future Work
- A better way to initially assign pixels to classes
- Modify Snob to do this
- More testing with Snob
- Addition of more distributions to the Iterative method