Chapter 6 Image Enhancement - PowerPoint PPT Presentation


Transcript and Presenter's Notes



1
Chapter 6: Image Enhancement
  • Chuan-Yu Chang, Ph.D.
  • Dept. of Electronic Engineering
  • National Yunlin University of Science and
    Technology
  • chuanyu@yuntech.edu.tw
  • Office: ES709
  • Tel: 05-5342601 ext. 4337

2
Image Enhancement
  • The purpose of image enhancement methods is to
    process an acquired image for better contrast
    and visibility of features of interest, for visual
    examination and subsequent computer-aided
    analysis and diagnosis.
  • Different medical imaging modalities provide
    specific characteristic information about
    internal organs or biological tissues.
  • Image contrast and visibility of the features of
    interest depend on the imaging modality and the
    anatomical regions.
  • There is no unique general theory or method for
    processing all kinds of medical images for
    feature enhancement.
  • Specific medical imaging applications present
    different challenges in image processing for
    feature enhancement.

3
Image Enhancement (cont.)
  • Medical images from specific modalities need to
    be processed using a method that is suitable to
    enhance the features of interest.
  • Chest X-ray radiographic image
  • Required to improve the visibility of hard bony
    structure.
  • X-ray mammogram
  • Required to enhance visibility of
    microcalcification.
  • A single image-enhancement method may not serve
    both of these applications.
  • Image enhancement tasks and methods are very much
    application dependent.

4
Image Enhancement (cont.)
  • Image enhancement tasks are usually characterized
    in two categories
  • Spatial domain methods
  • Manipulate image pixel values in the spatial
    domain based on the distribution statistics of
    the entire image or local regions.
  • Histogram transformation, spatial filtering,
    region growing, morphological image processing
    and model-based image estimation
  • Frequency domain methods
  • Manipulate information in the frequency domain
    based on the frequency characteristics of the
    image.
  • Frequency filtering, homomorphic filtering and
    wavelet processing methods
  • Model-based techniques are also used to extract
    specific features for pattern recognition and
    classification.
  • Hough transform, matched filtering, neural
    networks, knowledge-based systems

5
Spatial Domain Methods
  • Spatial domain methods process an image with
    pixel-by-pixel transformations based on histogram
    statistics or neighborhood operations.
  • They are generally faster than Fourier-domain
    methods.
  • Frequency filtering methods may provide better
    results in some applications if a priori
    information about the characteristic frequency
    components of the noise and features of interest
    is available.
  • For example, spike-based degradation in MRI can
    be removed by the Wiener filtering method.

6
Background
  • Spatial domain
  • The aggregate of pixels composing an image.
  • Operate directly on these pixels

A spatial domain process will be denoted by
g(x,y) = T[f(x,y)]
where f(x,y) is the input image, g(x,y) is the
processed image, and T is an operator defined over
a neighborhood of (x,y).
T is also called a mask, filter, kernel, template,
or window.
7
Background (cont.)
  • Transformation function
  • s = T(r)
  • where T is a gray-level transformation function
  • Processing technologies
  • Point processing
  • Enhancement at any point in an image depends only
    on the gray level at that point.
  • Mask processing or filtering

(Figure: thresholding and contrast-stretching transformation functions)
8
Some Basic Gray Level Transforms
  • Some basic gray-level transforms
  • s = T(r)
  • r: the gray-level value before processing
  • s: the gray-level value after processing

9
Some Basic Gray Level Transforms (cont.)
  • Image Negatives
  • Reversing the intensity levels of an image
  • Photographic Negative
  • s = L − 1 − r
  • Suited for enhancing white or gray detail
    embedded in dark regions of an image
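The slides give no code; as a minimal NumPy sketch (function name is mine), the negative transform s = L − 1 − r is a one-liner:

```python
import numpy as np

def negative(img, L=256):
    """Image negative: s = (L - 1) - r, reversing the intensity levels."""
    return (L - 1) - img

r = np.array([[0, 64], [128, 255]], dtype=np.uint8)
s = negative(r)   # dark pixels become bright and vice versa
```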

10
Some Basic Gray Level Transforms (cont.)
  • Log Transformations
  • s = c log(1 + r)
  • Maps a narrow range of low gray-level values in
    the input image into a wider range of output
    levels.
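A hedged NumPy sketch of the log transform (choosing c so that the maximum input maps to L − 1 is my assumption, a common convention rather than something the slides specify):

```python
import numpy as np

def log_transform(img, L=256):
    # s = c log(1 + r), with c scaled so the maximum input maps to L - 1
    c = (L - 1) / np.log1p(img.max())
    return c * np.log1p(img.astype(np.float64))

r = np.arange(256, dtype=np.float64).reshape(16, 16)
s = log_transform(r)   # narrow dark range expanded into a wider output range
```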

11
Some Basic Gray Level Transforms (cont.)
  • Power-Law Transformations
  • s = c r^γ
  • s = c (r + ε)^γ
  • where c and γ are positive constants
  • Power-law curves with fractional values of γ map
    a narrow range of dark input values into a wider
    range of output values, with the opposite being
    true for higher values of input levels.

12
Some Basic Gray Level Transforms (cont.)
  • Gamma Correction
  • The process used to correct this power-law
    response phenomenon
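An illustrative sketch of the power-law transform s = c r^γ, applied on intensities normalized to [0, 1] (the normalization is my implementation choice):

```python
import numpy as np

def power_law(img, c=1.0, gamma=0.4, L=256):
    # s = c * r**gamma; gamma < 1 expands dark values, gamma > 1 expands bright ones
    r = img.astype(np.float64) / (L - 1)
    return c * (L - 1) * np.power(r, gamma)

r = np.array([[0.0, 64.0], [128.0, 255.0]])
s = power_law(r, gamma=0.4)   # gamma < 1 brightens dark regions
```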

13
Some Basic Gray Level Transforms (cont.)
  • Example 3.1
  • MR image of fractured human spine

c = 1, γ = 0.6
c = 1, γ = 0.4
c = 1, γ = 0.3
14
Some Basic Gray Level Transforms (cont.)
15
Some Basic Gray Level Transforms (cont.)
Piecewise-Linear Transformation Functions
  • Contrast stretching
  • To increase the dynamic range of the gray levels
    in the image being processed.
  • Linear (identity) function
  • If r1 = s1 and r2 = s2
  • Thresholding
  • If r1 = r2, s1 = 0 and s2 = L − 1

Control points
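A contrast stretch through the control points (r1, s1) and (r2, s2) can be sketched with np.interp (a NumPy illustration; the endpoint anchors at (0,0) and (L−1, L−1) are assumed):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    # piecewise-linear map through (0,0), (r1,s1), (r2,s2), (L-1,L-1)
    return np.interp(img, [0, r1, r2, L - 1], [0, s1, s2, L - 1])

img = np.array([0, 70, 105, 140, 255], dtype=np.float64)
out = contrast_stretch(img, r1=70, s1=20, r2=140, s2=220)
```

Setting r1 = s1 and r2 = s2 gives the identity; letting r1 → r2 with s1 = 0 and s2 = L − 1 degenerates into thresholding, as the slide notes.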
16
Some Basic Gray Level Transforms (cont.)
Piecewise-Linear Transformation Function
  • Gray-level Slicing
  • Highlighting a specific range of gray levels in
    an image.

17
Some Basic Gray Level Transforms (cont.)
  • Bit-plane Slicing
  • Highlighting the contribution made to total image
    appearance by specific bits.
  • Separating a digital image into its bit planes is
    useful for analyzing the relative importance
    played by each bit of the image.
  • Determining the adequacy of the number of bits
    used to quantize each pixel.
  • Image compression.
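Bit-plane slicing of an 8-bit image reduces to shift-and-mask operations; a small sketch (names mine):

```python
import numpy as np

def bit_planes(img):
    # plane k holds bit k of every pixel (plane 0 = least significant)
    return [(img >> k) & 1 for k in range(8)]

img = np.array([[0b10110010, 0xFF]], dtype=np.uint8)
planes = bit_planes(img)
# summing the planes back, weighted by 2**k, recovers the original image
recon = sum(p.astype(np.uint16) << k for k, p in enumerate(planes))
```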

18
Some Basic Gray Level Transforms (cont.)
  • An 8-bit fractal image

19
Some Basic Gray Level Transforms (cont.)
  • The eight bit planes of the image in Fig. 3.13

20
Histogram Processing
Histogram: h(r_k) = n_k, where r_k is the k-th
gray level and n_k is the number of pixels in the
image having gray level r_k.
Normalized histogram: p(r_k) = n_k / n, where n is
the total number of pixels.
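The two definitions above map directly onto np.histogram; a minimal check on a toy image:

```python
import numpy as np

img = np.array([[0, 1, 1],
                [2, 2, 2]], dtype=np.uint8)

h, _ = np.histogram(img, bins=256, range=(0, 256))  # h(r_k) = n_k
p = h / img.size                                     # p(r_k) = n_k / n
```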
21
Medical Images and Histograms
T2 weighted proton density image
X-ray CT image
22
Histogram Processing (cont.)
  • Histogram Equalization
  • Assume that the transformation function T(r)
    satisfies the following
  • T(r) is single-valued and monotonically
    increasing in the interval 0 ≤ r ≤ 1
  • 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1

23
Histogram Processing (cont.)
  • Histogram equalization automatically determines a
    transformation function that seeks to produce an
    output image that has a uniform histogram.
  • The histogram equalization method forces image
    intensity levels to be redistributed with an
    equal probability of occurrence.
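The CDF-based mapping s_k = (L−1)·CDF(r_k) can be sketched as a lookup table (a standard discrete implementation; function name is mine):

```python
import numpy as np

def equalize(img, L=256):
    # s_k = (L-1) * CDF(r_k): redistribute levels toward equal probability
    h, _ = np.histogram(img, bins=L, range=(0, L))
    cdf = np.cumsum(h) / img.size
    lut = np.round((L - 1) * cdf).astype(np.uint8)
    return lut[img]

dark = np.arange(64, dtype=np.uint8).repeat(4).reshape(16, 16)  # levels 0..63 only
eq = equalize(dark)   # dynamic range stretched toward the full scale
```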

24
Histogram Equalization
25
Histogram Modification
  • The histogram equalization method can cause
    saturation in some regions of the image resulting
    in loss of details and high frequency information
    that may be necessary for interpretation.
  • If a desired distribution of gray values is known
    a priori, a histogram modification method is used
    to apply a transformation that changes the gray
    values to match the desired distribution.
  • The target distribution can be obtained from a
    good contrast image that is obtained under
    similar imaging conditions.

26
Histogram Modification
  • The conventional scaling method of changing gray
    values from the range [a, b] to [c, d] can be
    given by a linear transformation as
    z_new = (d − c)(z − a)/(b − a) + c
    where z and z_new are the original and new gray
    values of a pixel in the image.

27
Histogram Modification(cont.)
  • Histogram modification (Specification)
  • To specify the shape of the histogram that we
    wish the processed image to have.

s_k = T(r_k) = (L − 1) Σ_{j=0..k} p_r(r_j)   (6.5)
G(z_k) = (L − 1) Σ_{i=0..k} p_z(z_i)   (6.6)
z_k = G⁻¹(s_k)   (6.7)
28
Histogram Processing (cont.)
1. Apply histogram equalization to the input image.
2. Apply histogram equalization to the specified histogram to obtain G(z).
3. Map each s_k to the corresponding z_k, with z_k restricted to the range [0, L−1].
29
Procedure for histogram matching
  1. Obtain the histogram of the given image.
  2. Use Eq. (6.5) to precompute a mapped level sk for
    each level rk.
  3. Obtain the transformation function G(z) from the
    given pz(z) using Eq. (6.6).
  4. Precompute zk for each value of sk using the
    scheme defined in Eq. (6.7).
  5. Use the values from steps (2) and (4) to map each
    rk to its corresponding level sk, then map level
    sk into the final level zk.
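The five steps above can be sketched in NumPy. Realizing Eq. (6.7) with np.searchsorted (smallest z_k with G(z_k) ≥ s_k) is one reasonable reading of the inverse mapping, and the target histogram here is a made-up example:

```python
import numpy as np

def match_histogram(img, p_z, L=256):
    # steps 1-2: s_k from the input CDF (histogram equalization)
    h, _ = np.histogram(img, bins=L, range=(0, L))
    s = np.round((L - 1) * np.cumsum(h) / img.size)
    # step 3: G(z) from the specified histogram p_z(z)
    G = np.round((L - 1) * np.cumsum(p_z))
    # steps 4-5: for each s_k pick the smallest z_k with G(z_k) >= s_k
    z = np.searchsorted(G, s).clip(0, L - 1)
    return z[img].astype(np.uint8)

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
p_z = np.zeros(256)
p_z[128:] = 1.0 / 128          # hypothetical target: all mass in the upper half
out = match_histogram(img, p_z)
```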

30
Histogram Processing (cont.)
  • Example 3.4 Comparison between histogram
    equalization and histogram matching

31
Histogram Processing (cont.)
32
Histogram Processing (cont.)
33
Image averaging
  • Signal averaging is a well-known method for
    enhancing the signal-to-noise ratio.
  • A sequence of images can be averaged for noise
    reduction, leading to smoothing effects.
  • Image averaging
  • Noisy image: g(x,y) = f(x,y) + η(x,y)   (6.8)
  • Averaging K different noisy images:
    ḡ(x,y) = (1/K) Σ_{i=1..K} g_i(x,y)   (6.9)
  • The standard deviation at any point in the
    average image is
    σ_ḡ(x,y) = σ_η(x,y) / √K   (6.10)
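A quick numerical sanity check of Eqs. (6.8)–(6.10) on synthetic frames (the image, noise level, and K are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                            # noise-free image
frames = [f + rng.normal(0.0, 20.0, f.shape)            # K noisy acquisitions
          for _ in range(100)]
g_bar = np.mean(frames, axis=0)                         # Eq. (6.9)
# residual noise should drop roughly as sigma / sqrt(K) = 20 / 10 = 2
```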
34
Enhancement using Arithmetic/Logic Operations
(cont.)
  • Example 3.8 Noise reduction by image averaging

35
Enhancement using Arithmetic/Logic Operations
(cont.)
36
Image Subtraction
  • Image Subtraction
  • If two properly registered images of the same
    object are obtained with different imaging
    conditions, a subtraction operation on the
    acquired image can enhance the information about
    the changes in imaging conditions.
  • The enhancement of difference between images

37
Image Subtraction (cont.)
  • The values in a difference image can range from a
    minimum of −255 to a maximum of 255. How can this
    range be handled?
  • Solution 1: g'(x,y) = [g(x,y) + 255] / 2
  • Solution 2: g'(x,y) = g(x,y) − min g(x,y), then
    g''(x,y) = 255 · g'(x,y) / max g'(x,y)
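Solution 2 (shift to zero, then scale to full range) as a small sketch:

```python
import numpy as np

def rescale_difference(d, L=256):
    # shift the minimum to 0, then scale so the maximum becomes L - 1
    d = d - d.min()
    return d * (L - 1) / d.max()

diff = np.array([-255.0, 0.0, 255.0])   # extreme values of a difference image
out = rescale_difference(diff)
```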

38
Neighborhood Operations
 
  • The spatial filtering methods using neighborhood
    operations involve the convolution of the input
    image with a specific mask to enhance an image.
  • The gray value of each pixel is replaced by the
    new value, computed according to the mask applied
    in the neighborhood of the pixel.
  • The neighborhood of a pixel may be defined in any
    appropriate manner based on a simple
    connectedness or any other adaptive criterion.

39
Basics of spatial filtering
  • Basics of spatial filtering

g(x,y) = Σ_{s=−a..a} Σ_{t=−b..b} w(s,t) f(x+s, y+t)
For an image of size M×N and a mask of size m×n,
a = (m−1)/2 and b = (n−1)/2.
Convolving a mask with an image
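The double sum above can be sketched directly in NumPy (zero padding at the borders is my choice; the slides don't specify border handling):

```python
import numpy as np

def apply_mask(img, w):
    # g(x,y) = sum_s sum_t w(s,t) f(x+s, y+t), zero padding at the borders
    m, n = w.shape
    a, b = (m - 1) // 2, (n - 1) // 2
    f = np.pad(img.astype(np.float64), ((a, a), (b, b)))
    g = np.zeros(img.shape)
    for s in range(m):
        for t in range(n):
            g += w[s, t] * f[s:s + img.shape[0], t:t + img.shape[1]]
    return g

img = np.arange(16, dtype=np.float64).reshape(4, 4)
delta = np.zeros((3, 3)); delta[1, 1] = 1.0     # identity mask
```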
40
Basics of spatial filtering (cont.)
41
Smoothing Spatial Filter
  • Smoothing filters are used for blurring and for
    noise reduction.
  • Smoothing linear filters
  • Sometimes called averaging filters or lowpass
    filters
  • Box filter
  • A spatial averaging filter in which all
    coefficients are equal
  • Weighted average
  • Pixels are multiplied by different coefficients
42
Smoothing Spatial Filter (cont.)
  • Example 3.9
  • Image smoothing with masks of various sizes

43
Smoothing Spatial Filter (cont.)
44
Order-Statistic Filters
  • Order-statistic filters (nonlinear spatial
    filters)
  • Based on ordering the pixels contained in the
    image area encompassed by the filter, then
    replacing the value of the center pixel with the
    value determined by the ranking result.
  • Median filter
  • Particularly effective in the presence of impulse
    noise (salt-and-pepper noise)
  • Algorithm:
    Step 1: sort the values of the pixels
    encompassed by the filter.
    Step 2: determine their median.
    Step 3: assign the median to the center pixel.
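The three steps above can be sketched as a brute-force median filter (edge replication at the borders is an assumption of this sketch):

```python
import numpy as np

def median_filter(img, k=3):
    # steps 1-3: sort the window, take the median, assign it to the centre
    a = k // 2
    pad = np.pad(img, a, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(pad[i:i + k, j:j + k])
    return out

noisy = np.full((7, 7), 10, dtype=np.uint8)
noisy[3, 3] = 255                      # a single salt impulse
clean = median_filter(noisy)           # the impulse never survives the ranking
```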

45
Order-Statistic Filters
  • Max filter
  • Min filter

46
Sharpening Spatial Filters
  • Objectives
  • To highlight fine detail in an image
  • To enhance detail that has been blurred
  • The derivatives of a digital function are defined
    in terms of differences
  • First derivative
  • Must be zero in flat segments
  • Must be nonzero at the onset of a gray-level step
    or ramp
  • Must be nonzero along ramps

47
Sharpening Spatial Filters
  • Second derivative
  • Must be zero in flat areas
  • Must be nonzero at the onset and the end of a
    gray-level step or ramp.
  • Must be zero along ramps of constant slope

48
Sharpening Spatial Filters (cont.)
49
Sharpening Spatial Filters (cont.)
  • Summary
  • First-order derivatives generally produce thicker
    edges in an image.
  • Second-order derivatives have a stronger response
    to fine detail
  • First-order derivatives generally have a stronger
    response to a gray-level step
  • Second-order derivatives produce a double
    response at step changes in gray level.

50
Use of Second Derivatives for Enhancement- The
Laplacian
  • Isotropic filter (rotation invariant)
  • A filter whose response is independent of the
    direction of the discontinuities in the image.
  • Laplacian: ∇²f = ∂²f/∂x² + ∂²f/∂y²
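The discrete Laplacian mask sums the four neighbors and subtracts four times the center; a sketch using np.roll (the periodic boundary it implies is an implementation shortcut, not from the slides):

```python
import numpy as np

def laplacian(img):
    # del^2 f = f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4 f(x,y)
    f = img.astype(np.float64)
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)

flat = np.full((8, 8), 50.0)
step = np.zeros((8, 8)); step[:, 4:] = 100.0
```

Sharpening then follows as g = f − ∇²f when the mask's center coefficient is negative.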

51
Use of Second Derivatives for Enhancement- The
Laplacian
52
Laplacian Second Order Gradient for Edge
Detection
 
53
Image Sharpening with Laplacian
 
54
Use of Second Derivatives for Enhancement- The
Laplacian
  • Image enhancement

55
Use of Second Derivatives for Enhancement- The
Laplacian (cont.)
  • Example 3.11
  • Imaging sharpening with the Laplacian.

56
Use of Second Derivatives for Enhancement- The
Laplacian (cont.)
  • Example 3.12
  • Image enhancement using a composite Laplacian mask

57
Use of Second Derivatives for Enhancement- The
Laplacian (cont.)
  • Unsharp masking and high-boost filtering
  • Used in the publishing industry
  • Unsharp masking: sharpening an image by
    subtracting a blurred version of the image from
    the image itself.
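Both operations are one-liners; with f_s = f − f_blur for unsharp masking and f_hb = A·f − f_blur for high-boost (A = 1 reduces to unsharp masking):

```python
import numpy as np

def unsharp_mask(f, f_blur):
    # f_s = f - f_blur: subtract a blurred version from the image itself
    return f - f_blur

def high_boost(f, f_blur, A=1.5):
    # f_hb = A*f - f_blur = (A-1)*f + f_s
    return A * f - f_blur

f = np.array([[10.0, 200.0], [10.0, 200.0]])
f_blur = np.full((2, 2), 105.0)          # hypothetical blurred version
```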

58
Use of Second Derivatives for Enhancement- The
Laplacian (cont.)
  • Example 3.13
  • Image enhancement with a high-boost filter

59
Use of First Derivatives for Enhancement -The
Gradient
  • The gradient of f at coordinates (x,y) is defined
    as the two-dimensional column vector
    ∇f = [Gx, Gy]^T = [∂f/∂x, ∂f/∂y]^T
  • The magnitude of this vector is given by
    ∇f = (Gx² + Gy²)^(1/2), commonly approximated as
    |Gx| + |Gy|
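A sketch of the gradient magnitude using forward differences (Sobel-style masks would work equally well; this is the simplest discrete approximation):

```python
import numpy as np

def gradient_magnitude(img):
    # |grad f| ~ sqrt(Gx^2 + Gy^2), forward differences
    f = img.astype(np.float64)
    gx = np.diff(f, axis=0, append=f[-1:, :])
    gy = np.diff(f, axis=1, append=f[:, -1:])
    return np.sqrt(gx**2 + gy**2)

edge = np.zeros((8, 8)); edge[:, 4:] = 100.0    # vertical step edge
mag = gradient_magnitude(edge)
```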

60
Use of First Derivatives for Enhancement -The
Gradient (cont.)
61
Use of First Derivatives for Enhancement -The
Gradient
  • Example 3.14
  • Use of the gradient for edge enhancement.

62
Image Averaging
 
63
Median Filter
64
Feature Enhancement Using Adaptive Neighborhood
Processing
  • adaptive neighborhood-based image processing
    technique
  • Using a low-level analysis and knowledge about
    desired features in designing a contrast
    enhancement function.
  • The contrast enhancement function is then used to
    enhance mammographic features while suppressing
    the noise.
  • An adaptive neighborhood structure is defined as
    a set of two neighborhoods: inner and outer

65
Feature Enhancement Using Adaptive Neighborhood
Processing
  • Three types of adaptive neighborhood can be
    defined
  • constant ratio
  • maintains the ratio of the inner to outer
    neighborhood size at 1:3
  • constant difference
  • for an inner neighborhood of size n x n, allows
    the size of the outer neighborhood to be
    (n+c) x (n+c)
  • feature adaptive
  • adapts to the arbitrary shape and size of the
    local features; the Center and Surround regions
    are defined using pre-defined similarity and
    distance criteria.
  • Center: consisting of pixels forming that
    feature
  • Surround: consisting of pixels forming the
    background for that feature.

66
Feature Enhancement Using Adaptive Neighborhood
Processing
  • The procedure to obtain the Center and the
    Surround regions:
  • The inner and outer neighborhoods around a pixel
    are grown using the constant difference adaptive
    neighborhood criterion.
  • To define the similarity criterion, gray-level
    and percentage thresholds are defined.
  • Using these thresholds, the region around each
    pixel in the image is grown in all directions
    until the similarity criterion is violated.
  • The region formed by all pixels that have been
    included in the neighborhood of the centered
    pixel while satisfying the similarity criterion
    is designated as the Center region.
  • The Surround region is composed of all pixels
    contiguous to the Center region.

67
Feature Enhancement Using Adaptive Neighborhood
Processing
  • The local contrast C(x,y) for the centered pixel
    is then computed from the average intensities of
    the Center and Surround regions.
  • The Contrast Enhancement Function (CEF) is used
    as a function to modify the contrast distribution
    in the contrast domain of the image.
  • The contrast histogram is analyzed and correlated
    to the requirements of feature enhancement. Using
    the CEF, a new contrast value C'(x,y) is
    computed.
  • The new contrast value C'(x,y) is used to compute
    a new pixel value for the enhanced image g'(x,y).

68
Feature Adaptive Neighborhood
Region growing for a feature adaptive neighborhood
Image pixel values in a 7x7 neighborhood
Central and Surround regions for the feature
adaptive neighborhood
69
Micro-calcification Enhancement
70
Frequency-Domain Filtering
  • Frequency domain filtering methods process an
    acquired image in the Fourier domain to emphasize
    or de-emphasize specified frequency components.
  • The low frequency range components usually
    represent shapes and blurred structures in the
    image.
  • The high frequency information belongs to sharp
    details, edges and noise.
  • A low-pass filter with attenuation of
    high-frequency components provides image
    smoothing and noise removal.
  • A high-pass filter with attenuation of
    low-frequency components extracts edges and sharp
    details for image enhancement and sharpening
    effects.

71
Filtering in the Frequency domain (cont.)
  • Spatial domain:
  • g(x,y) = h(x,y) * f(x,y)
  • Frequency domain:
  • H(u,v) is called a filter.
  • The Fourier transform of the output image is
  • G(u,v) = H(u,v) F(u,v)
  • The filtered image is obtained simply by taking
    the inverse Fourier transform of G(u,v):
  • Filtered image = F⁻¹[G(u,v)]

72
Filtering in the Frequency domain (cont.)
  • Basics of filtering in the frequency domain
  • Multiply the input image by (−1)^(x+y) to center
    the transform
  • Compute F(u,v), the DFT of the image from (1)
  • Multiply F(u,v) by a filter function H(u,v)
  • Compute the inverse DFT of the result in (3)
  • Obtain the real part of the result in (4)
  • Multiply the result in (5) by (−1)^(x+y)
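The steps above can be sketched with NumPy's FFT; fftshift/ifftshift plays the role of the (−1)^(x+y) centring:

```python
import numpy as np

def filter_frequency_domain(img, H):
    # centre the spectrum, multiply by H(u,v), invert, keep the real part
    F = np.fft.fftshift(np.fft.fft2(img))
    G = H * F
    return np.real(np.fft.ifft2(np.fft.ifftshift(G)))

img = np.random.default_rng(1).random((32, 32))
allpass = np.ones((32, 32))     # H = 1 everywhere leaves the image unchanged
```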

73
Filtering in the Frequency domain (cont.)
  • Basic steps for filtering in the frequency domain

74
Frequency-Domain Methods
  • The degraded image g(x,y) is modeled as the
    convolution of the input image f(x,y) with a
    point spread function (PSF) h(x,y):
    g(x,y) = h(x,y) * f(x,y)
  • Taking the Fourier transform gives
    G(u,v) = H(u,v) F(u,v)
  • The image can be restored by inverse filtering:
    F̂(u,v) = G(u,v) / H(u,v)

Dividing the spectrum of the degraded image by H(u,v) yields an estimate of F(u,v).
However, the Fourier transform N(u,v) of the noise is unknown, and wherever H(u,v)
approaches 0, the term N(u,v)/H(u,v) dominates the estimate of F(u,v).
75
Low-pass Filtering
  • The ideal low-pass filter suppresses noise and
    high-frequency information providing a smoothing
    effect to the image.
  • An ideal low-pass filter can be designed by
    assigning a frequency cut-off value w0. The
    frequency cut-off value can also be expressed as
    the distance D0 from the origin in the Fourier
    domain.

76
Low-Pass Filtering (cont.)
  • Ideal low-pass filter
  • 2-D ideal lowpass filter:

H(u,v) = 1 if D(u,v) ≤ D0; H(u,v) = 0 if D(u,v) > D0   (4.3-2)
where D(u,v) is the distance from the point (u,v) to the center of the frequency rectangle:
D(u,v) = [(u − M/2)² + (v − N/2)²]^(1/2)   (4.3-3)
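Eqs. (4.3-2)/(4.3-3) as a sketch, building the centred distance grid explicitly:

```python
import numpy as np

def ideal_lowpass(shape, D0):
    # H(u,v) = 1 if D(u,v) <= D0, else 0, with D measured from the centred origin
    M, N = shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None]**2 + v[None, :]**2)
    return (D <= D0).astype(np.float64)

H = ideal_lowpass((64, 64), D0=10)
```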
77
Low-Pass Filtering (cont.)
  • Cutoff frequency
  • The point of transition between H(u,v) = 1 and
    H(u,v) = 0
  • Total image power:
    P_T = Σ_u Σ_v P(u,v), where P(u,v) = |F(u,v)|²   (4.3-4)
  • Percentage of power enclosed within a circle of
    radius r:
    α = 100 [Σ_{(u,v)} P(u,v) / P_T]   (4.3-5)
78
Example: Image power as a function of distance
from the origin of the DFT
Radii: 5, 15, 30, 80, and 230
Enclosed image power: 92, 94.6, 96.4, 98, and 99.5 percent
79
Example 4.4 Image power as a function of distance
from the origin of the DFT (cont.)
Note the blurring and ringing in the filtered results.
80
Low-Pass Filtering (cont.)
81
Low-Pass Filtering (cont.)
  • Butterworth low-pass filter (BLPF) of order n:
  • H(u,v) = 1 / [1 + (D(u,v)/D0)^(2n)]
  • Unlike the ILPF, the BLPF transfer function
    H(u,v) has no sharp discontinuity between passed
    and filtered frequencies.

82
Low-Pass Filtering (cont.)
  • A first-order BLPF has no ringing.
  • Ringing increases with the filter order; a
    high-order BLPF approaches the behavior of the
    ideal lowpass filter.

83
Chapter 4 Image Enhancement in the Frequency
Domain
84
Low-Pass Filtering (cont.)
  • Gaussian low-pass filter (GLPF):
    H(u,v) = exp(−D²(u,v) / 2D0²)
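The Butterworth and Gaussian lowpass transfer functions, sketched over the same centred distance grid:

```python
import numpy as np

def radial_distance(shape):
    # D(u,v): distance from the centred origin of the frequency rectangle
    M, N = shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    return np.sqrt(u[:, None]**2 + v[None, :]**2)

def butterworth_lowpass(shape, D0, n=2):
    # H = 1 / (1 + (D/D0)^(2n)): smooth roll-off, H = 0.5 at D = D0
    return 1.0 / (1.0 + (radial_distance(shape) / D0)**(2 * n))

def gaussian_lowpass(shape, D0):
    # H = exp(-D^2 / (2 D0^2)): no ringing at any D0
    D = radial_distance(shape)
    return np.exp(-D**2 / (2.0 * D0**2))

Hb = butterworth_lowpass((64, 64), D0=16)
Hg = gaussian_lowpass((64, 64), D0=16)
```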

85
Low-Pass Filtering
The low-pass filtered MR brain image
Low-pass filter function H(u,v)
The Fourier transform of the filtered MR brain
image
The Fourier transform of the original MR brain
image
86
High Pass Filtering
  • High-pass filtering is used for image sharpening
    and for the extraction of high-frequency
    information such as edges.

87
High Pass Filtering (cont.)
  • Ideal highpass filter (IHPF):
    H(u,v) = 0 if D(u,v) ≤ D0, 1 otherwise   (6.33)
  • Butterworth highpass filter (BHPF) of order n:
    H(u,v) = 1 / [1 + (D0/D(u,v))^(2n)]   (6.34)
  • Gaussian highpass filter (GHPF):
    H(u,v) = 1 − exp(−D²(u,v) / 2D0²)   (6.35)
88
High Pass Filtering (cont.)
89
High Pass Filtering (cont.)
  • Spatial representations of typical (a) ideal (b)
    Butterworth, and (c) Gaussian frequency domain
    highpass filters

90
High Pass Filtering (cont.)
  • Results of ideal highpass filtering with
    D0 = 15, 30, and 80

91
High Pass Filtering (cont.)
  • Results of order-2 BHPF highpass filtering with
    D0 = 15, 30, and 80

92
High Pass Filtering (cont.)
  • Results of GHPF highpass filtering with
    D0 = 15, 30, and 80

93
Inverse Filtering
  • Direct inverse filtering
  • Compute an estimate, F̂(u,v), of the transform of
    the original image simply by dividing the
    transform of the degraded image, G(u,v), by the
    degradation function:
    F̂(u,v) = G(u,v) / H(u,v)   (5.7-1)
  • Even if the degradation function is known
    exactly, the undegraded image cannot be recovered
    exactly, because the noise transform N(u,v) is
    unknown:
    F̂(u,v) = F(u,v) + N(u,v) / H(u,v)   (5.7-2)
  • Wherever the degradation function is zero or very
    small, the term N(u,v)/H(u,v) dominates the
    estimate.
94
Inverse Filtering (cont.)
  • One way to get around the zero or small-value
    problem is to limit the filter frequencies to
    values near the origin.
  • We know that H(0,0) is equal to the average value
    of h(x,y) and that this is usually the highest
    value of H(u,v) in the frequency domain.
  • Thus, by limiting the analysis to frequencies
    near the origin, we reduce the probability of
    encountering zero values.
  • In general, direct inverse filtering has poor
    performance.
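A sketch of radially limited inverse filtering: divide by H only within a radius of the centred origin, with a small epsilon guard (both the epsilon and the noise-free H below are assumptions of this illustration):

```python
import numpy as np

def inverse_filter(G, H, radius, eps=1e-8):
    # F_hat = G / H near the origin; zero elsewhere to avoid small-H blow-up
    M, N = G.shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None]**2 + v[None, :]**2)
    return np.where(D <= radius, G / (H + eps), 0.0)

F = np.random.default_rng(2).random((32, 32))   # "true" centred spectrum
H = np.full((32, 32), 0.5)                      # hypothetical degradation
F_hat = inverse_filter(H * F, H, radius=100)    # noise-free: recovers F
```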

95
Inverse Filtering (cont.)
Results of full inverse filtering G(u,v)/H(u,v),
and with H(u,v) cut off at radii of 40, 70, and 85.
96
Minimum Mean Square Error (Wiener) Filtering
  • Incorporates both the degradation function and
    the statistical characteristics of noise into the
    restoration process.
  • The objective is to find an estimate f̂ of the
    uncorrupted image f such that the mean square
    error between them is minimized:
    e² = E{ (f − f̂)² }   (5.8-1)
  • The frequency-domain solution is
    F̂(u,v) = [ (1/H) |H|² / (|H|² + S_η/S_f) ] G(u,v)   (6.20)
  • When the power spectra are unknown, the ratio
    S_η/S_f is approximated by a constant K:
    F̂(u,v) = [ (1/H) |H|² / (|H|² + K) ] G(u,v)   (6.21)
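Eq. (6.21) in NumPy form (written with the conjugate, which for real H equals the 1/H form above; the test spectra are synthetic):

```python
import numpy as np

def wiener_filter(G, H, K=0.01):
    # F_hat = [ H* / (|H|^2 + K) ] G, with K approximating S_eta / S_f
    H2 = np.abs(H)**2
    return (np.conj(H) / (H2 + K)) * G

F = np.random.default_rng(3).random((16, 16))   # "true" spectrum
H = np.full((16, 16), 0.5)                      # hypothetical degradation
```

With K → 0 the expression reduces to direct inverse filtering; a nonzero K damps the estimate where |H| is small.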
97
Example 5.12
  • Fig. (a) is the full inverse-filtered result
    shown in Fig. 5.27(a).
  • Fig. (b) is the radially limited inverse-filter
    result of Fig. 5.27(a).
  • Fig. (c) shows the result obtained using
    Eq. (5.8-3) with the degradation function used in
    Example 5.11.

98
Example 5.13
  • From left to right,
  • the blurred image of Fig. 5.26(b) heavily
    corrupted by additive Gaussian noise of zero mean
    and variance of 650.
  • The result of direct inverse filtering
  • The result of Wiener filtering.

99
Constrained Least Squares Filtering
  • The difficulty of the Wiener filter
  • The power spectra of the undegraded image and
    noise must be known
  • A constant estimate of the ratio of the power
    spectra is not always a suitable solution.
  • Constrained Least Squares Filtering
  • Only the mean and variance of the noise are
    needed.

100
Constrained Least Squares Filtering
  • We can express Eq. (5.5-16) in vector-matrix
    form, as
  • Suppose that g(x,y) is of size M x N; then we can
    form the first N elements of the vector g by
    using the image elements in the first row of
    g(x,y), the next N elements from the second row,
    and so on.
  • The resulting vector has dimensions MN x 1; these
    are also the dimensions of f and η.
  • The matrix H then has dimensions MN x MN.
  • Its elements are given by the elements of the
    convolution in Eq. (4.2-30).
  • Central to the method is the issue of the
    sensitivity of H to noise.
  • One way to alleviate the noise-sensitivity
    problem is to base optimality of restoration on a
    measure of smoothness, such as the second
    derivative of an image.

g = H f + η   (5.9-1)
101
Constrained Least Squares Filtering (cont.)
  • Find the minimum of a criterion function C,
    defined in Eq. (5.9-2), subject to the constraint
    in Eq. (5.9-3), where ||·||² is the squared
    Euclidean vector norm and f̂ is the estimate of
    the undegraded image.
  • The frequency-domain solution to this
    optimization problem is given by Eq. (5.9-4),
    where γ is a parameter that must be adjusted so
    that the constraint in Eq. (5.9-3) is satisfied.

C = Σ_x Σ_y [∇²f(x,y)]²   (5.9-2)
||g − H f̂||² = ||η||²   (5.9-3)
F̂(u,v) = [ H*(u,v) / (|H(u,v)|² + γ |P(u,v)|²) ] G(u,v)   (5.9-4)
102
Constrained Least Squares Filtering (cont.)
  • P(u,v) is the Fourier transform of the function
    p(x,y) in Eq. (5.9-5).
  • This function is the same as the Laplacian
    operator.
  • Eq. (5.9-4) reduces to inverse filtering if γ is
    zero.

p(x,y) = [[0, −1, 0], [−1, 4, −1], [0, −1, 0]]   (5.9-5)
103
Constrained Least Squares Filtering (cont.)
γ was selected manually to yield the best visual
results.
104
Constrained Least Squares Filtering (cont.)
  • It is possible to adjust the parameter γ
    interactively until acceptable results are
    achieved.
  • If we are interested in optimality, the parameter
    γ must be adjusted so that the constraint in
    Eq. (5.9-3) is satisfied.
  • Define a residual vector r as in Eq. (5.9-6).
  • Since, from the solution in Eq. (5.9-4), f̂ is a
    function of γ, r is also a function of this
    parameter. It can be shown that φ(γ) = ||r||², as
    in Eq. (5.9-7), is a monotonically increasing
    function of γ.
  • We want to adjust γ so that Eq. (5.9-8) holds.

r = g − H f̂   (5.9-6)
φ(γ) = rᵀr = ||r||²   (5.9-7)
||r||² = ||η||² ± a   (5.9-8)
105
Constrained Least Squares Filtering (cont.)
  • Because φ(γ) is monotonic, finding the desired
    value of γ is not difficult.
  • Step 1: specify an initial value of γ.
  • Step 2: compute ||r||².
  • Step 3: stop if Eq. (5.9-8) is satisfied;
    otherwise return to Step 2 after increasing γ if
    ||r||² < ||η||² − a, or decreasing γ if
    ||r||² > ||η||² + a. Use the new value of γ in
    Eq. (5.9-4) to recompute the optimum estimate f̂.
106
Constrained Least Squares Filtering (cont.)
  • To use the algorithm, we need the quantities
    ||r||² and ||η||². To compute ||r||², note from
    Eq. (5.9-6) that R(u,v) is given by Eq. (5.9-9),
    from which we obtain r(x,y) by computing the
    inverse transform of R(u,v); ||r||² then follows
    from Eq. (5.9-10).
  • Consider the variance of the noise over the
    entire image, which we estimate by the
    sample-average method of Eq. (5.9-11), where the
    sample mean m_η is given by Eq. (5.9-12).

R(u,v) = G(u,v) − H(u,v) F̂(u,v)   (5.9-9)
||r||² = Σ_x Σ_y r²(x,y)   (5.9-10)
σ_η² = (1/MN) Σ_x Σ_y [η(x,y) − m_η]²   (5.9-11)
m_η = (1/MN) Σ_x Σ_y η(x,y)   (5.9-12)
107
Constrained Least Squares Filtering (cont.)
  • With reference to the form of Eq. (5.9-10), the
    double summation in Eq. (5.9-11) leads to the
    expression in Eq. (5.9-13).
  • We can thus implement an optimum restoration
    algorithm by having knowledge of only the mean
    and variance of the noise.

||η||² = MN (σ_η² + m_η²)   (5.9-13)
108
Constrained Least Squares Filtering (cont.)
  • The initial value used for γ was 10⁻⁵, the
    correction factor for adjusting γ was 10⁻⁶, and
    the value for a was 0.25.

109
Homomorphic filter
  • An image f(x,y) can be expressed as the product
    of its illumination and reflectance components:
    f(x,y) = i(x,y) r(x,y)   (6.36)
  • Eq. (6.36) cannot be used directly to operate
    separately on the frequency components of
    illumination and reflectance, so we take the
    logarithm:
    z(x,y) = ln f(x,y) = ln i(x,y) + ln r(x,y)   (6.38)
  • Taking the Fourier transform:
    Z(u,v) = F_i(u,v) + F_r(u,v)   (6.39)
110
Homomorphic filter
  • Filtering Z(u,v) with a filter H(u,v) gives
    S(u,v) = H(u,v) Z(u,v)
           = H(u,v) F_i(u,v) + H(u,v) F_r(u,v)   (6.40)
  • In the spatial domain:
    s(x,y) = F⁻¹{S(u,v)}   (6.41)
  • Eq. (6.41) can be expanded into its illumination
    and reflectance terms.
111
Homomorphic filter (cont.)
  • Since z(x,y) was formed by taking the logarithm
    of the original image f(x,y), the inverse
    (exponential) operation yields the enhanced
    image:
    g(x,y) = exp[s(x,y)] = i₀(x,y) r₀(x,y)   (6.42)
112
Homomorphic filter (cont.)
  • Block diagram of homomorphic filtering:
    ln → DFT → H(u,v) → (DFT)⁻¹ → exp

113
Homomorphic filter (cont.)
  • The illumination component of an image generally
    is characterized by slow spatial variations.
  • The reflectance component tends to vary abruptly.
  • The low frequencies of the Fourier transform of
    the logarithm of an image are associated with
    illumination, and the high frequencies with
    reflectance.

114
Homomorphic filter (cont.)
  • Homomorphic filtering requires specification of a
    filter function H(u,v) that affects the low- and
    high-frequency components of the Fourier
    transform in different ways.
  • The filter tends to decrease the contribution
    made by the low frequencies (illumination) and
    amplify the contribution made by high frequencies
    (reflectance).
  • The net result is simultaneous dynamic range
    compression and contrast enhancement.
  • With γH > 1 and γL < 1, the filter attenuates the
    illumination (low-frequency) contribution and
    amplifies the reflectance (high-frequency)
    contribution, compressing the dynamic range while
    enhancing contrast.
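The whole pipeline (ln → FFT → H → IFFT → exp) as a sketch; the Gaussian-shaped H(u,v) ramping from γL to γH is one common choice, and the constant c and D0 below are assumptions:

```python
import numpy as np

def homomorphic(img, D0=30.0, g_low=0.5, g_high=2.0, c=1.0):
    # log-domain frequency filtering: attenuate lows, boost highs
    z = np.log1p(img.astype(np.float64))
    Z = np.fft.fftshift(np.fft.fft2(z))
    M, N = img.shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None]**2 + v[None, :]**2
    H = (g_high - g_low) * (1.0 - np.exp(-c * D2 / D0**2)) + g_low
    s = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))
    return np.expm1(s)   # undo the logarithm

img = np.random.default_rng(4).random((32, 32)) * 255
```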
115
Example 4.10
  • In the original image
  • The details inside the shelter are obscured by
    the glare from the outside walls.
  • Fig. (b) shows the result of processing by
    homomorphic filtering, with γL = 0.5 and γH = 2.0.
  • A reduction of dynamic range in the brightness,
    together with an increase in contrast, brought
    out the details of objects inside the shelter.

116
Wavelet Transform
  • Fourier Transform only provides frequency
    information.
  • Fourier Transform does not provide any
    information about frequency localization.
  • It does not provide information about when a
    specific frequency occurred in the signal.
  • Short-Term Fourier Transform
  • Windowed Fourier Transform can provide
    time-frequency localization limited by the window
    size.
  • The entire signal is split into small windows and
    the Fourier Transform is individually computed
    over each windowed signal.
  • While the STFT provides some localization
    depending on the size of the window, it does not
    provide complete time-frequency localization.
  • Wavelet Transform is a method for complete
    time-frequency localization for signal analysis
    and characterization.

117
Wavelet Transform
  • The wavelet transform provides a series expansion
    of a signal using a set of orthonormal basis
    functions that are generated by scaling and
    translation of the mother wavelet ψ(t) and the
    scaling function φ(t).
  • The wavelet transform decomposes the signal as a
    linear combination of weighted basis functions to
    provide frequency localization with respect to
    the sampling parameter such as time or space.
  • The multi-resolution approach (MRA) of the
    wavelet transform establishes a basic framework
    of the localization and representation of
    different frequencies at different scales.

118
Wavelet Transform
  • In MRA
  • The scaling function is used to create a series
    of approximations of a function or image, each
    differing by a factor of 2 from its nearest
    neighboring approximations.
  • Wavelets are then used to encode the difference
    in information between adjacent approximations.

119
Wavelet Transform..
  • Wavelet Transform
  • Works like a microscope focusing on finer time
    resolution as the scale becomes small to see how
    the impulse gets better localized at higher
    frequency permitting a local characterization
  • Provides Orthonormal bases while STFT does not.
  • Provides a multi-resolution signal analysis
    approach.

120
Wavelet Transform
  • Using scales and shifts of a prototype wavelet, a
    linear expansion of a signal is obtained.
  • Lower frequencies, where the bandwidth is narrow
    (corresponding to a longer basis function) are
    sampled with a large time step.
  • Higher frequencies corresponding to a short basis
    function are sampled with a smaller time step.

121
Wavelet Transform
  • A scaling function φ(t) in time t generates a
    family of functions by scaling and translation:
    φ_{j,k}(t) = 2^{j/2} φ(2^j t − k)   (6.44)
  • k determines the position of φ_{j,k}(t) along the
    t-axis, and j determines its width.
  • The scaling function satisfies the following
    dilation equation (refinement equation):
    φ(t) = Σ_n h[n] √2 φ(2t − n)   (6.45)
    where h[n] is a set of (low-pass) filter
    coefficients.
  • To induce a multi-resolution analysis of L²(R),
    where R is the space of all real numbers, it is
    required to have a nested chain of closed
    subspaces:
    ... ⊂ V₋₁ ⊂ V₀ ⊂ V₁ ⊂ ...   (6.46)
  • Each subspace V_j is spanned by the scaled and
    translated scaling functions φ_{j,k}(t).
122
Wavelet Transform
Expansions of a function in terms of scaling
functions at two successive scales; any function
in V₀ can also be represented in V₁.
123
Wavelet Transform
  • Define a function ψ(t) as the mother wavelet,
    generating the family
    ψ_{j,k}(t) = 2^{j/2} ψ(2^j t − k)   (6.47)
  • The wavelet basis induces an orthogonal
    decomposition of L²(R):
    L²(R) = V₀ ⊕ W₀ ⊕ W₁ ⊕ ...   (6.48)
    where W_j is the subspace spanned by ψ(2^j t − k).
  • ψ(t) can be expressed as a weighted sum of the
    shifted φ(2t) as
    ψ(t) = Σ_n g[n] √2 φ(2t − n)   (6.49)
    where g[n] is a set of (high-pass) filter
    coefficients.
124
Wavelet Transform
  • The wavelet-spanned subspaces satisfy the relation

      V_{j+1} = V_j ⊕ W_j                          (6.50)

  • Since the wavelet functions span the orthogonal
    complement spaces, orthogonality requires the
    scaling and wavelet filter coefficients to be
    related through

      g[n] = (−1)^n h[1 − n]                       (6.51)

  • Let x[n] be an arbitrary square-summable sequence
    representing a signal in the time domain such that

      Σ_n |x[n]|² < ∞                              (6.52)
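As a quick numerical illustration of the relation in Eq. (6.51), the Haar scaling filter can be used to generate its wavelet (high-pass) counterpart; a minimal sketch (the filter values are the standard Haar coefficients, not taken from the slides):

```python
import numpy as np

# Haar low-pass (scaling) filter coefficients h[0], h[1]
h = {0: 1 / np.sqrt(2), 1: 1 / np.sqrt(2)}

# Eq. (6.51): g[n] = (-1)^n h[1 - n]
g = {n: (-1) ** n * h[1 - n] for n in (0, 1)}

# g[0] = 1/sqrt(2), g[1] = -1/sqrt(2): the Haar high-pass filter;
# the two filters are orthogonal: sum_n h[n] g[n] = 0
orth = sum(h[n] * g[n] for n in (0, 1))
```

The same alternating-sign rule (up to an even index shift) generates the high-pass filter from any orthonormal scaling filter.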
125
Wavelet Transform
  • The series expansion of a discrete signal x[n]
    using a set of orthonormal basis functions φ_k[n]
    is given by

      x[n] = Σ_k X[k] φ_k[n]                       (6.53)

    where X[k] = ⟨φ_k(l), x(l)⟩ = Σ_l φ_k[l] x[l] is
    the transform of x[n].
  • All basis functions must satisfy the
    orthonormality condition

      ⟨φ_k(l), φ_m(l)⟩ = δ[k − m]                  (6.54)

    with δ[k] = 1 for k = 0 and δ[k] = 0 otherwise.
126
Wavelet Transform
  • The series expansion is considered complete if
    every signal from l²(Z) can be expressed using the
    expression in Eq. (6.53).
  • Using a set of bi-orthogonal basis functions, the
    series expansion of the signal x[n] can be
    expressed as

      x[n] = Σ_k ⟨φ̃_k, x⟩ φ_k[n]
           = Σ_k ⟨φ_k, x⟩ φ̃_k[n]                  (6.55)

    where φ_k[n] and the dual basis φ̃_k[n] satisfy
    the bi-orthogonality condition
    ⟨φ_k, φ̃_m⟩ = δ[k − m].
127
Wavelet Transform
  • Using quadrature-mirror filter theory, the
    orthonormal bases φ_k(n) can be expressed as
    low-pass and high-pass filters for decomposition
    and reconstruction of a signal.
  • It can be shown that a discrete signal x[n] can be
    decomposed into X[k] as

      X_L[k] = Σ_n h[n − 2k] x[n]   (low-pass filter)
                                                   (6.56)
      X_H[k] = Σ_n g[n − 2k] x[n]   (high-pass filter)

    where h is the scaling (low-pass) filter and g is
    the wavelet (high-pass) filter. (h0 and h1 denote
    the decomposition filters; g0 and g1 denote the
    reconstruction filters.)
128
Wavelet Transform
  • A perfect reconstruction of the signal can be
    obtained if the orthonormal bases are used in the
    decomposition and reconstruction stages as

      x[n] = Σ_k X_L[k] h[n − 2k]
           + Σ_k X_H[k] g[n − 2k]                  (6.57)

  • The scaling function provides the low-pass filter
    coefficients h, and the wavelet function provides
    the high-pass filter coefficients g.
129
Wavelet Transform
  • A multi-resolution signal representation can be
    constructed based on the differences of
    information available at two successive
    resolutions 2^j and 2^(j−1).
  • Decomposing a signal using the wavelet transform:
    • Filter the signal with the scaling function
      (low-pass filter).
    • Subsample the filtered signal by a factor of
      two (scale information).
    • Filter the signal with the wavelet (high-pass
      filter) and subsample by a factor of two
      (detail signal).
  • The difference of information between resolutions
    2^j and 2^(j−1) is called the detail signal at
    resolution 2^j.
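These decomposition steps can be sketched for one level in 1-D; a minimal illustration assuming Haar filters and periodic boundary handling (the helper name analyze_1d is made up for this sketch):

```python
import numpy as np

def analyze_1d(x, h, g):
    """One level of wavelet decomposition: filter with the
    low-pass (h) and high-pass (g) filters, then subsample
    the outputs by a factor of two."""
    n = len(x)
    approx = np.zeros(n // 2)   # scale information
    detail = np.zeros(n // 2)   # detail signal
    for k in range(n // 2):
        for i in range(len(h)):
            approx[k] += h[i] * x[(2 * k + i) % n]
            detail[k] += g[i] * x[(2 * k + i) % n]
    return approx, detail

# Haar scaling (low-pass) and wavelet (high-pass) filters
s = 1 / np.sqrt(2)
a, d = analyze_1d(np.array([4.0, 4.0, 8.0, 8.0]), [s, s], [s, -s])
# the approximation keeps the local averages (scaled by sqrt(2));
# the detail band is zero here because neighboring samples are equal
```

Because the filters are orthonormal, the signal's energy is preserved across the two half-length bands.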

130
Wavelet Transform
Decomposition
Reconstruction
  • Figure 6.19. (a) A multi-resolution signal
    decomposition using the wavelet transform, and
    (b) the reconstruction of the signal from the
    wavelet transform coefficients.

131
Wavelet Transform
  • The signal decomposition at the jth stage can thus
    be generalized as

      a_j[k] = Σ_n h[n − 2k] a_{j+1}[n]
      d_j[k] = Σ_n g[n − 2k] a_{j+1}[n]            (6.58)

    where a_j and d_j are the approximation (scale)
    and detail coefficients at resolution 2^j.
  • To decompose an image, the above method for 1-D
    signals is applied first along the rows of the
    image, and then along the columns.
  • The image at resolution 2^(j+1), represented by
    A_{j+1}, is first low-pass and high-pass filtered
    along the rows.
  • The result of each filtering process is
    subsampled.
  • Next, the subsampled results are low-pass and
    high-pass filtered along each column.
  • The results of these filtering processes are again
    subsampled.
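The row-then-column procedure can be sketched with Haar filters; a minimal one-level illustration (the function names decompose_2d and analyze are made up here, and subband naming conventions vary between texts):

```python
import numpy as np

S = 1 / np.sqrt(2)

def analyze(a, axis):
    """Haar low-pass/high-pass filtering followed by
    subsampling by a factor of two along the given axis."""
    even = np.take(a, range(0, a.shape[axis], 2), axis=axis)
    odd = np.take(a, range(1, a.shape[axis], 2), axis=axis)
    return S * (even + odd), S * (even - odd)

def decompose_2d(img):
    """One decomposition level: filter and subsample along
    the rows first, then along the columns, producing the
    four subsampled bands."""
    lo, hi = analyze(img, axis=1)   # along the rows
    LL, LH = analyze(lo, axis=0)    # then along the columns
    HL, HH = analyze(hi, axis=0)
    return LL, LH, HL, HH

LL, LH, HL, HH = decompose_2d(np.full((4, 4), 3.0))
# a constant image puts all of its energy into the LL band
```

Iterating decompose_2d on the LL band yields the multi-level pyramid of Figure 6.20.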
132
Wavelet Transform
  • Figure 6.20. Multiresolution decomposition of an
    image using the Wavelet transform.

133
Wavelet Transform
  • This scheme can be applied iteratively to an image
    to further decompose the signal into narrower
    frequency bands.
  • Each frequency band can be further decomposed into
    four narrower bands.
  • Since each level of decomposition reduces the
    resolution by a factor of two, the length of the
    filter limits the number of levels of
    decomposition.
  • Daubechies (1992) proposed the least asymmetric
    wavelets:
  • They are computed for different support widths;
    larger support widths provide more regular
    wavelets.
  • See Figure 6.21 and Table 6.1.

134
Wavelet and Scaling Functions
135
Wavelet Transform
  • Table 6.1 Coefficients of the Low-Pass and
    High-Pass Filters for the Least Asymmetric
    Wavelet

 n     Low-Pass           High-Pass
 0    -0.107148901418     0.045570345896
 1    -0.041910965125     0.017824701442
 2     0.703739068656    -0.140317624179
 3     1.136658243408    -0.421234534204
 4     0.421234534204     1.136658243408
 5    -0.140317624179    -0.703739068656
 6    -0.017824701442    -0.041910965125
 7     0.045570345896     0.107148901418
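The two coefficient columns of Table 6.1 can be cross-checked numerically. In the sketch below, h is taken as the column whose entries sum to a nonzero constant (the behavior of a low-pass filter) and g as the column whose entries sum to zero (a high-pass filter); the two are related by the alternating-flip rule, an even-shifted form of Eq. (6.51):

```python
import numpy as np

# scaling (low-pass) filter: entries sum to 2 (sqrt(2)-scaled)
h = np.array([-0.107148901418, -0.041910965125, 0.703739068656,
              1.136658243408, 0.421234534204, -0.140317624179,
              -0.017824701442, 0.045570345896])

# wavelet (high-pass) filter: entries sum to zero
g = np.array([0.045570345896, 0.017824701442, -0.140317624179,
              -0.421234534204, 1.136658243408, -0.703739068656,
              -0.041910965125, 0.107148901418])

# alternating-flip relation: g[n] = (-1)^n h[N-1-n], with N = 8
g_from_h = np.array([(-1) ** n * h[7 - n] for n in range(8)])
```

The check confirms that each tabulated filter is the alternating flip of the other, as required for an orthonormal filter pair.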
136
Wavelet Decomposition Space
137
Image Smoothing and Sharpening Using the Wavelet
Transform
  • The wavelet transform provides a set of
    coefficients representing the localized
    information in a number of frequency bands.
  • A common approach to denoising and smoothing is to
    threshold the coefficients in those bands that
    have a high probability of noise, and then
    reconstruct the image using the reconstruction
    filters (Eq. 6.57).
  • The reconstruction process integrates information
    from specific bands with successive upscaling of
    resolution to provide a final reconstructed image
    at the same resolution as the input image.
  • If certain coefficients related to the noise or
    noise-like information are not included in the
    reconstruction process, the reconstructed image
    shows a reduction of noise and smoothing effects.
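A minimal 1-D sketch of this thresholding idea, assuming Haar filters and a single decomposition level (soft thresholding is used here, and the function names are illustrative):

```python
import numpy as np

def soft_threshold(c, t):
    """Shrink coefficients toward zero; coefficients whose
    magnitude is below t (likely noise) become zero."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise_1d(x, t):
    """One-level Haar decomposition, soft thresholding of the
    detail band, and reconstruction at the input resolution."""
    s = 1 / np.sqrt(2)
    approx = s * (x[0::2] + x[1::2])                  # scale band
    detail = soft_threshold(s * (x[0::2] - x[1::2]), t)
    y = np.empty_like(x)
    y[0::2] = s * (approx + detail)                   # reconstruction
    y[1::2] = s * (approx - detail)
    return y

# a spiky sample is smoothed once its detail coefficient
# falls below the threshold
y = denoise_1d(np.array([2.0, 2.0, 2.0, 6.0]), 3.0)
```

With the threshold set to zero, the orthonormal filter pair reconstructs the input exactly; raising the threshold suppresses small detail coefficients and smooths the signal.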

138
Image Decomposition
Image
139
Image Processing and Enhancement
[Slide figures: an MR image with spike-based
degradation, and MR images reconstructed with and
without the high-high band coefficients.]
140
  • It is difficult to discriminate image features
    from noise based on the spatial distribution of
    gray values alone.
  • A useful distinction between noise and image
    features can be made if some knowledge about the
    processed image features and their behavior is
    known a priori.
  • This implies that some partial image analysis must
    be performed before the image enhancement
    operations are applied.