Title: ASAM Image Processing 2008/2009
1. ASAM - Image Processing 2008/2009
- Lecture 21
- Revision lecture III
Ioannis Ivrissimtzis, 13-May-2009
2. Revision questions lecture 12
- i. Describe the method of intensity slicing for pseudo-colouring.
- ii. Describe gray to colour transformations and image filtering for pseudo-colouring.
- iii. Describe colour transformations and give one example for each of the RGB, CMY and HSI colour spaces.
3. Intensity slicing
- A simple way to produce a pseudo-colouring is intensity slicing.
- We consider the intensity levels l_0 < l_1 < ... < l_M, where l_0 = 0 corresponds to black and l_M = L-1 to white.
- We use the M colours c_1, c_2, ..., c_M.
4. Intensity slicing
- A pixel with intensity s is assigned the colour c_k if l_(k-1) < s <= l_k.
- The intensity l_0 = 0 is assigned the colour c_1.
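A minimal numpy sketch of intensity slicing, assuming a uint8 image, evenly spaced slicing levels and an illustrative eight-colour palette (both choices are assumptions, not the lecture's):

```python
import numpy as np

def intensity_slice(gray, colours):
    """Pseudo-colour a grayscale image by intensity slicing.

    gray:    2-D uint8 array with intensities in [0, 255].
    colours: (M, 3) uint8 array; colour c_k is used for the k-th slice.
    """
    M = len(colours)
    # M - 1 interior levels l_1 < ... < l_(M-1) split [0, 255] into M slices.
    levels = np.linspace(0, 255, M + 1)[1:-1]
    k = np.digitize(gray, levels)        # slice index of every pixel
    return np.asarray(colours)[k]        # (H, W, 3) pseudo-coloured image

# Example: slicing into eight colour regions, as on the next slide.
palette = np.array([[0, 0, 128], [0, 0, 255], [0, 255, 255], [0, 255, 0],
                    [255, 255, 0], [255, 128, 0], [255, 0, 0],
                    [255, 255, 255]], dtype=np.uint8)
ramp = np.repeat(np.arange(256, dtype=np.uint8)[None, :], 32, axis=0)
rgb = intensity_slice(ramp, palette)
```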
5. Example of intensity slicing
[Figures: monochrome image of the Picker Thyroid Phantom; results of intensity slicing into eight colour regions]
Images from Gonzalez and Woods
6. Gray to colour transformations
- Gray level to colour transformations are defined by three functions from the set of gray levels to the sets of intensities of the red, green and blue bands respectively.
- This transformation can be seen as consisting of three independent intensity transformations, as described in Lecture 2 of this course.
7. Gray to colour transformations
- The three intensity transformations T_R(s), T_G(s), T_B(s) describe a gray level to colour transformation.
- A pixel with intensity s gets the RGB colour (T_R(s), T_G(s), T_B(s)).
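A minimal sketch of such a transformation. The phase-shifted sinusoids are one classic textbook choice of transfer functions, used here only as an illustration:

```python
import numpy as np

def gray_to_colour(gray, t_r, t_g, t_b):
    """Stack three independent intensity transformations of a grayscale
    image (values in [0, 1]) as the R, G and B bands."""
    return np.stack([t_r(gray), t_g(gray), t_b(gray)], axis=-1)

# Illustrative transfer functions: phase-shifted sinusoids.
gray = np.repeat(np.linspace(0.0, 1.0, 256)[None, :], 32, axis=0)  # test ramp
rgb = gray_to_colour(
    gray,
    t_r=lambda s: 0.5 * (1.0 + np.sin(2 * np.pi * s)),
    t_g=lambda s: 0.5 * (1.0 + np.sin(2 * np.pi * s + 2 * np.pi / 3)),
    t_b=lambda s: 0.5 * (1.0 + np.sin(2 * np.pi * s + 4 * np.pi / 3)),
)
```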
8. Image filtering for pseudo-colouring
- In a different approach to pseudo-colouring, we compute three independent transformations of the image rather than of the gray scale.
- For a grayscale image f we compute its transforms f_R, f_G, f_B.
- The pseudo-coloured image is written in the RGB space as (f_R(x,y), f_G(x,y), f_B(x,y)).
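A sketch of this approach. The particular filter choices (high frequencies boosted into red, the original into green, a lowpass into blue) are assumptions made for illustration, not the lecture's exact transforms:

```python
import numpy as np
from scipy import ndimage

def filter_pseudocolour(f, sigma=3.0):
    """Pseudo-colouring by image filtering: three independent transforms
    of the whole image become the R, G and B bands (f in [0, 1])."""
    low = ndimage.gaussian_filter(f, sigma=sigma)    # lowpass transform f_B
    high = np.clip(f + (f - low), 0.0, 1.0)          # highpass-boosted f_R
    return np.stack([high, f, low], axis=-1)         # (f_R, f_G, f_B)
```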
9. Example
- Notice that in the pseudo-coloured image the outer ring of Saturn is much more visible.
[Figures: original grayscale image; the red band with the high frequency information; pseudo-coloured image]
10. Colour transformations
- Similarly to the intensity transformations for grayscale images and the gray to colour transformations for pseudo-colouring, we have colour transformations.
- In the RGB space for example, under a colour transformation each RGB colour (r, g, b) is mapped to a colour (T_R(r,g,b), T_G(r,g,b), T_B(r,g,b)).
11. RGB example
- The colour transformation (r, g, b) -> (1-r, 1-g, 1-b), for colour values in [0, 1],
- inverts the colours in a way reminiscent of the negatives of colour films.
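A direct transcription of this map, assuming colour values in [0, 1]:

```python
import numpy as np

def rgb_negative(rgb):
    """Map each colour (r, g, b) in [0, 1] to (1 - r, 1 - g, 1 - b)."""
    return 1.0 - np.asarray(rgb, dtype=float)
```

For uint8 images the equivalent map is simply 255 - img.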
12RGB example
Original image
RGB negative
13. CMY example
- Visual inspection of the image shows an excess of magenta.
- To balance the colour we convert it to the CMY space and transform the magenta component (see the sketch below).
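A minimal sketch of this correction, assuming colour values in [0, 1] and a simple linear attenuation standing in for the transformation curve on the next slide:

```python
import numpy as np

def reduce_magenta(rgb, gain=0.8):
    """Correct an excess of magenta: convert RGB (in [0, 1]) to CMY,
    attenuate the magenta component, convert back.  The linear gain
    is an illustrative stand-in for the slide's transfer function."""
    cmy = 1.0 - np.asarray(rgb, dtype=float)   # CMY is the complement of RGB
    cmy[..., 1] *= gain                        # M is the second CMY component
    return 1.0 - cmy
```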
14. CMY example
[Figures: transformation function of the magenta component; original image, heavy on magenta; corrected image]
15. HSI example
- We want to brighten the image using histogram equalisation.
- Histogram equalisation on each RGB component would change the hues and the processed image would look unnatural.
16. HSI example
- Instead we:
- Convert the image to the HSI space.
- Transform the intensity component by applying histogram equalisation on it (see the sketch after this list).
- In addition, transform the saturation component to get less saturated colours.
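A numpy sketch under two assumptions: the HSI intensity is taken as I = (R + G + B) / 3, and a simple blend towards the equalised intensity stands in for the slide's saturation curve (the sat_gain parameter is illustrative). Scaling all three channels by I_new / I_old changes only I and leaves hue and saturation untouched:

```python
import numpy as np

def equalise_intensity(rgb, sat_gain=0.8, bins=256):
    """Brighten an RGB image (values in [0, 1]) by histogram
    equalisation of the HSI intensity, then mildly desaturate."""
    i = rgb.mean(axis=-1)                       # HSI intensity per pixel
    hist, edges = np.histogram(i, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()            # equalisation transfer curve
    i_new = np.interp(i, edges[:-1], cdf)       # equalised intensity
    scale = np.where(i > 0, i_new / np.maximum(i, 1e-8), 0.0)
    out = rgb * scale[..., None]                # intensity transformed only
    # Blending towards the (gray) intensity reduces saturation.
    out = i_new[..., None] + sat_gain * (out - i_new[..., None])
    return np.clip(out, 0.0, 1.0)
```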
17. HSI example
[Figures: transformation function of the saturation component; original image; processed image]
The transformation function of the intensity component was computed by applying histogram equalisation on it.
18. Revision questions lecture 13
- i. Describe the morphological operations dilation, erosion, opening and closing for binary images.
- ii. Describe how binary morphology can be used for boundary extraction.
19. Revision questions lecture 13
- iii. Find the number of ones in the binary image I after dilation, erosion, opening and closing with the structuring element S.
20. Structuring elements
- Similarly to the case of spatial linear filtering, a binary image can be processed by another binary image called a structuring element.
- A typical structuring element has few 1-valued pixels, and one of its pixels is designated as its origin.
- In an analogy between spatial linear filtering and morphological image processing, the structuring element can be seen as the equivalent of the mask and its origin as the equivalent of the centre of a mask.
21. Dilation
- Let A be a binary image and B a structuring element.
- The dilation of A by B is the set consisting of all the locations of the origin of B where the translated B overlaps at least some portion of A.
- The mechanics of dilation are similar to those of spatial linear filtering. We translate B to all possible positions and, if there is some overlap with A, the location of the origin belongs to the dilation.
22. Erosion
- Let A be a binary image and B a structuring element.
- The erosion of A by B is the set consisting of all the locations of the origin of B where the translated B fits entirely within A.
- The mechanics of erosion are similar to those of dilation. We translate B to all possible positions and, if it fits entirely within A, the location of the origin belongs to the erosion (see the sketch below).
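The following scipy.ndimage sketch illustrates both definitions on a toy image (the arrays A and S here are illustrative, not the exercise's I and S):

```python
import numpy as np
from scipy import ndimage

A = np.zeros((7, 7), dtype=bool)
A[2:5, 2:5] = True                        # a 3x3 block of ones
S = np.array([[0, 1, 0],                  # cross-shaped structuring element,
              [1, 1, 1],                  # origin at its centre
              [0, 1, 0]], dtype=bool)

# Dilation: origins where the translated S overlaps some portion of A.
dil = ndimage.binary_dilation(A, structure=S)
# Erosion: origins where the translated S fits entirely within A.
ero = ndimage.binary_erosion(A, structure=S)
print(dil.sum(), ero.sum())               # -> 21 1 one-valued pixels
```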
23. Opening
- In practical applications, dilations and erosions are most often used in various combinations.
- The opening of the binary image A by the structuring element B is the erosion of A by B, followed by dilation of the result by B.
24. Closing
- The closing of the binary image A by the structuring element B is the dilation of A by B, followed by erosion of the result by B.
- We can show that the closing of A by B is the complement of the union of all translations of B that do not overlap A.
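Opening and closing are the two compositions of the previous operations; a sketch on the same toy image as before:

```python
import numpy as np
from scipy import ndimage

A = np.zeros((7, 7), dtype=bool); A[2:5, 2:5] = True          # toy image
S = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool)   # cross element

opened = ndimage.binary_dilation(ndimage.binary_erosion(A, S), S)  # opening
closed = ndimage.binary_erosion(ndimage.binary_dilation(A, S), S)  # closing
print(opened.sum(), closed.sum())         # -> 5 9 on this toy image
# scipy also provides ndimage.binary_opening and ndimage.binary_closing.
```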
25. Boundary extraction
- Let A ⊕ B denote the dilation of A by B and let A ⊖ B denote the erosion of A by B.
- The boundary of A can be computed as
- A - (A ⊖ B)
- where B is a 3x3 square structuring element.
- That is, we subtract from A an erosion of A and obtain its boundary.
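A direct transcription of this formula, assuming boolean numpy images:

```python
import numpy as np
from scipy import ndimage

def boundary(A):
    """Boundary extraction: A minus the erosion of A by a 3x3 square."""
    square = np.ones((3, 3), dtype=bool)
    return A & ~ndimage.binary_erosion(A, structure=square)
```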
26. Exercise
- After dilation the image has 44 one-valued pixels.
27. Exercise
- After erosion the image has 8 one-valued pixels.
28. Exercise
- After opening the image has 20 one-valued pixels.
29. Exercise
- After closing the image has 24 one-valued pixels.
30. Revision questions lecture 14
- i. Describe the use of the following masks for point and line detection.
- ii. Describe edge detection based on the Laplacian of the Gaussian.
31. Point detection
- For the detection of isolated points we can use the mask
    -1 -1 -1
    -1  8 -1
    -1 -1 -1
- It is a form of the Laplacian with the diagonal directions included.
32. Point detection
- We say that a point has been detected at the location (x,y) if the absolute value of the response R is above a threshold T.
- If the detected points are labelled 1 and all others are labelled 0, we can visualise the results of point detection by a binary image g(x,y) = 1 if |R(x,y)| >= T, and 0 otherwise.
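A minimal sketch of point detection with this mask, using scipy.ndimage (the threshold value would be chosen per image):

```python
import numpy as np
from scipy import ndimage

# Laplacian-type point detection mask with the diagonals included.
mask = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]])

def detect_points(f, T):
    """Return the binary image g: 1 where |response| >= T, else 0."""
    R = ndimage.convolve(f.astype(float), mask)
    return (np.abs(R) >= T).astype(np.uint8)
```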
33. Line detection
- A line is an edge segment in which the intensity of the background on either side is either much higher or much lower than the intensity of the line pixels.
- The Laplacian mask of the previous slide can be used for line detection.
34. Line detection
- The previous mask is isotropic: it does not have a preference for any particular direction.
- Anisotropic masks can be used to detect lines in specified directions:
    Horizontal:  -1 -1 -1    +45°:  -1 -1  2
                  2  2  2           -1  2 -1
                 -1 -1 -1            2 -1 -1

    Vertical:    -1  2 -1    -45°:   2 -1 -1
                 -1  2 -1           -1  2 -1
                 -1  2 -1           -1 -1  2
35. Line detection
- Let R1, R2, R3, R4 be the responses of the four masks centred on (x,y).
- The pixel (x,y) is said to be more likely associated with the direction of the mask with the highest response in absolute value.
- For example, if |R1| > |Rj| for j = 2, 3, 4, we say that (x,y) is more likely associated with the horizontal direction.
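A sketch that computes the four responses and, for every pixel, the index of the direction with the largest absolute response:

```python
import numpy as np
from scipy import ndimage

# Directional line masks: horizontal, +45°, vertical, -45°.
masks = [
    np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),  # horizontal
    np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),  # +45 degrees
    np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),  # vertical
    np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),  # -45 degrees
]

def dominant_direction(f):
    """For every pixel, the index (0-3) of the mask whose response
    R_k has the largest absolute value |R_k|."""
    R = np.stack([ndimage.convolve(f.astype(float), m) for m in masks])
    return np.argmax(np.abs(R), axis=0)
```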
36. Laplacian of Gaussian
- Edge detectors must be able to cope with noise. They must also be able to act at any scale.
- The Marr-Hildreth edge detector solves these two problems by first applying a Gaussian mask on the image, and then a Laplacian.
- The size of the Gaussian mask depends on the scale of the edges we want to detect.
37. Laplacian of Gaussian
- The Laplacian of the Gaussian of an image can be efficiently implemented using a single mask, computed as the convolution of a Gaussian mask with a Laplacian mask.
- An example of a Laplacian of Gaussian mask is
     0  0 -1  0  0
     0 -1 -2 -1  0
    -1 -2 16 -2 -1
     0 -1 -2 -1  0
     0  0 -1  0  0
38. Laplacian of Gaussian
- The final step is to compute the zero-crossings, that is, the pixels where the Laplacian of the Gaussian changes sign.
- Typically, we define a binary image with 0s corresponding to negative pixel values and 1s to positive ones.
- Then we compute the boundary of the binary image, e.g. with morphological operators.
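A compact sketch of the whole pipeline, using scipy's combined Laplacian-of-Gaussian filter and the morphological boundary of the sign image as the zero-crossing detector (sigma is an illustrative scale parameter):

```python
import numpy as np
from scipy import ndimage

def marr_hildreth(f, sigma=2.0):
    """Marr-Hildreth edges: Laplacian of Gaussian, then zero-crossings
    found as the boundary of the binary sign image."""
    log = ndimage.gaussian_laplace(f.astype(float), sigma=sigma)
    pos = log > 0                        # 1 for positive, 0 for negative
    # Zero-crossings = morphological boundary of the binary image.
    return pos & ~ndimage.binary_erosion(pos, structure=np.ones((3, 3)))
```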
39. Revision questions lecture 15
- i. Use the Sobel masks Sx, Sy to compute the gradient at the centre of the image I.
- ii. Describe the Canny edge detector.
40Exercise
The first order difference in the
x-direction (vertical) is the response the Sobel
mask
The first order difference in the
y-direction (horizontal) is the response of the
Sobel mask
41. Exercise
- The gradient at the centre of I is the vector of the two Sobel responses, (gx, gy), with magnitude sqrt(gx^2 + gy^2).
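Since the exercise image did not survive, here is a generic sketch of the computation with the masks above, keeping the slide's convention that the x-axis runs vertically:

```python
import numpy as np
from scipy import ndimage

Sx = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])  # x (vertical) difference
Sy = Sx.T                                            # y (horizontal) difference

def gradient(f):
    """Sobel gradient: magnitude M(x,y) and angle a(x,y) arrays."""
    gx = ndimage.convolve(f.astype(float), Sx)
    gy = ndimage.convolve(f.astype(float), Sy)
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```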
42. Canny edge detector
- The Canny edge detection algorithm consists of the following steps:
- Smooth the input image with a Gaussian filter.
- Compute the gradient magnitude and angle.
- Apply non-maxima suppression to the gradient magnitude.
- Use double thresholding and connectivity analysis to detect and link edges (an off-the-shelf call is shown below).
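The individual steps are detailed on the next slides; for reference, the whole pipeline is available off the shelf, e.g. in scikit-image (the image and parameter values here are arbitrary):

```python
import numpy as np
from skimage import feature

image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0                 # toy test image: a bright square
edges = feature.canny(image, sigma=2.0,
                      low_threshold=0.1, high_threshold=0.3)
```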
43. Non-maxima suppression
- In the array (image) of the gradient magnitude, the edges of the original image are represented by wide ridges. See the previous lecture.
- The non-maxima suppression step thins these ridges by keeping only the locally maximum values of the gradient magnitude.
44. Non-maxima suppression
- Non-maxima suppression first quantizes the gradient angle into the four directions of a pixel's neighbourhood.
- For each pixel, we find the sector of the gradient angle and assign to the pixel one of the four directions: horizontal, +45°, vertical, or -45°.
45. Non-maxima suppression
- The second step of non-maxima suppression sets the gradient magnitude of a pixel to zero if it is smaller than the gradient magnitude of either of the two neighbouring pixels in the quantized gradient direction.
- For example, if the quantized gradient direction is horizontal, then we compare M(x,y) with the gradient magnitude values at its left and its right, and set it to zero if it is smaller than either of them.
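A compact sketch of both steps. For brevity it uses np.roll, which wraps at the image borders; a careful implementation would pad instead:

```python
import numpy as np

def nonmaxima_suppression(mag, ang):
    """Quantize the gradient angle into 4 directions and zero out every
    pixel whose magnitude is smaller than either neighbour along its
    quantized direction.  ang is in radians."""
    # Neighbour offsets for the horizontal, +45°, vertical, -45° sectors.
    offsets = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
    deg = np.rad2deg(ang) % 180.0
    sector = np.floor((deg + 22.5) / 45.0).astype(int) % 4
    out = mag.copy()
    for s, (dy, dx) in offsets.items():
        n1 = np.roll(mag, ( dy,  dx), axis=(0, 1))   # neighbour one way
        n2 = np.roll(mag, (-dy, -dx), axis=(0, 1))   # neighbour the other way
        suppress = (sector == s) & ((mag < n1) | (mag < n2))
        out[suppress] = 0.0
    return out
```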
46. Double thresholding
- We use a high threshold TH and a low threshold TL for the gradient magnitude values.
- The edges are found by a tracking algorithm starting from pixels above the high threshold and stopping at pixels below the low threshold.
47. Edge tracking
- The edge tracking algorithm:
- Start a new edge from a pixel above the high threshold that has not already been visited.
- Track the edge, following the quantized gradient angle in both directions, and mark all pixels above the low threshold as edges. The edge tracking stops at pixels below the low threshold.
- Repeat until all pixels above the high threshold have been visited.
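A common simplification of this step keeps every above-low pixel whose connected component contains an above-high pixel, instead of explicitly following the quantized angle; a sketch:

```python
import numpy as np
from scipy import ndimage

def hysteresis(mag, low, high):
    """Double thresholding with connectivity analysis (8-connectivity).
    Assumes high > low, so strong pixels are a subset of weak ones.
    This follows connected ridges rather than the quantized gradient
    angle, a common simplification of the tracking step."""
    weak = mag > low
    strong = mag > high
    labels, _ = ndimage.label(weak, structure=np.ones((3, 3)))
    keep = np.unique(labels[strong])      # components with a strong pixel
    return np.isin(labels, keep) & weak
```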
48. Example
[Figures: original image; thresholded gradient of the smoothed image; result of the Canny algorithm]
49. Revision questions lecture 16
- i. Describe a local edge linking algorithm for detecting edges in the horizontal and vertical direction.
50. Edge linking algorithm
- Compute the gradient magnitude and angle arrays, M(x,y) and a(x,y), of the input image.
- Form a binary image g whose value at any pair of coordinates (x,y) is given by g(x,y) = 1 if M(x,y) > TM and |a(x,y) - A| <= TA, and 0 otherwise,
- where TM is a threshold used for edge detection, A is a specified angle direction, and TA defines the band of acceptable directions about A.
51. Edge linking algorithm
- Scan the rows of g and fill (set to 1) all gaps (sets of 0s) in each row that do not exceed a specified length K.
- By definition, a gap is bounded at both ends by one or more 1s.
- Step 3 of the algorithm links edges in the horizontal direction.
- To link edges in any other direction θ, we rotate g by θ and apply Steps 2-3. We then rotate the result back by -θ.
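A sketch of Steps 1-3 (thresholding plus horizontal gap filling); angles are in degrees, and the rotation for other directions (e.g. via scipy.ndimage.rotate) is omitted:

```python
import numpy as np

def link_edges(mag, ang_deg, TM, A, TA, K):
    """Local edge linking: threshold on gradient magnitude and angle,
    then fill row gaps of at most K zeros bounded by 1s on both sides."""
    diff = (ang_deg - A + 180.0) % 360.0 - 180.0   # wrapped angle difference
    g = ((mag > TM) & (np.abs(diff) <= TA)).astype(np.uint8)
    for row in g:                                  # scan the rows of g
        ones = np.flatnonzero(row)                 # positions of the 1s
        for a, b in zip(ones[:-1], ones[1:]):      # consecutive 1s
            if 1 < b - a <= K + 1:                 # gap of b - a - 1 zeros
                row[a:b] = 1                       # fill the gap
    return g
```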
52. Local edge linking algorithm
[Figures: gradient magnitude; horizontal edge linking; vertical edge linking]
For the horizontal edge linking, TM was set at 30% of the maximum gradient value, A = 90° because the gradient direction is perpendicular to the edge, and TA = 45°. K = 25, that is, we fill gaps of 25 or fewer pixels.
53. Edge linking algorithm
[Figures: the logical OR of the horizontal and vertical edge linking; the final result after morphological post-processing]