Title: Edge detection
1. Edge detection
Mainly a summary of chapter 8 in Computer Vision: A Modern Approach, by David A. Forsyth and Jean Ponce, 2003
- IP/CV Seminar
- Veronica Gidén, Nakajima Laboratory
- May 2006
2. What is an edge?
- Points in the image where the brightness changes particularly sharply
- We want edge points to be associated with object boundaries and other kinds of meaningful changes
- However, it is hard to tell a meaningful edge from a nuisance edge
3. A 1-dimensional edge
- We can intuitively say that there should be an edge between the 4th and 5th pixels in the following 1-dimensional data:
- 5 7 6 4 152 148 149
- But where should we set the threshold?
- Edge detection is often non-trivial
4. Why do we want to find edges?
- Sharp changes in image properties usually reflect important events and changes in properties of the world, like
  - discontinuities in depth
  - discontinuities in surface orientation
  - changes in material properties
  - variations in scene illumination
- Edge detection reduces the amount of data, filters out less relevant information, and preserves the important structural properties of an image
5. Example of an application in non-photorealistic rendering
- "Client-Server Visualization of City Models through Non Photorealistic Rendering"
- Jean-Charles Quillet, Gwenola Thomas, Jean-Eudes Marvie
- September 23, 2005
6. Edges and derivatives
- The definition of a derivative: f'(x) = lim_{h→0} [f(x + h) − f(x)] / h
- Thus, since an image is discrete, we can estimate the derivative as a symmetric, finite difference: f'(x) ≈ [f(x + 1) − f(x − 1)] / 2
- Edges are fast changes in intensity, so they have large derivatives!
- Edge detection algorithms generally compute a derivative of the intensity change: differentiation (a sketch follows below)
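A minimal sketch of such a symmetric finite-difference estimate, applied to the 1-dimensional example row from slide 3 (numpy is assumed to be available):

```python
import numpy as np

# The 1-D example data from slide 3.
row = np.array([5, 7, 6, 4, 152, 148, 149], dtype=float)

# Symmetric (central) difference: f'(x) ~= (f(x + 1) - f(x - 1)) / 2
deriv = np.zeros_like(row)
deriv[1:-1] = (row[2:] - row[:-2]) / 2.0

print(deriv)  # the large values sit between the 4th and 5th pixels - the edge
```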
7. Edges and derivatives
8. Problem: Noise
- Image noise is a primary problem, since edge detectors are constructed to respond strongly to sharp changes
- Noise pixels are typically uncorrelated, so neighbouring values can be very different
- Noise IS sharp changes in an image!
- So, we can't just differentiate the image!
9. What is noise?
- Image measurements from which we do not know how to extract information
- Or from which we do not care to extract information
- All the rest is signal
10. The solution: smoothing
- Smooth the image with a filter before the differentiation
- Since differentiation is linear and shift-invariant, there is some filter kernel that differentiates
- We can obtain the differentiated image by convolving I with an appropriate filter K
- First smooth the image, then differentiate it
- Thus, convolve with the derivative of the smoothing filter! (a sketch follows below)
- Most often, we use a Gaussian smoothing filter
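A minimal sketch of "smooth, then differentiate" done in a single convolution with the derivative of a Gaussian (scipy is assumed to be available; the image and sigma below are placeholders):

```python
import numpy as np
from scipy import ndimage

image = np.random.rand(128, 128)  # placeholder image
sigma = 2.0

# order=1 along one axis convolves with the first derivative of the Gaussian,
# order=0 along the other applies plain Gaussian smoothing.
dx = ndimage.gaussian_filter(image, sigma=sigma, order=(0, 1))  # d/dx of the smoothed image
dy = ndimage.gaussian_filter(image, sigma=sigma, order=(1, 0))  # d/dy of the smoothed image
```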
11. Filters and convolution
- A filter is an array in which the values determine what it does
- Spatial filtering: move the filter mask from point to point in an image and calculate the response of the filter at each pixel. This is called convolution.
- 2D discrete convolution: R(x, y) = Σu Σv K(u, v) I(x − u, y − v)
- Or shorter: R = K * I
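A direct (unoptimised) sketch of this definition, with edge padding so the output has the same size as the input (the function name and padding choice are illustrative):

```python
import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    flipped = kernel[::-1, ::-1]  # flipping the kernel makes this convolution, not correlation
    out = np.zeros(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(flipped * padded[y:y + kh, x:x + kw])
    return out
```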
12. Linear, shift-invariant filters
- Linear filters: the response is given by a sum of products of the filter coefficients and the corresponding image pixels in the area spanned by the filter
- For linear filters, the output for the sum of two images is equal to the sum of the outputs for the images separately (see the check below)
- Shift-invariant filters: the value of the output depends on the pattern in an image neighbourhood, not on the position of the neighbourhood
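A quick numerical check of the linearity property, using scipy's convolution and a simple averaging filter (both choices are illustrative):

```python
import numpy as np
from scipy.signal import convolve2d

a = np.random.rand(32, 32)
b = np.random.rand(32, 32)
k = np.ones((3, 3)) / 9.0  # simple averaging filter

lhs = convolve2d(a + b, k, mode="same")
rhs = convolve2d(a, k, mode="same") + convolve2d(b, k, mode="same")
print(np.allclose(lhs, rhs))  # True: filtering the sum equals summing the filtered images
```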
13. Gaussian filters
- The Gaussian distribution is also called the normal distribution
- In the 2D frequency domain
- Pixels where this distribution is non-zero are used to build a convolution matrix, which is applied to the original image
- Each pixel's value is set to a weighted average of that pixel's neighbourhood: smoothing
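A minimal sketch of building such a Gaussian convolution matrix (the kernel size and sigma below are illustrative):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # Sample the 2D Gaussian exp(-(x^2 + y^2) / (2 sigma^2)) on a size x size grid.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()  # normalise so each output pixel is a weighted average

kernel = gaussian_kernel(5, 1.0)
# Convolving an image with this kernel replaces every pixel by a weighted
# average of its neighbourhood, i.e. Gaussian smoothing.
```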
14. Why does smoothing help?
- E.g. the contours of an object form a long chain of points where the image derivative is large
- Large derivatives due to noise tend to be a local event
- Smoothing tends to suppress noise, but not really the changes we are actually interested in (edges!)
15. Why use a Gaussian smoothing filter?
- Convolving a Gaussian with a Gaussian results in another Gaussian
- We can get heavily smoothed images by resmoothing already smoothed images (see the check below)
- This is useful, since discrete convolution can be expensive and we often want differently smoothed versions of an image
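A quick numerical check of this property, assuming scipy: smoothing with sigma1 and then with sigma2 matches a single smoothing with sqrt(sigma1^2 + sigma2^2), up to discretisation and boundary effects.

```python
import numpy as np
from scipy import ndimage

image = np.random.rand(128, 128)

twice = ndimage.gaussian_filter(ndimage.gaussian_filter(image, 1.5), 2.0)
once = ndimage.gaussian_filter(image, np.sqrt(1.5**2 + 2.0**2))

print(np.abs(twice - once).max())  # small: the two results agree closely
```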
16. Why use a Gaussian smoothing filter?
- When convolving with a kernel with a very small standard deviation, the values are very small everywhere except close to the center, so we can use a small array!
- For a large standard deviation we would need a big array, but we prefer to smooth repeatedly with a much smaller array
- A smoothed image is redundant, and therefore some pixels can be discarded
- We can smooth, subsample, and so on (a sketch follows below)
- This results in an image that has the same information as the heavily smoothed image but is much smaller and easier to obtain
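A minimal sketch of one smooth-and-subsample step (one level of a Gaussian pyramid), assuming scipy; the sigma and image are placeholders:

```python
import numpy as np
from scipy import ndimage

def smooth_and_subsample(image, sigma=1.0):
    smoothed = ndimage.gaussian_filter(image, sigma=sigma)
    return smoothed[::2, ::2]  # keep every second pixel in each direction

image = np.random.rand(256, 256)
small = smooth_and_subsample(image)  # roughly the same coarse information, a quarter of the pixels
# Repeating this gives increasingly smoothed, increasingly small versions of the image.
```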
17. Why use a Gaussian smoothing filter?
- Gaussians are separable kernels
- Convolving with the 2D kernel gives the same result as convolving with two 1D kernels (one in the x and one in the y direction)
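A quick check of separability, assuming scipy: 2D Gaussian smoothing equals 1D Gaussian smoothing along one axis followed by 1D smoothing along the other.

```python
import numpy as np
from scipy import ndimage

image = np.random.rand(128, 128)
sigma = 2.0

full_2d = ndimage.gaussian_filter(image, sigma)
separated = ndimage.gaussian_filter1d(
    ndimage.gaussian_filter1d(image, sigma, axis=0), sigma, axis=1)

print(np.abs(full_2d - separated).max())  # essentially zero
```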
18. Main strategies for detecting edges
- Two main strategies, and both model edges as fast changes in brightness
- 1) Zero-crossing-based
  - Look for zero crossings in the second derivative of the image
- 2) Search-based
  - Detect edges by searching for maxima and minima in the first derivative of the image
19. Zero-crossing-based: Using the Laplacian to detect edges
- In 1D, the magnitude of the 2nd derivative is 0 where the magnitude of the 1st derivative is extremal: edges!
- Extending this to 2D, we have the Laplacian operator: ∇²f = ∂²f/∂x² + ∂²f/∂y²
- It is linear and rotationally invariant
- Smoothing and applying the Laplacian can be combined into a single filter
- Method: Convolve the image with the Laplacian of a Gaussian (LoG), then mark the points where the result is 0 (a sketch follows below)
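A minimal sketch of this zero-crossing approach, assuming scipy; the sign-change test below is one simple way of marking zero crossings:

```python
import numpy as np
from scipy import ndimage

def log_edges(image, sigma=2.0):
    # Convolve with the Laplacian of a Gaussian.
    log = ndimage.gaussian_laplace(image, sigma=sigma)
    # Mark pixels where the LoG changes sign against a neighbour (a zero crossing).
    edges = np.zeros(image.shape, dtype=bool)
    edges[:-1, :] |= (log[:-1, :] * log[1:, :]) < 0  # vertical neighbours
    edges[:, :-1] |= (log[:, :-1] * log[:, 1:]) < 0  # horizontal neighbours
    return edges
```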
20. Results
21. Usage, advantages and drawbacks
- Usage: Adding some percentage of the result back to the image gives a picture with sharpened edges, where details are more easily seen
- Advantage: We get thin edges
- Drawbacks:
  - The Laplacian of a Gaussian filter is not oriented, so it behaves strangely at corners
  - "Spaghetti" loops
  - Due to noise, not all zero crossings lie on an edge
22. Search-based: Gradient-based edge detectors
- More frequently used than zero-crossing-based detectors
- Compute some estimate of the gradient magnitude and use it to find edge points
- Problem: This gives thicker edges than the Laplacian
- To solve this, we look for points where the gradient magnitude reaches a maximum along the direction perpendicular to the edge. We estimate this direction using the direction of the gradient.
23. The gradient
- First derivatives in image processing are implemented using the magnitude of the gradient
- For a function f(x, y), the gradient of f at coordinate (x, y) is defined as the 2D vector ∇f = (Gx, Gy) = (∂f/∂x, ∂f/∂y)
- The magnitude is the length of this vector: |∇f| = √(Gx² + Gy²)
- It is usually approximated as |Gx| + |Gy|
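A minimal sketch of both forms of the gradient magnitude, using numpy's central-difference gradient as the derivative estimate (other estimates, e.g. Sobel filters, would also work):

```python
import numpy as np

def gradient_magnitude(image):
    gy, gx = np.gradient(image.astype(float))  # d/dy (rows), d/dx (columns)
    exact = np.sqrt(gx**2 + gy**2)              # |grad f|
    approx = np.abs(gx) + np.abs(gy)            # cheaper, commonly used approximation
    return exact, approx
```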
24. Gradient-based edge detectors
25. Gradient-based edge detectors
- Algorithm for gradient-based edge detectors (the first two steps are sketched below):
  - Form an estimate of the image gradient
  - Obtain the gradient magnitude from this estimate
  - Identify image points where the value of the gradient magnitude is maximal in the direction perpendicular to the edge and also large; these points are edge points
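A minimal sketch of the first two steps, assuming scipy and using derivative-of-Gaussian filters as the gradient estimate:

```python
import numpy as np
from scipy import ndimage

def gradient_estimate(image, sigma=1.0):
    gx = ndimage.gaussian_filter(image, sigma, order=(0, 1))  # smoothed d/dx
    gy = ndimage.gaussian_filter(image, sigma, order=(1, 0))  # smoothed d/dy
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)  # gradient direction, perpendicular to the edge
    return magnitude, direction

# The third step - keeping only maxima along the gradient direction - is the
# non-maximum suppression described on the following slides.
```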
26. Non-maximum suppression and edge-following
- Selecting local maxima of the gradient magnitude, we get isolated points, but we want an edge!
- Select the point with the maximum gradient magnitude along the gradient direction to get a chain
- We expect edge points to occur along curve-like chains
- The important steps (the first is sketched below):
  - Determine whether a given point is an edge point
  - If it is, find the next edge point
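A minimal sketch of non-maximum suppression with the gradient direction quantised to four orientations; it assumes magnitude and direction as returned by the gradient_estimate sketch above (direction = arctan2(gy, gx)):

```python
import numpy as np

def non_maximum_suppression(magnitude, direction):
    h, w = magnitude.shape
    out = np.zeros_like(magnitude)
    angle = np.rad2deg(direction) % 180.0  # fold the angle into [0, 180)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = angle[y, x]
            if a < 22.5 or a >= 157.5:    # gradient roughly horizontal
                n1, n2 = magnitude[y, x - 1], magnitude[y, x + 1]
            elif a < 67.5:                 # roughly 45 degrees
                n1, n2 = magnitude[y - 1, x - 1], magnitude[y + 1, x + 1]
            elif a < 112.5:                # roughly vertical
                n1, n2 = magnitude[y - 1, x], magnitude[y + 1, x]
            else:                          # roughly 135 degrees
                n1, n2 = magnitude[y - 1, x + 1], magnitude[y + 1, x - 1]
            # Keep the pixel only if it is a maximum along the gradient direction.
            if magnitude[y, x] >= n1 and magnitude[y, x] >= n2:
                out[y, x] = magnitude[y, x]
    return out
```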
27. Non-maximum suppression
28. Hysteresis
- Problem: We get too many curves, not just object boundaries
- Reason: We mark maxima without regard to how big they are
- Solution: Use a threshold test, so that only maxima > threshold are kept
- New problem: We get broken edge curves
- Solution: Use two thresholds, the larger when starting an edge chain and the smaller while following it: hysteresis (a sketch follows below)
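A minimal sketch of hysteresis thresholding, assuming scipy: chains may only start at pixels above the high threshold, but weaker pixels connected to them are kept as long as they stay above the low threshold.

```python
import numpy as np
from scipy import ndimage

def hysteresis(nms_magnitude, low, high):
    strong = nms_magnitude >= high
    weak = nms_magnitude >= low
    # Label 8-connected regions of the weak map and keep those containing a strong pixel.
    labels, n = ndimage.label(weak, structure=np.ones((3, 3)))
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False  # label 0 is the background
    return keep[labels]
```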
29. Problems
[Figure: edge detection results for different choices of the Gaussian standard deviation and of the threshold: σ = 1 pixel with a high threshold, σ = 4 pixels with a high threshold, and σ = 4 pixels with a low threshold]
30. Problem: corners
- Edge detectors fail at corners
- The partial derivatives can't describe oriented corners, since two edges cross there
- Special corner detectors look for neighbourhoods where the gradient swings sharply
31. Problem: object boundaries
- Object boundaries are NOT the same as sharp changes in image values!
- Objects may not have a strong contrast with their backgrounds
- Textures may generate edges of their own
- Shadows may generate edges
- The solution:
  - Control the illumination
  - Use large smoothing parameters and high contrast thresholds
32. Thank you for listening!